Coverage for estats/map.py: 74%
1# Copyright (C) 2019 ETH Zurich,
2# Institute for Particle Physics and Astrophysics
3# Author: Dominik Zuercher
5import numpy as np
6import healpy as hp
7import copy
8import os
10from ekit import context
11from estats import utils
12from ekit import paths
13from ekit import logger
15LOGGER = logger.init_logger(__name__)
18class map:
19 """
20 The map module handles shear and convergence maps and calculates summary
21 statistics from them.
23 The summary statistics are defined via plugins that are located in the
24 estats.stats folder. This allows users to easily add their own summary
25 statistic without having to modify the internals of the code.
26 See the :ref:`own_stat` Section to learn how to do that.
28 The summary statistics can be calculated from the shear maps, the
29 convergence maps or from smoothed convergence maps (multiscale approach
30 for extraction of non-Gaussian features).
32 If only one set of weak lensing maps is given the statistics will be
33 calculated from that set directly. If two sets are given and the name of
34 the statistic to be computed contains the word Cross both sets are passed
35 to the statistics plugin. This can be used to calculate cross-correlations
36 between maps from different tomographic bins for example.
37 In the case of a multiscale statistic the maps are combined into a
38 cross-correlated map.
39 If the statistic to compute contains the word Full, all maps are passed to the statistics plugin.
41 The most important functionalities are:
43 - convert_convergence_to_shear:
45 Uses spherical Kaiser-Squires to convert the internal E-mode
46 convergence map to shear maps. The maps are masked using the internal
47 masks. By default the trimmed mask is used to allow the user to
48 disregard pixels where the spherical harmonics transformation
49 introduced large errors.
51 - convert_shear_to_convergence:
53 Uses spherical Kaiser-Squires to convert the internal shear maps to
54 convergence maps. The maps are masked using the internal masks.
55 By default the trimmed mask is used to allow the user to disregard
56 pixels where the spherical harmonics transformation introduced large
57 errors.
59 - smooth_maps:
61 Applies a Gaussian smoothing kernel to all internal convergence maps
62 (E- and B-modes). The fwhm parameter sets the FWHM of the
63 Gaussian smoothing kernel, given in arcmin.
65 - calc_summary_stats:
67 Main functionality of the module. Allows the use of statistics plugins
68 located in the estats.stats folder to calculate map based statistics.
70 See the :ref:`own_stat` Section to learn what the statistics plugins
71 look like and how to make your own.
73 The summary statistics can be calculated from the shear maps, the
74 convergence maps or from smoothed convergence maps (multiscale
75 approach for extraction of non-Gaussian features).
77 Instead of using the internal masks for masking, extra masks can be
78 used. This allows the use of maps containing multiple survey cutouts,
79 selecting a different cutout each time the function is called.
81 If use_shear_maps is set to True the function will convert the shear
82 maps into convergence maps using spherical Kaiser-Squires instead of
83 using the convergence maps directly.
85 If copy_obj is set to False, no copies of the internal maps are made.
86 This can save RAM but it also leads to the internal maps being
87 overwritten. If you wish to use the internal maps after the function
88 call set this to True!
90 By default the function returns the calculated statistics in a
91 dictionary. But if write_to_file is set to True it will append to
92 files that are defined using the defined_parameters,
93 undefined_parameters, output_dir and name arguments using ekit.paths.
95 The accepted keywords are:
97 - NSIDE:
99 default: 1024
101 choices: an integer being a power of 2
103 The Healpix resolution that is used to produce the map products.
105 - scales:
107 default:
108 [31.6, 29.0, 26.4, 23.7, 21.1, 18.5, 15.8, 13.2, 10.5, 7.9, 5.3, 2.6]
110 choices: A list of floats indicating the FWHM of the Gaussian
111 smoothing kernels to be applied in arcmin.
113 For summary statistics that are of the multi type (see :ref:`own_stat`)
114 the summary statistics are extracted from the convergence maps for
115 a number of scales (multiscale approach). To do so the maps are
116 smoothed with Gaussian smoothing kernels of different size.
117 The scales indicate the FWHM of these kernels.
119 - polarizations:
121 default: 'A'
123 choices: 'E', 'B', 'A'
125 If E, only E-mode statistics are returned when calc_summary_stats is
126 used.
127 If B, only B-mode statistics are returned when calc_summary_stats is
128 used.
129 If A, both E- and B-mode statistics are returned when calc_summary_stats
130 is used.
132 - prec:
134 default: 32
136 choices: 64, 32, 16
138 Size of the float values in the Healpix maps in bits. For less than 32
139 bits hp.UNSEEN is mapped to -inf, so the RAM optimization is lost.
140 """
142 def __init__(self, gamma_1=[], gamma_2=[], kappa_E=[], kappa_B=[],
143 mask=[], trimmed_mask=[], weights=[],
144 context_in={}, fast_mode=False, verbosity=3, **kwargs):
145 """
146 Initialization function for the map class.
147 For gammas, kappas, weights, masks and trimmed_masks one can also
148 pass multiple instances in a list.
150 :param gamma_1: Map for first shear component
151 :param gamma_2: Map for second shear component
152 :param kappa_E: Map for convergence E-modes
153 :param kappa_B: Map for convergence B-modes
154 :param mask: Optionally can provide a mask that is applied to the maps.
155 :param trimmed_mask: Optionally can provide a stricter mask that is
156 applied to the convergence maps if they are
157 generated from the shear maps.
158 :param weights: Can provide a map containing the pixel weights.
159 :param context_in: A dictionary containing parameters used by class.
160 :param verbosity: Verbosity level (0-4)
161 """
163 logger.set_logger_level(LOGGER, verbosity)
165 LOGGER.debug("Initializing map object")
167 # setup context
168 allowed = ['NSIDE', 'scales', 'polarizations', 'prec',
169 "alpha_rotations", "dec_rotations", "mirror"]
171 types = ['int', 'list', 'str', 'int', 'list', 'list', 'list']
173 defaults = [1024, [31.6, 29.0, 26.4, 23.7, 21.1, 18.5, 15.8, 13.2,
174 10.5, 7.9, 5.3, 2.6], 'A', 32,
175 [0], [0], [False]]
177 allowed, types, defaults = utils._get_plugin_contexts(
178 allowed, types, defaults)
179 self.ctx = context.setup_context(
180 {**context_in, **kwargs}, allowed, types, defaults,
181 verbosity=verbosity)
183 self.ctx['prec'] = 'float{}'.format(self.ctx['prec'])
185 if fast_mode:
186 LOGGER.warning(
187 "Fast mode enabled -> Setting lmax to 2 * "
188 "NSIDE instead of 3 * NSIDE - 1!")
189 self.lmax = 2 * self.ctx['NSIDE']
190 else:
191 self.lmax = 3 * self.ctx['NSIDE'] - 1
193 weights = list(weights)
195 mask = list(mask)
196 trimmed_mask = list(trimmed_mask)
198 gamma_1 = list(gamma_1)
199 gamma_2 = list(gamma_2)
201 kappa_E = list(kappa_E)
202 kappa_B = list(kappa_B)
204 self.mask = {}
205 for ii, m in enumerate(mask):
206 try:
207 len(m)
208 self.set_mask(m[:], bin=ii, apply=False)
209 except TypeError:
210 # if object has no length -> not a list of lists but single
211 self.set_mask(mask[:], bin=0, apply=False)
212 break
213 LOGGER.debug(f"Received {len(self.mask.keys())} masks")
215 self.trimmed_mask = {}
216 for ii, m in enumerate(trimmed_mask):
217 try:
218 len(m)
219 self.set_trimmed_mask(m[:], bin=ii, apply=False)
220 except TypeError:
221 self.set_trimmed_mask(trimmed_mask[:], bin=0, apply=False)
222 break
223 LOGGER.debug(f"Received {len(self.trimmed_mask.keys())} trimmed masks")
225 self.weights = {}
226 for ii, m in enumerate(weights):
227 try:
228 len(m)
229 self.set_weights(m[:], bin=ii)
230 except TypeError:
231 self.set_weights(weights[:], bin=0)
232 break
233 LOGGER.debug(f"Received {len(self.weights.keys())} weight maps")
235 self.gamma_1 = {}
236 self.gamma_2 = {}
237 for ii, m in enumerate(gamma_1):
238 try:
239 len(m)
240 self.set_shear_maps(m[:], gamma_2[ii][:], bin=ii)
241 except TypeError:
242 self.set_shear_maps(gamma_1[:], gamma_2[:], bin=0)
243 break
244 LOGGER.debug(f"Received {len(self.gamma_1.keys())} shear maps")
246 self.kappa_E = {}
247 for ii, m in enumerate(kappa_E):
248 try:
249 len(m)
250 self.set_convergence_maps(m[:], None, bin=ii)
251 except TypeError:
252 self.set_convergence_maps(kappa_E[:], None, bin=0)
253 break
254 LOGGER.debug(
255 f"Received {len(self.kappa_E.keys())} E mode convergence maps")
257 self.kappa_B = {}
258 for ii, m in enumerate(kappa_B):
259 try:
260 len(m)
261 self.set_convergence_maps(None, m[:], bin=ii)
262 except TypeError:
263 self.set_convergence_maps(None, kappa_B[:], bin=0)
264 break
265 LOGGER.debug(
266 f"Received {len(self.kappa_B.keys())} B mode convergence maps")
268 # ACCESSING OBJECTS
270 def get_shear_maps(self, trimming=False, bin=0, use_B_modes=False):
271 """
272 Returns the shear maps. If they are not set tries to
273 calculate from internal convergence maps.
275 :param trimming: If True apply trimmed mask instead of normal mask to
276 get rid of pixels close to the edges of the mask.
277 If False just the normal mask is applied.
278 :param bin: Index of the map instance (starting at 0)
279 :param use_B_modes: If True uses B mode convergence maps
280 instead of E modes.
281 """
282 if len(self.gamma_1.keys()) == 0:
283 LOGGER.warning(
284 "No shear maps set. Attempting to calculate them from "
285 "the convergence maps.")
286 gamma_1, gamma_2 = self._convert_kappa_to_gamma(
287 bin=bin, trimming=trimming, use_B_modes=use_B_modes)
288 elif (len(self.gamma_1[bin]) == 0) | (len(self.gamma_2[bin]) == 0):
289 LOGGER.warning(
290 "No shear maps set. Attempting to calculate them from "
291 "the convergence maps.")
292 gamma_1, gamma_2 = self._convert_kappa_to_gamma(
293 bin=bin, trimming=trimming, use_B_modes=use_B_modes)
294 else:
295 gamma_1 = self.gamma_1[bin]
296 gamma_2 = self.gamma_2[bin]
297 return (gamma_1[:], gamma_2[:])
299 def get_weights(self, bin=0):
300 """
301 Returns the weight maps
303 :param bin: Index of the map instance (starting at 0)
304 """
305 if bin not in self.weights.keys():
306 raise IndexError(f"weights object instance {bin} not set.")
307 return self.weights[bin][:]
309 def get_mask(self, bin=0):
310 """
311 Returns the mask
313 :param bin: Index of the map instance (starting at 0)
314 """
315 if bin not in self.mask.keys():
316 raise IndexError(f"mask object instance {bin} not set.")
317 return self.mask[bin][:]
319 def get_trimmed_mask(self, bin=0):
320 """
321 Returns the trimmed mask
323 :param bin: Index of the map instance (starting at 0)
324 """
325 if bin not in self.trimmed_mask.keys():
326 raise IndexError(f"trimmed mask object instance {bin} not set.")
327 return self.trimmed_mask[bin][:]
329 def get_convergence_maps(self, trimming=False, bin=0):
330 """
331 Returns the convergence maps (E and B mode maps).
332 If not set tries to calculate from the internal shear maps.
334 :param trimming: If True apply trimmed mask instead of normal mask to
335 get rid of pixels close to mask edge.
336 If False just the normal mask is applied.
337 :param bin: Index of the map instance (starting at 0)
338 """
339 if len(self.kappa_E.keys()) == 0:
340 LOGGER.warning(
341 "Convergence maps not set. Calculating from "
342 "internal shear maps.")
343 kappa_E, kappa_B = self._convert_gamma_to_kappa(
344 bin=bin, trimming=trimming)
345 elif (len(self.kappa_E[bin]) == 0) & (len(self.kappa_B[bin]) == 0):
346 LOGGER.warning(
347 "Convergence maps not set. Calculating from "
348 "internal shear maps.")
349 kappa_E, kappa_B = self._convert_gamma_to_kappa(
350 bin=bin, trimming=trimming)
351 elif len(self.kappa_E[bin]) == 0:
352 LOGGER.warning("Only B mode convergence map found.")
353 kappa_B = self.kappa_B[bin]
354 return kappa_B
355 elif len(self.kappa_B[bin]) == 0:
356 kappa_E = self.kappa_E[bin]
357 return kappa_E
358 else:
359 kappa_E = self.kappa_E[bin]
360 kappa_B = self.kappa_B[bin]
361 return (kappa_E[:], kappa_B[:])
363 # SETTING OBJECTS
365 def set_shear_maps(self, shear_1=[], shear_2=[], bin=0):
366 """
367 Sets the shear maps
369 :param shear_1: First shear map component
370 :param shear_2: Second shear map component
371 :param bin: Index of the map instance (starting at 0)
372 """
373 shear_1 = np.asarray(shear_1, dtype=self.ctx['prec'])
374 shear_2 = np.asarray(shear_2, dtype=self.ctx['prec'])
376 self.gamma_1[bin] = self._apply_mask(
377 shear_1, bin=bin, obj_name='shear 1, instance {}'.format(bin))
378 self.gamma_2[bin] = self._apply_mask(
379 shear_2, bin=bin, obj_name='shear 2, instance {}'.format(bin))
380 LOGGER.debug("Set shear maps for instance {}".format(bin))
382 def set_convergence_maps(self, kappa_E=[], kappa_B=[], bin=0):
383 """
384 Sets the convergence E and B maps
386 :param kappa_E: E mode convergence map
387 :param kappa_B: B mode convergence map
388 :param bin: Index of the map instance (starting at 0)
389 """
390 if kappa_E is not None:
391 kappa_E = np.asarray(kappa_E, dtype=self.ctx['prec'])
392 self.kappa_E[bin] = self._apply_mask(
393 kappa_E,
394 obj_name='convergence E-modes, instance {}'.format(bin),
395 bin=bin)
396 if kappa_B is not None:
397 kappa_B = np.asarray(kappa_B, dtype=self.ctx['prec'])
398 self.kappa_B[bin] = self._apply_mask(
399 kappa_B,
400 obj_name='convergence B-modes, instance {}'.format(bin),
401 bin=bin)
402 LOGGER.debug("Set convergence maps for instance {}".format(bin))
404 def set_weights(self, weights=[], bin=0):
405 """
406 Sets the weight map
408 :param weights: The weight map
409 :param bin: Index of the map instance (starting at 0)
410 """
411 weights = np.asarray(weights, dtype=self.ctx['prec'])
412 self.weights[bin] = self._apply_mask(
413 weights, obj_name='weights, instance {}'.format(bin), bin=bin)
414 LOGGER.debug("Set weights for instance{}".format(bin))
416 def set_mask(self, mask=[], bin=0, apply=True):
417 """
418 Sets the mask
420 :param mask: The mask to set
421 :param bin: Index of the map instance (starting at 0)
422 :param apply: If True directly applies the set mask to the weights,
423 convergence maps and shear maps
424 """
425 mask = np.asarray(mask, dtype=bool)
427 self.mask[bin] = mask
428 LOGGER.debug("Set mask for instance {}".format(bin))
429 if apply:
430 LOGGER.info(
431 f"Applying internal mask for instance {bin} "
432 f"to all maps of instance {bin}")
433 self.weights[bin] = self._apply_mask(
434 self.weights[bin], bin=bin,
435 obj_name='weights, instance {}'.format(bin))
436 self.gamma_1[bin] = self._apply_mask(
437 self.gamma_1[bin], bin=bin,
438 obj_name='shear 1, instance {}'.format(bin))
439 self.gamma_2[bin] = self._apply_mask(
440 self.gamma_2[bin], bin=bin,
441 obj_name='shear 2, instance {}'.format(bin))
442 self.kappa_E[bin] = self._apply_mask(
443 self.kappa_E[bin], bin=bin,
444 obj_name='convergence E, instance {}'.format(bin))
445 self.kappa_B[bin] = self._apply_mask(
446 self.kappa_B[bin], bin=bin,
447 obj_name='convergence B, instance {}'.format(bin))
449 def set_trimmed_mask(self, trimmed_mask=[], bin=0, apply=False):
450 """
451 Sets the trimmed mask
453 :param trimmed_mask: The trimmed mask to set
454 :param bin: Index of the map instance (starting at 0)
455 :param apply: If True directly applies the set mask to the weights,
456 convergence maps and shear maps
457 """
458 mask = np.asarray(trimmed_mask, dtype=bool)
460 self.trimmed_mask[bin] = mask
461 LOGGER.debug("Set trimmed mask for instance {}".format(bin))
462 if apply:
463 LOGGER.info(
464 f"Applying internal trimmed mask for instance {bin} "
465 f"to all maps of instance {bin}")
466 self.weights[bin] = self._apply_mask(
467 self.weights[bin], bin=bin,
468 obj_name='weights, instance {}'.format(bin),
469 trimming=True)
470 self.gamma_1[bin] = self._apply_mask(
471 self.gamma_1[bin], bin=bin,
472 obj_name='shear 1, instance {}'.format(bin),
473 trimming=True)
474 self.gamma_2[bin] = self._apply_mask(
475 self.gamma_2[bin], bin=bin,
476 obj_name='shear 2, instance {}'.format(bin),
477 trimming=True)
478 self.kappa_E[bin] = self._apply_mask(
479 self.kappa_E[bin], bin=bin,
480 obj_name='convergence E, instance {}'.format(bin),
481 trimming=True)
482 self.kappa_B[bin] = self._apply_mask(
483 self.kappa_B[bin], bin=bin,
484 obj_name='convergence B, instance {}'.format(bin),
485 trimming=True)
487 # METHODS
488 def convert_shear_to_convergence(self, bin=0, trimming=True):
489 """
490 Calculates convergence maps from the shear maps and sets them
491 (retrieve with get_convergence_maps).
493 :param bin: Index of the map instance (starting at 0)
494 :param trimming: If True apply trimmed mask instead of normal mask to
495 get rid of pixels close to mask edge.
496 """
497 kappa_E, kappa_B = self._convert_gamma_to_kappa(
498 bin=bin, trimming=trimming)
499 self.kappa_E[bin] = kappa_E
500 self.kappa_B[bin] = kappa_B
502 def convert_convergence_to_shear(self, bin=0, trimming=True,
503 use_B_modes=False):
504 """
505 Calculates shear maps from the convergence maps and sets them
506 (retrieve with get_shear_maps).
508 :param bin: Index of the map instance (starting at 0)
509 :param trimming: If True apply trimmed mask instead of normal mask to
510 get rid of pixels close to mask edge.
511 :param use_B_modes: If True uses B mode convergence maps
512 instead of E modes.
513 """
514 gamma_1, gamma_2 = self._convert_kappa_to_gamma(
515 bin=bin, trimming=trimming, use_B_modes=use_B_modes)
517 self.gamma_1[bin] = gamma_1
518 self.gamma_2[bin] = gamma_2
520 def smooth_maps(self, fwhm=0.0, bin=0):
521 """
522 Smooth the convergence maps with a Gaussian kernel
524 :param fwhm: The FWHM of the smoothing kernel in arcmins
525 :param bin: Index of the map instance (starting at 0)
526 """
537 if bin not in self.kappa_E.keys():
538 LOGGER.warning(
539 f"No E mode convergence map for instance {bin} "
540 "found. Not smoothing.")
541 else:
542 kappa_E = self.kappa_E[bin]
543 if fwhm > 0.0:
544 kappa_E = np.asarray(
545 hp.sphtfunc.smoothing(
546 kappa_E,
547 fwhm=np.radians(float(fwhm) / 60.), lmax=self.lmax),
548 dtype=self.ctx['prec'])
549 kappa_E = self._apply_mask(
550 kappa_E, bin=bin, obj_name='convergence E-modes')
551 self.kappa_E[bin] = kappa_E
552 LOGGER.info(
553 f"Smoothed E mode convergence maps "
554 f"for instance {bin} with a Gaussian "
555 f"smoothing kernel with a FWHM of {fwhm} arcmin")
557 if bin not in self.kappa_B.keys():
558 LOGGER.warning(
559 f"No B mode convergence map for instance {bin} "
560 "found. Not smoothing.")
561 else:
562 kappa_B = self.kappa_B[bin]
563 if fwhm > 0.0:
564 kappa_B = np.asarray(
565 hp.sphtfunc.smoothing(
566 kappa_B,
567 fwhm=np.radians(float(fwhm) / 60.), lmax=self.lmax),
568 dtype=self.ctx['prec'])
569 kappa_B = self._apply_mask(
570 kappa_B, bin=bin, obj_name='convergence B-modes')
571 self.kappa_B[bin] = kappa_B
572 LOGGER.info(
573 f"Smoothed B mode convergence maps "
574 f"for instance {bin} with a Gaussian "
575 f"smoothing kernel with a FWHM of {fwhm} arcmin")
577 def calc_summary_stats(self, statistics, extra_mask={},
578 extra_trimmed_mask={}, scales=[],
579 use_shear_maps=False, use_kappa_maps=False,
580 output_dir='.',
581 defined_parameters={},
582 undefined_parameters=[],
583 name='', copy_obj=True, write_to_file=False,
584 method='KS', cross_bins=[0, 1], bin=0,
585 trimming=True):
586 """
587 Calculates the summary statistics from the maps.
588 Can provide an extra mask or trimmed mask which will be used instead
589 of the internal masks, allowing multiple cutouts to be stored on a
590 single map (can save memory).
592 :param statistics: Statistics to calculate.
593 :param extra_mask: Can provide an additional mask, otherwise the
594 internal mask is used
595 :param extra_trimmed_mask: Can provide an additional trimmed mask
596 :param scales: Smoothing scales to apply for multiscale analysis for
597 Non-Gaussian statistics (multi type) (FWHM in arcmins).
598 If not given uses internal scales from context
599 :param use_shear_maps: If set to True will calculate convergence maps
600 from shear maps for convergence map based
601 statistics. Otherwise uses
602 the convergence maps directly.
603 :param use_kappa_maps: If set to True will calculate shear maps
604 from kappa maps for shear map based
605 statistics. Otherwise uses
606 the shear maps directly.
607 :param output_dir: If write_to_file is True used to build the output
608 path. See ekit docs.
609 :param defined_parameters: If write_to_file is True used to build the
610 output path. See ekit docs
611 :param undefined_parameters: If write_to_file is True used to build
612 the output path. See ekit docs
613 :param name: If write_to_file is True used to build the output
614 path. See ekit docs
615 :param copy_obj: If True preserves the map objects, otherwise
616 overwrites them with masked maps in the extraction
617 process
618 :param write_to_file: If True appends to files directly instead of
619 returning the results as a dictionary
620 :param method: Mass mapping method.
621 Currently only Kaiser-Squires supported.
622 :param cross_bins: For cross-statistics can indicate which map
623 instances to use.
624 :param bin: For single bin statistics can indicate which map
625 instance to use.
626 :param trimming: If True uses trimmed mask for convergence statistics.
627 """
629 if len(statistics) == 0:
630 LOGGER.warning("Nothing to compute")
631 return
633 if copy_obj:
634 kappa_E_save = copy.deepcopy(self.kappa_E)
635 kappa_B_save = copy.deepcopy(self.kappa_B)
636 gamma_1_save = copy.deepcopy(self.gamma_1)
637 gamma_2_save = copy.deepcopy(self.gamma_2)
638 weights_save = copy.deepcopy(self.weights)
639 else:
640 LOGGER.warning(
641 "Using internal map objects directly and potentially "
642 "overwriting them! Do not use this mode if you wish to "
643 "further use the map objects of this map instance. "
644 "Also make sure that your statistic plugins do not overwrite "
645 "maps.")
647 if len(scales) == 0:
648 LOGGER.debug("Using internal scales")
649 scales = self.ctx['scales']
651 if 'tomo' in defined_parameters.keys():
652 self.ctx['tomo'] = defined_parameters['tomo']
653 else:
654 self.ctx['tomo'] = '0x0'
656 stats = self._get_pol_stats(statistics)
658 # divide statistics into classes
659 statistic_divided = self._divide_statistics(stats)
660 check_shear_maps = False
661 check_kappa_maps = False
662 for stat_set in statistic_divided['E'].items():
663 LOGGER.debug(
664 f"Got statistics {stat_set[1]} "
665 f"for statistic type {stat_set[0]}")
666 if (stat_set[0] == 'shear'):
667 if (len(stat_set[1]) > 0):
668 if use_kappa_maps:
669 check_kappa_maps = True
670 else:
671 check_shear_maps = True
672 else:
673 if (len(stat_set[1]) > 0):
674 if use_shear_maps:
675 check_shear_maps = True
676 else:
677 check_kappa_maps = True
679 # import statistics plugins
680 plugins = {}
681 for statistic in stats['E']:
682 LOGGER.debug(f"Importing statistics plugin {statistic}")
683 plugins[statistic] = utils.import_executable(
684 statistic, statistic)
686 # set polarizations
687 polarizations = []
688 if (self.ctx['polarizations'] == 'A') | \
689 (self.ctx['polarizations'] == 'E'):
690 polarizations.append('E')
691 if (self.ctx['polarizations'] == 'A') | \
692 (self.ctx['polarizations'] == 'B'):
693 polarizations.append('B')
695 # decide how many maps need to be run
696 full = False
697 cross = False
698 for stat in statistics:
699 if 'Cross' in stat:
700 cross = True
701 if 'Full' in stat:
702 full = True
703 if cross | full:
704 if full:
705 if check_shear_maps:
706 map_keys_shear = np.asarray(
707 list(self.gamma_1.keys()), dtype=int)
708 else:
709 map_keys_shear = np.zeros(0, dtype=int)
710 if check_kappa_maps:
711 map_keys_kappa = np.asarray(
712 list(self.kappa_E.keys()), dtype=int)
713 else:
714 map_keys_kappa = np.zeros(0, dtype=int)
715 map_keys = np.append(
716 map_keys_shear, map_keys_kappa)
717 map_keys = np.unique(map_keys)
718 else:
719 map_keys = np.asarray(cross_bins, dtype=int)
720 else:
721 map_keys = np.asarray([bin], dtype=int)
723 # check if required maps are available
724 if check_shear_maps:
725 for key in map_keys:
726 if key not in self.gamma_1.keys():
727 raise Exception(
728 f"Shear maps for instance {key} not available. "
729 "Cannot calculate statistics.")
730 if key not in self.gamma_2.keys():
731 raise Exception(
732 f"Shear maps for instance {key} not available. "
733 "Cannot calculate statistics.")
734 if check_kappa_maps:
735 for key in map_keys:
736 if 'E' in polarizations:
737 if key not in self.kappa_E.keys():
738 raise Exception(
739 f"Convergence E mode map for instance {key} "
740 "not available."
741 "Pass them on initialization or "
742 "set use_shear_maps=True.")
743 if 'B' in polarizations:
744 if key not in self.kappa_B.keys():
745 raise Exception(
746 f"Convergence B mode map for instance {key} "
747 "not available."
748 "Pass them on initialization or "
749 "set use_shear_maps=True.")
751 mask = {}
752 if not isinstance(extra_mask, dict):
753 # if array was passed
754 mask[0] = extra_mask
755 else:
756 for key in extra_mask.keys():
757 mask[key] = extra_mask[key]
758 for key in map_keys:
759 if key not in mask.keys():
760 mask[key] = []
762 trimmed_mask = {}
763 if not isinstance(extra_trimmed_mask, dict):
764 # if array was passed
765 trimmed_mask[0] = extra_trimmed_mask
766 else:
767 for key in extra_trimmed_mask.keys():
768 trimmed_mask[key] = extra_trimmed_mask[key]
769 for key in map_keys:
770 if key not in trimmed_mask.keys():
771 trimmed_mask[key] = []
773 # first masking
774 if check_shear_maps:
775 for key in map_keys:
776 self.gamma_1[key] = self._apply_mask(
777 self.gamma_1[key], mask=mask[key], bin=key,
778 obj_name=f'shear 1 instance {key}')
779 self.gamma_2[key] = self._apply_mask(
780 self.gamma_2[key], mask=mask[key], bin=key,
781 obj_name=f'shear 2, instance {key}')
782 if check_kappa_maps:
783 for key in map_keys:
784 if 'E' in polarizations:
785 self.kappa_E[key] = self._apply_mask(
786 self.kappa_E[key], mask=mask[key],
787 obj_name=f'convergence E modes instance {key}',
788 trimming=False)
789 if 'B' in polarizations:
790 self.kappa_B[key] = self._apply_mask(
791 self.kappa_B[key], mask=mask[key],
792 obj_name=f'convergence B modes instance {key}',
793 trimming=False)
795 # masking weights
796 for key in map_keys:
797 if key not in self.weights.keys():
798 # if weight map does not exist create equally weighted map
799 LOGGER.warning(
800 f"Weights not set for instance {key}. "
801 "Using equal weighting.")
802 if check_shear_maps:
803 weights = np.zeros_like(self.gamma_1[key], dtype=bool)
804 weights[self.gamma_1[key] > hp.UNSEEN] = True
805 else:
806 if 'E' in polarizations:
807 weights = np.zeros_like(self.kappa_E[key], dtype=bool)
808 weights[self.kappa_E[key] > hp.UNSEEN] = True
809 else:
810 weights = np.zeros_like(self.kappa_B[key], dtype=bool)
811 weights[self.kappa_B[key] > hp.UNSEEN] = True
812 self.weights[key] = weights
813 self.weights[key] = self._apply_mask(
814 self.weights[key], mask=mask[key],
815 obj_name=f'weights, instance {key}',
816 trimming=False)
818 # collector array for outputs
819 outs = {}
821 ##################
822 # shear statistics
823 ##################
824 if len(statistic_divided['E']['shear']) > 0:
825 LOGGER.debug("Calculating shear statistics")
826 outs = self._calc_shear_stats(statistic_divided['E']['shear'],
827 plugins, outs, polarizations,
828 mask, trimmed_mask,
829 write_to_file, name,
830 output_dir, defined_parameters,
831 undefined_parameters, map_keys,
832 use_shear_maps, cross_bins, bin,
833 use_kappa_maps, trimming)
835 kappa_stats = []
836 for pol in polarizations:
837 kappa_stats += statistic_divided[pol]['convergence']
838 kappa_stats += statistic_divided[pol]['convergence-cross']
839 kappa_stats += statistic_divided[pol]['multi']
840 full_mode = False
841 cross_mode = False
842 for stat in kappa_stats:
843 if 'Cross' in stat:
844 cross_mode = True
845 if 'Full' in stat:
846 full_mode = True
848 if len(kappa_stats) > 0:
849 LOGGER.debug("Preparing alms")
850 alm = self._prep_alms(trimmed_mask, polarizations,
851 use_shear_maps, method=method,
852 map_keys=map_keys,
853 cross_bins=cross_bins, bin=bin,
854 full_mode=full_mode, cross_mode=cross_mode,
855 trimming=trimming)
856 for pol in polarizations:
857 ########################
858 # convergence statistics
859 ########################
860 if (len(statistic_divided[pol]['convergence']) > 0):
861 LOGGER.debug(
862 "Calculating convergence statistics for polarization "
863 "{}".format(pol))
864 outs = self._calc_convergence_stats(
865 outs, plugins, alm[pol],
866 trimmed_mask, mask,
867 defined_parameters,
868 undefined_parameters, output_dir,
869 statistic_divided[pol]['convergence'],
870 pol, write_to_file, name, map_keys, cross_bins, bin,
871 trimming)
873 if (len(statistic_divided[pol]['convergence-cross']) > 0):
874 LOGGER.debug(
875 "Calculating cross-convergence statistics for "
876 "polarization {}".format(pol))
877 outs = self._calc_cross_convergence_stats(
878 outs, plugins, alm[pol],
879 trimmed_mask, mask,
880 defined_parameters,
881 undefined_parameters, output_dir,
882 statistic_divided[pol]['convergence-cross'],
883 pol, write_to_file, name, cross_bins, trimming)
885 ###################################
886 # multiscale convergence statistics
887 ###################################
888 if len(statistic_divided[pol]['multi']) > 0:
889 LOGGER.debug(
890 "Calculating multi statistics for polarization "
891 "{}".format(pol))
892 outs = self._calc_multi_stats(
893 outs, scales, plugins, alm[pol],
894 trimmed_mask, mask,
895 defined_parameters,
896 undefined_parameters, output_dir,
897 statistic_divided[pol]['multi'],
898 pol, write_to_file, name, map_keys,
899 statistics, cross_bins, bin, trimming)
901 # restore
902 if copy_obj:
903 self.kappa_E = kappa_E_save
904 self.kappa_B = kappa_B_save
905 self.gamma_1 = gamma_1_save
906 self.gamma_2 = gamma_2_save
907 self.weights = weights_save
908 return outs
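# Sketch of the cross-statistic convention documented above: a statistic whose
# name contains 'Cross' receives the two map instances selected via cross_bins.
# 'CrossPeaks' is a hypothetical plugin name used only for illustration.
#
#   m = emap.map(kappa_E=[kappa_bin0, kappa_bin1], NSIDE=nside,
#                polarizations='E')
#   out = m.calc_summary_stats(['CrossPeaks'], cross_bins=[0, 1])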
910 ##################################
911 # HELPER FUNCTIONS
912 ##################################
914 def _stack_stats(self, stat_out, pol, statistic, all_statistics, outs,
915 store_to_files=False, name='', output_dir='',
916 defined_parameters={}, undefined_parameters=[]):
917 """
918 Stack the extracted summary statistics for the output
919 """
920 # Init polarization dict if non-existent
921 if pol not in outs.keys():
922 outs[pol] = {}
924 if statistic == 'Extremes':
925 if ('Peaks' in all_statistics) | ('2PCF' in all_statistics):
926 out = stat_out[0:2]
927 stat = 'Peaks'
928 outs = self._stack_stats(out, pol, stat, all_statistics, outs,
929 store_to_files, name, output_dir,
930 defined_parameters,
931 undefined_parameters)
932 if ('Voids' in all_statistics) | ('2VCF' in all_statistics):
933 out = stat_out[2:4]
934 stat = 'Voids'
935 outs = self._stack_stats(out, pol, stat, all_statistics, outs,
936 store_to_files, name, output_dir,
937 defined_parameters,
938 undefined_parameters)
939 elif (statistic == 'Peaks') & (len(stat_out) == 2):
940 if 'Peaks' in all_statistics:
941 out = stat_out[0]
942 stat = 'Peaks'
943 outs = self._stack_stats(out, pol, stat, all_statistics, outs,
944 store_to_files, name, output_dir,
945 defined_parameters,
946 undefined_parameters)
947 if '2PCF' in all_statistics:
948 out = stat_out[1]
949 stat = '2PCF'
950 outs = self._stack_stats(out, pol, stat, all_statistics, outs,
951 store_to_files, name, output_dir,
952 defined_parameters,
953 undefined_parameters)
954 elif (statistic == 'Voids') & (len(stat_out) == 2):
955 if 'Voids' in all_statistics:
956 out = stat_out[0]
957 stat = 'Voids'
958 outs = self._stack_stats(out, pol, stat, all_statistics, outs,
959 store_to_files, name, output_dir,
960 defined_parameters,
961 undefined_parameters)
962 if '2VCF' in all_statistics:
963 out = stat_out[1]
964 stat = '2VCF'
965 outs = self._stack_stats(out, pol, stat, all_statistics, outs,
966 store_to_files, name, output_dir,
967 defined_parameters,
968 undefined_parameters)
969 else:
970 if store_to_files:
971 # adding a separation value
972 if (statistic == '2PCF') | (statistic == '2VCF'):
973 stat_out = np.vstack(([[-999.0, -999.0]], stat_out))
974 # saving
975 output_path = paths.create_path(
976 name,
977 output_dir, {
978 **defined_parameters,
979 **{'mode': pol,
980 'stat': statistic}},
981 undefined_parameters,
982 suffix='.npy')
983 if os.path.exists(output_path):
984 out_file = np.load(output_path)
985 out_file = np.vstack(
986 (out_file, stat_out))
987 else:
988 out_file = stat_out
989 np.save(output_path, out_file)
990 else:
991 # Init statistics dict if non-existent
992 if statistic not in outs[pol].keys():
993 outs[pol][statistic] = stat_out
994 else:
995 # adding a separation value
996 if (statistic == '2PCF') | (statistic == '2VCF'):
997 stat_out = np.vstack(([[-999.0, -999.0]], stat_out))
999 outs[pol][statistic] = np.vstack(
1000 (outs[pol][statistic], stat_out))
1001 return outs
1003 def _convert_alm_to_kappa(self, alm, fwhm):
1004 """
1005 Converts spherical harmonic coefficients to a convergence map.
1006 Can also apply smoothing.
1007 """
1008 # smoothing alms with gaussian kernel
1009 if fwhm > 0.0:
1010 alm_ = hp.sphtfunc.smoothalm(
1011 alm, fwhm=np.radians(float(fwhm) / 60.), inplace=False)
1012 else:
1013 alm_ = copy.copy(alm)
1014 kappa = np.asarray(
1015 hp.alm2map(alm_, nside=self.ctx['NSIDE'], lmax=self.lmax),
1016 dtype=self.ctx['prec'])
1017 return kappa
1019 def _convert_gamma_to_kappa(self, bin=0, trimming=False, method='KS'):
1020 """
1021 Converts two shear maps to convergence maps (E and B modes).
1022 """
1023 # Do not touch
1024 sign_flip = True
1026 if bin not in self.gamma_1.keys():
1027 raise IndexError(f"shear 1 object instance {bin} not set.")
1028 gamma_1 = self.gamma_1[bin]
1029 if bin not in self.gamma_2.keys():
1030 raise IndexError(f"shear 2 object instance {bin} not set.")
1031 gamma_2 = self.gamma_2[bin]
1033 # spherical harmonics decomposition
1034 if sign_flip & (len(gamma_1) > 0):
1035 LOGGER.debug("Applying sign flip")
1036 gamma_1[gamma_1 > hp.UNSEEN] *= -1.
1037 alm_E, alm_B = utils._calc_alms(
1038 gamma_1, gamma_2, method=method, lmax=self.lmax)
1040 if sign_flip & (len(gamma_1) > 0):
1041 gamma_1[gamma_1 > hp.UNSEEN] *= -1.
1043 kappa_E = np.asarray(self._convert_alm_to_kappa(alm_E, fwhm=0.0),
1044 dtype=self.ctx['prec'])
1045 kappa_B = np.asarray(self._convert_alm_to_kappa(alm_B, fwhm=0.0),
1046 dtype=self.ctx['prec'])
1048 kappa_E = self._apply_mask(
1049 kappa_E, bin=bin, trimming=trimming,
1050 obj_name=f'convergence E-modes, instance {bin}')
1051 kappa_B = self._apply_mask(
1052 kappa_B, bin=bin, trimming=trimming,
1053 obj_name=f'convergence B-modes, instance {bin}')
1055 return kappa_E, kappa_B
1057 def _convert_kappa_to_gamma(self, bin=0, trimming=False,
1058 use_B_modes=False):
1059 """
1060 Converts a kappa map to two shear maps using Kaiser-Squires
1061 """
1063 # Do not touch
1064 sign_flip = True
1066 if use_B_modes:
1067 if bin not in self.kappa_B.keys():
1068 raise IndexError(
1069 f"Convergence B mode object instance {bin} not set.")
1070 kappa = self.kappa_B[bin]
1071 else:
1072 if bin not in self.kappa_E.keys():
1073 raise IndexError(
1074 f"Convergence E mode object instance {bin} not set.")
1075 kappa = self.kappa_E[bin]
1077 # spherical harmonics decomposition
1078 kappa_alm = hp.map2alm(kappa, lmax=self.lmax)
1079 ell = hp.Alm.getlm(self.lmax)[0]
1080 ell[ell == 0] = 1
1082 # Add the appropriate factor to the kappa_alm
1083 fac = np.where(
1084 np.logical_and(ell != 1, ell != 0),
1085 - np.sqrt(((ell + 2.0) * (ell - 1)) / ((ell + 1) * ell)), 0)
1086 kappa_alm *= fac
1088 # Spin spherical harmonics
1089 # Q and U are the real and imaginary parts of the shear map
1090 T, Q, U = hp.alm2map([np.zeros_like(kappa_alm), kappa_alm,
1091 np.zeros_like(kappa_alm)],
1092 nside=self.ctx['NSIDE'], lmax=self.lmax)
1093 Q = np.asarray(Q, dtype=self.ctx['prec'])
1094 U = np.asarray(U, dtype=self.ctx['prec'])
1096 # - sign accounts for the Healpix sign flip
1097 if sign_flip:
1098 Q = -1. * Q
1100 gamma_1 = self._apply_mask(
1101 Q, trimming=trimming, bin=bin, obj_name=f'shear 1, instance {bin}')
1102 gamma_2 = self._apply_mask(
1103 U, trimming=trimming, bin=bin, obj_name=f'shear 2, instance {bin}')
1105 return (gamma_1, gamma_2)
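# Note on the relation implemented above: for every multipole with l >= 2 the
# E-mode shear coefficients follow from the convergence coefficients as
#   gamma^E_lm = -sqrt(((l + 2) * (l - 1)) / ((l + 1) * l)) * kappa_lm,
# and alm2map with (0, gamma^E_lm, 0) returns the spin-2 (Q, U) maps that become
# (gamma_1, gamma_2); the additional minus sign on Q compensates the Healpix
# polarization sign convention.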
1107 def _apply_mask(self, obj, bin=0, trimming=False, mask=[],
1108 obj_name=''):
1109 """
1110 Apply masks to maps
1111 """
1113 if len(obj) == 0:
1114 LOGGER.debug(
1115 "Cannot apply mask to {} map since "
1116 "{} object not set. Ignoring...".format(obj_name, obj_name))
1117 return obj
1119 if len(mask) > 0:
1120 if len(mask) != len(obj):
1121 raise Exception(
1122 "The mask and the object {} that you are trying to mask "
1123 "do not have the same size!".format(obj_name))
1124 if 'weight' in obj_name:
1125 obj[np.logical_not(mask)] = 0.0
1126 else:
1127 obj[np.logical_not(mask)] \
1128 = hp.pixelfunc.UNSEEN
1129 LOGGER.debug("Applied mask to object {}".format(obj_name))
1130 else:
1131 LOGGER.debug("Using internal masks for masking")
1132 try:
1133 if trimming:
1134 if 'weight' in obj_name:
1135 obj[np.logical_not(self.trimmed_mask[bin])] = 0.0
1136 else:
1137 obj[np.logical_not(self.trimmed_mask[bin])] \
1138 = hp.pixelfunc.UNSEEN
1139 else:
1140 if 'weight' in obj_name:
1141 obj[np.logical_not(self.mask[bin])] = 0.0
1142 else:
1143 obj[np.logical_not(self.mask[bin])] \
1144 = hp.pixelfunc.UNSEEN
1145 except KeyError:
1146 LOGGER.debug(
1147 "No mask found to apply to {} map. Ignoring...".format(
1148 obj_name))
1149 return obj
1151 def _calc_shear_stats(self, statistics, plugins, outs, pols,
1152 mask=[], trimmed_mask=[],
1153 write_to_file=False, name='',
1154 output_dir='', defined_parameters={},
1155 undefined_parameters=[], map_keys=[0],
1156 use_shear_maps=False, cross_bins=[0, 1], bin=0,
1157 use_kappa_maps=False, trimming=False):
1159 if len(statistics) == 0:
1160 return outs
1162 for stat in statistics:
1163 LOGGER.debug("Calculating shear statistic {}".format(stat))
1164 if 'Full' in stat:
1165 if use_kappa_maps:
1166 for key in map_keys:
1167 self.convert_convergence_to_shear(
1168 bin=key, trimming=trimming)
1169 stat_out = plugins[stat](
1170 self.gamma_1, self.gamma_2,
1171 self.weights, self.ctx)
1172 elif 'Cross' in stat:
1173 if use_kappa_maps:
1174 for key in cross_bins:
1175 self.convert_convergence_to_shear(
1176 bin=key, trimming=trimming)
1177 stat_out = plugins[stat](
1178 self.gamma_1[cross_bins[0]], self.gamma_2[cross_bins[0]],
1179 self.gamma_1[cross_bins[1]], self.gamma_2[cross_bins[1]],
1180 self.weights[cross_bins[0]], self.weights[cross_bins[1]],
1181 self.ctx)
1182 else:
1183 if use_kappa_maps:
1184 self.convert_convergence_to_shear(
1185 bin=bin, trimming=trimming)
1186 stat_out = plugins[stat](
1187 self.gamma_1[bin], self.gamma_2[bin],
1188 self.weights[bin], self.ctx)
1190 if 'E' in pols:
1191 outs = self._stack_stats(
1192 stat_out[0], 'E', stat, [], outs,
1193 write_to_file, name, output_dir,
1194 defined_parameters, undefined_parameters)
1195 if 'B' in pols:
1196 outs = self._stack_stats(
1197 stat_out[1], 'B', stat, [], outs,
1198 write_to_file, name, output_dir,
1199 defined_parameters, undefined_parameters)
1200 return outs
1202 def _get_pol_stats(self, statistics):
1203 stats_ = copy.copy(statistics)
1204 stats = {}
1205 stats['E'] = copy.copy(stats_)
1206 stats['B'] = copy.copy(stats_)
1207 return stats
1209 def _calc_convergence_stats(self, outs, plugins, alm,
1210 trimmed_mask, mask,
1211 defined_parameters,
1212 undefined_parameters, output_dir,
1213 stats, pol,
1214 write_to_file, name, map_keys,
1215 cross_bins=[0, 1], bin=0, trimming=False):
1216 if len(stats) == 0:
1217 return outs
1219 full_mode = False
1220 cross_mode = False
1221 for stat in stats:
1222 if 'Full' in stat:
1223 full_mode = True
1224 elif 'Cross' in stat:
1225 cross_mode = True
1226 if full_mode:
1227 keys = map_keys
1228 elif cross_mode:
1229 keys = cross_bins
1230 else:
1231 keys = [bin]
1233 # get unsmoothed kappa maps
1234 kappa_unsmoothed = {}
1235 for key in keys:
1236 if pol == 'E':
1237 if key in self.kappa_E:
1238 # if kappa map exists use directly
1239 kappa_unsmoothed[key] = self.kappa_E[key]
1240 else:
1241 # convert alm to kappa
1242 kappa_unsmoothed[key] = self._convert_alm_to_kappa(
1243 alm[key], 0.0)
1244 else:
1245 if key in self.kappa_B:
1246 # if kappa map exists use directly
1247 kappa_unsmoothed[key] = self.kappa_B[key]
1248 else:
1249 # convert alm to kappa
1250 kappa_unsmoothed[key] = self._convert_alm_to_kappa(
1251 alm[key], 0.0)
1253 # masking
1254 if trimming:
1255 for key in keys:
1256 kappa_unsmoothed[key] = self._apply_mask(
1257 kappa_unsmoothed[key], mask=trimmed_mask[key],
1258 trimming=True,
1259 obj_name='convergence {}-modes instance {}'.format(
1260 pol, key))
1261 else:
1262 for key in keys:
1263 kappa_unsmoothed[key] = self._apply_mask(
1264 kappa_unsmoothed[key], mask=mask[key],
1265 trimming=False,
1266 obj_name='convergence {}-modes instance {}'.format(
1267 pol, key))
1269 for stat in stats:
1270 LOGGER.debug("Calculating statistic {}".format(stat))
1271 if 'Full' in stat:
1272 stat_out = plugins[stat](
1273 kappa_unsmoothed,
1274 self.weights, self.ctx)
1275 elif 'Cross' in stat:
1276 stat_out = plugins[stat](
1277 kappa_unsmoothed[cross_bins[0]],
1278 kappa_unsmoothed[cross_bins[1]],
1279 self.weights[cross_bins[0]],
1280 self.weights[cross_bins[1]], self.ctx)
1281 else:
1282 stat_out = plugins[stat](
1283 kappa_unsmoothed[bin],
1284 self.weights[bin], self.ctx)
1285 outs = self._stack_stats(
1286 stat_out, pol, stat, [], outs,
1287 write_to_file, name, output_dir,
1288 defined_parameters, undefined_parameters)
1289 return outs
1291 def _calc_cross_convergence_stats(self, outs, plugins, alm,
1292 trimmed_mask, mask,
1293 defined_parameters,
1294 undefined_parameters, output_dir,
1295 stats, pol,
1296 write_to_file, name,
1297 cross_bins=[0, 1], trimming=False):
1298 if len(stats) == 0:
1299 return outs
1301 kappa_cross = self._convert_alm_to_kappa(
1302 np.sqrt(alm[cross_bins[0]])
1303 * np.sqrt(alm[cross_bins[1]]), 0.0)
1305 # masking
1306 if trimming:
1307 for key in cross_bins:
1308 kappa_cross = self._apply_mask(
1309 kappa_cross, mask=trimmed_mask[key], trimming=True,
1310 obj_name='convergence {}-modes'.format(pol))
1311 else:
1312 for key in cross_bins:
1313 kappa_cross = self._apply_mask(
1314 kappa_cross, mask=mask[key], trimming=False,
1315 obj_name='convergence {}-modes'.format(pol))
1317 # take average of the weights and mask again
1318 weights = (self.weights[cross_bins[0]]
1319 + self.weights[cross_bins[1]]) / 2.
1320 if trimming:
1321 for key in cross_bins:
1322 weights = self._apply_mask(
1323 weights, mask=trimmed_mask[key], obj_name='weights',
1324 trimming=True)
1325 else:
1326 for key in cross_bins:
1327 weights = self._apply_mask(
1328 weights, mask=mask[key], obj_name='weights',
1329 trimming=False)
1331 for stat in stats:
1332 LOGGER.debug("Calculating statistic {}".format(stat))
1333 if 'Cross' in stat:
1334 stat_out = plugins[stat](
1335 kappa_cross, weights, self.ctx)
1336 else:
1337 raise Exception(
1338 "If the stat_type is convergence-cross the "
1339 "statistic must also be a cross statistic!")
1340 outs = self._stack_stats(
1341 stat_out, pol, stat, [], outs,
1342 write_to_file, name, output_dir,
1343 defined_parameters, undefined_parameters)
1344 return outs
1346 def _divide_statistics(self, stats):
1347 stat_types = {'E': {}, 'B': {}}
1348 for key in ['shear', 'convergence', 'convergence-cross', 'multi']:
1349 stats_ = np.asarray(
1350 list(self.ctx['stat_types'].values())) == key
1351 stats_ = np.asarray(
1352 list(self.ctx['stat_types'].keys()))[stats_]
1353 stat_types['E'][key] = []
1354 stat_types['B'][key] = []
1355 for stat in stats_:
1356 if stat in stats['E']:
1357 stat_types['E'][key].append(stat)
1358 if stat in stats['B']:
1359 stat_types['B'][key].append(stat)
1360 return stat_types
1362 def _prep_alms(self, trimmed_mask, polarizations,
1363 use_shear_maps, method='KS', map_keys=[0],
1364 cross_bins=[0, 1], bin=0,
1365 full_mode=False, cross_mode=False, trimming=False):
1367 # Do not touch
1368 sign_flip = True
1369 alm = {'E': {}, 'B': {}}
1370 if full_mode:
1371 keys = map_keys
1372 elif cross_mode:
1373 keys = cross_bins
1374 else:
1375 keys = [bin]
1376 if use_shear_maps:
1377 # calculate spherical harmonics
1378 LOGGER.debug(
1379 "Calculating spherical harmonics "
1380 "decomposition of shear maps")
1381 for key in keys:
1382 if sign_flip:
1383 self.gamma_1[key][self.gamma_1[key] > hp.UNSEEN] *= -1.
1384 a = utils._calc_alms(
1385 self.gamma_1[key], self.gamma_2[key],
1386 mode='A', method=method,
1387 lmax=self.lmax)
1388 alm['E'][key] = a[0]
1389 alm['B'][key] = a[1]
1390 if sign_flip:
1391 self.gamma_1[key][self.gamma_1[key] > hp.UNSEEN] *= -1.
1392 else:
1393 if 'E' in polarizations:
1394 for key in keys:
1395 alm['E'][key] = hp.map2alm(
1396 self.kappa_E[key], lmax=self.lmax)
1397 if 'B' in polarizations:
1398 for key in keys:
1399 alm['B'][key] = hp.map2alm(
1400 self.kappa_B[key], lmax=self.lmax)
1402 # trim weights
1403 if trimming:
1404 for key in keys:
1405 self.weights[key] = self._apply_mask(
1406 self.weights[key], mask=trimmed_mask[key],
1407 obj_name='weights', trimming=True)
1409 return alm
1411 def _calc_multi_stats(self,
1412 outs, scales, plugins, alm,
1413 trimmed_mask, mask,
1414 defined_parameters,
1415 undefined_parameters, output_dir,
1416 stats, pol,
1417 write_to_file, name, map_keys, statistics,
1418 cross_bins=[0, 1], bin=0, trimming=False):
1420 if len(stats) == 0:
1421 return outs
1423 full_mode = False
1424 cross_mode = False
1425 norm_mode = False
1426 for stat in stats:
1427 if 'Full' in stat:
1428 full_mode = True
1429 elif 'Cross' in stat:
1430 cross_mode = True
1431 else:
1432 norm_mode = True
1434 for scale in scales:
1435 LOGGER.debug("Hitting on scale {}. Smoothing...".format(scale))
1436 self.ctx['scale'] = scale
1438 if full_mode:
1439 kappas = {}
1440 for key in map_keys:
1441 kappas[key] = self._convert_alm_to_kappa(
1442 alm[key], scale)
1443 if trimming:
1444 kappas[key] = self._apply_mask(
1445 kappas[key], mask=trimmed_mask[key],
1446 obj_name='kappa', trimming=True)
1447 else:
1448 kappas[key] = self._apply_mask(
1449 kappas[key], mask=mask[key], obj_name='kappa',
1450 trimming=False)
1451 for stat in stats:
1452 if 'Full' in stat:
1453 stat_out = plugins[stat](
1454 kappas, self.weights, self.ctx)
1455 outs = self._stack_stats(
1456 stat_out, pol, stat, statistics, outs,
1457 False, name, output_dir,
1458 defined_parameters, undefined_parameters)
1460 if cross_mode:
1461 alm_cross = np.sqrt(alm[cross_bins[0]]) \
1462 * np.sqrt(alm[cross_bins[1]])
1463 kappa = self._convert_alm_to_kappa(
1464 alm_cross, scale)
1465 for key in cross_bins:
1466 if trimming:
1467 kappa = self._apply_mask(
1468 kappa, mask=trimmed_mask[key], obj_name='kappa',
1469 trimming=True)
1470 else:
1471 kappa = self._apply_mask(
1472 kappa, mask=mask[key], obj_name='kappa',
1473 trimming=False)
1474 weights = (self.weights[cross_bins[0]]
1475 + self.weights[cross_bins[1]]) / 2.
1476 if trimming:
1477 for key in cross_bins:
1478 weights = self._apply_mask(
1479 weights, mask=trimmed_mask[key],
1480 obj_name='weights', trimming=True)
1481 else:
1482 for key in cross_bins:
1483 weights = self._apply_mask(
1484 weights, mask=mask[key], obj_name='weights',
1485 trimming=False)
1486 for stat in stats:
1487 if 'Cross' in stat:
1488 stat_out = plugins[stat](
1489 kappa, weights, self.ctx)
1490 outs = self._stack_stats(
1491 stat_out, pol, stat, statistics, outs,
1492 False, name, output_dir,
1493 defined_parameters, undefined_parameters)
1495 if norm_mode:
1496 kappa = self._convert_alm_to_kappa(
1497 alm[bin], scale)
1498 if trimming:
1499 kappa = self._apply_mask(
1500 kappa, mask=trimmed_mask[bin], obj_name='kappa',
1501 trimming=True)
1502 else:
1503 kappa = self._apply_mask(
1504 kappa, mask=mask[bin], obj_name='kappa',
1505 trimming=False)
1506 for stat in stats:
1507 if ('Full' in stat) | ('Cross' in stat):
1508 continue
1509 else:
1510 stat_out = plugins[stat](
1511 kappa, self.weights[bin], self.ctx)
1512 outs = self._stack_stats(
1513 stat_out, pol, stat, statistics, outs,
1514 False, name, output_dir,
1515 defined_parameters, undefined_parameters)
1517 if write_to_file:
1518 # saving all scales at once
1519 for statistic in stats:
1520 output_path = paths.create_path(
1521 name,
1522 output_dir, {
1523 **defined_parameters,
1524 **{'mode': pol,
1525 'stat': statistic}},
1526 undefined_parameters,
1527 suffix='.npy')
1528 if os.path.exists(output_path):
1529 out_file = np.load(output_path)
1530 out_file = np.vstack(
1531 (out_file, outs[pol][statistic]))
1532 else:
1533 out_file = outs[pol][statistic]
1534 np.save(output_path, out_file)
1535 return {}
1536 return outs