DecorrelateALKernelMapper

class lsst.ip.diffim.DecorrelateALKernelMapper(*args, **kwargs)

    Bases: lsst.ip.diffim.DecorrelateALKernelTask, lsst.ip.diffim.ImageMapper

    Task to be used as an ImageMapper for performing A&L decorrelation on subimages
    on a grid across an A&L difference image.

    This task subclasses DecorrelateALKernelTask in order to implement all of that
    task's configuration parameters, as well as its run method.

Methods Summary
    calculateVariancePlane(vplane1, vplane2, ...)
        Full propagation of the variance planes of the original exposures.
    computeCommonShape(*shapes)
        Calculate the common shape for FFT operations.
    computeCorrectedDiffimPsf(corrft, psfOld)
        Compute the (decorrelated) difference image's new PSF.
    computeCorrectedImage(corrft, imgOld)
        Compute the decorrelated difference image.
    computeDiffimCorrection(kappa, svar, tvar)
        Compute the Lupton decorrelation post-convolution kernel for decorrelating an
        image difference, based on the PSF-matching kernel.
    computeScoreCorrection(kappa, svar, tvar, ...)
        Compute the correction kernel for a score image.
    computeVarianceMean(exposure)
    emptyMetadata()
        Empty (clear) the metadata for this Task and all sub-Tasks.
    estimateVariancePlane(vplane1, vplane2, ...)
        Estimate the variance planes.
    getAllSchemaCatalogs()
        Get schema catalogs for all tasks in the hierarchy, combining the results into
        a single dict.
    getFullMetadata()
        Get metadata for all tasks.
    getFullName()
        Get the task name as a hierarchical name including parent task names.
    getName()
        Get the name of the task.
    getSchemaCatalogs()
        Get the schemas generated by this task.
    getTaskDict()
        Get a dictionary of all tasks as a shallow copy.
    makeField(doc)
        Make a lsst.pex.config.ConfigurableField for this task.
    makeSubtask(name, **keyArgs)
        Create a subtask as a new instance as the name attribute of this task.
    padCenterOriginArray(A, newShape[, useInverse])
        Zero pad an image where the origin is at the center and replace the origin to
        the corner as required by the periodic input of FFT.
    run(subExposure, expandedSubExposure, ...[, ...])
        Perform decorrelation operation on subExposure, using expandedSubExposure to
        allow for invalid edge pixels arising from convolutions.
    timer(name, logLevel)
        Context manager to log performance data for an arbitrary block of code.

Methods Documentation
calculateVariancePlane(vplane1, vplane2, varMean1, varMean2, c1ft, c2ft)

    Full propagation of the variance planes of the original exposures.

    The original variance planes of independent pixels are convolved with the image
    space square of the overall kernels.

    Parameters:
        vplane1, vplane2 : numpy.ndarray of float
            Variance planes of the original (before pre-convolution or matching) exposures.
        varMean1, varMean2 : float
            Replacement average values for non-finite vplane1 and vplane2 values, respectively.
        c1ft, c2ft : numpy.ndarray of complex
            The overall convolution, including the matching and the afterburner, in
            frequency space. The result of either computeScoreCorrection or
            computeDiffimCorrection.

    Returns:
        vplaneD : numpy.ndarray of float
            The variance plane of the difference/score image.

    Notes:
        See DMTN-179 Section 5 about the variance plane calculations.
        Infs and NaNs are allowed and kept in the returned array.
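    As an illustration only, here is a minimal numpy sketch of the propagation idea
    described above: each variance plane is convolved with the image-space square of
    its overall kernel and the two contributions are summed. The helper name is
    hypothetical, it assumes for simplicity that the planes and the frequency-space
    kernels share the same shape, and it omits details of the actual implementation
    (e.g. keeping non-finite pixels in the returned array):

        import numpy as np

        # Illustrative sketch only, not the LSST implementation.  Assumes vplane1,
        # vplane2, c1ft and c2ft all share the same (FFT-sized) shape.
        def propagate_variance_sketch(vplane1, vplane2, varMean1, varMean2, c1ft, c2ft):
            # Use the supplied averages in place of non-finite pixels for the convolution.
            v1 = np.where(np.isfinite(vplane1), vplane1, varMean1)
            v2 = np.where(np.isfinite(vplane2), vplane2, varMean2)
            # Image-space squares of the two overall kernels.
            k1sq = np.real(np.fft.ifft2(c1ft)) ** 2
            k2sq = np.real(np.fft.ifft2(c2ft)) ** 2
            # Convolve each variance plane with its squared kernel (periodic FFT
            # convolution) and sum the two contributions.
            conv1 = np.real(np.fft.ifft2(np.fft.fft2(v1) * np.fft.fft2(k1sq)))
            conv2 = np.real(np.fft.ifft2(np.fft.fft2(v2) * np.fft.fft2(k2sq)))
            return conv1 + conv2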
computeCommonShape(*shapes)

    Calculate the common shape for FFT operations. Sets self.freqSpaceShape internally.

    Parameters:
        *shapes : tuple of int
            Shapes of the arrays that take part in the FFT operations.

    Returns:
        None.

    Notes:
        For each dimension, gets the smallest even number greater than or equal to
        N1+N2-1, where N1 and N2 are the two largest values. If only one shape is
        given, rounds each dimension value up to an even number.
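    A minimal sketch of this rule (hypothetical helper name, not the LSST implementation):

        import numpy as np

        # Per dimension, take the two largest extents N1, N2 among the given shapes
        # and round N1 + N2 - 1 up to the nearest even number.
        def common_fft_shape_sketch(*shapes):
            arr = np.array(shapes)                   # rows: shapes, columns: dimensions
            if len(shapes) == 1:
                commonShape = arr[0].copy()
            else:
                arr.sort(axis=0)                     # sort each dimension's extents
                commonShape = arr[-1] + arr[-2] - 1  # N1 + N2 - 1 per dimension
            commonShape[commonShape % 2 != 0] += 1   # round odd values up to even
            return tuple(commonShape)

        # Example: common_fft_shape_sketch((41, 41), (200, 180)) -> (240, 220)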
computeCorrectedDiffimPsf(corrft, psfOld)

    Compute the (decorrelated) difference image's new PSF.

    Parameters:
        corrft : numpy.ndarray
            The frequency space representation of the correction calculated by
            computeCorrection. Shape must be self.freqSpaceShape.
        psfOld : numpy.ndarray
            The PSF of the difference image to be corrected.

    Returns:
        psfNew : numpy.ndarray
            The corrected PSF, same shape as psfOld, sum normalized to 1.

    Notes:
        There is no algorithmic guarantee that the corrected PSF can be meaningfully
        fitted into the same size as the original one.
computeCorrectedImage(corrft, imgOld)

    Compute the decorrelated difference image.

    Parameters:
        corrft : numpy.ndarray
            The frequency space representation of the correction calculated by
            computeCorrection. Shape must be self.freqSpaceShape.
        imgOld : numpy.ndarray
            The difference image to be corrected.

    Returns:
        imgNew : numpy.ndarray
            The corrected image, same size as the input.
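    Conceptually, the correction is applied as a multiplication in frequency space.
    The sketch below is an illustration only (hypothetical helper name); it assumes
    the image has already been padded to corrft.shape and skips the cropping and
    non-finite-pixel handling of the real method:

        import numpy as np

        def apply_correction_sketch(corrft, imgOld):
            # Multiply the image's FFT by the correction and transform back.
            imgft = np.fft.fft2(imgOld)
            imgNew = np.real(np.fft.ifft2(imgft * corrft))
            return imgNew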
computeDiffimCorrection(kappa, svar, tvar)

    Compute the Lupton decorrelation post-convolution kernel for decorrelating an
    image difference, based on the PSF-matching kernel.

    Parameters:
        kappa : numpy.ndarray of float
            A 2-D matching kernel array derived from Alard & Lupton PSF matching.
        svar : float
            Average variance of the science image used for PSF matching. Must be > 0.
        tvar : float
            Average variance of the template (matched) image used for PSF matching.
            Must be > 0.

    Returns:
        corrft : numpy.ndarray of float
            The frequency space representation of the correction. The array is real
            (dtype float). Shape is self.freqSpaceShape.
        cnft, crft : numpy.ndarray of complex
            The overall convolution (pre-convolution, PSF matching, noise correction)
            kernels for the science and template images, respectively, for the variance
            plane calculations. These are intermediate results in frequency space.

    Notes:
        The maximum correction factor converges to sqrt(tvar/svar) towards high
        frequencies. This should be a plausible value.
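    For orientation only, here is a highly simplified sketch of a Lupton-style
    decorrelation correction in frequency space. The helper name is hypothetical,
    and the actual method's normalization, kernel padding, and returned cnft/crft
    factors are not reproduced here:

        import numpy as np

        # Idealized, white-noise sketch of the Lupton decorrelation idea (illustration
        # only).  The difference image D = S - (kappa * T) has noise power
        # svar + tvar*|K(f)|**2 at each frequency; rescaling toward a flat spectrum
        # decorrelates the noise.
        def diffim_correction_sketch(kappa, svar, tvar, freqSpaceShape):
            kft = np.fft.fft2(kappa, s=freqSpaceShape)  # matching kernel in frequency space
            corrft = np.sqrt((svar + tvar) / (svar + tvar * np.abs(kft) ** 2))
            return corrft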
computeScoreCorrection(kappa, svar, tvar, preConvArr)

    Compute the correction kernel for a score image.

    Parameters:
        kappa : numpy.ndarray
            A 2-D matching kernel array derived from Alard & Lupton PSF matching.
        svar : float
            Average variance of the science image used for PSF matching (before
            pre-convolution).
        tvar : float
            Average variance of the template (matched) image used for PSF matching.
        preConvArr : numpy.ndarray
            The pre-convolution kernel of the science image. It should be the PSF of
            the science image or an approximation of it. It must be normalized to sum
            to 1.

    Returns:
        corrft : numpy.ndarray of float
            The frequency space representation of the correction. The array is real
            (dtype float). Shape is self.freqSpaceShape.
        cnft, crft : numpy.ndarray of complex
            The overall convolution (pre-convolution, PSF matching, noise correction)
            kernels for the science and template images, respectively, for the variance
            plane calculations. These are intermediate results in frequency space.

    Notes:
        To be precise, the science image should be _correlated_ by preConvArray, but
        this does not matter for this calculation. cnft and crft contain the scaling
        factor as well.
computeVarianceMean(exposure)
emptyMetadata() → None

    Empty (clear) the metadata for this Task and all sub-Tasks.
static estimateVariancePlane(vplane1, vplane2, c1ft, c2ft)

    Estimate the variance planes.

    The estimation assumes that around each pixel the surrounding pixels' sigmas
    within the convolution kernel are the same.

    Parameters:
        vplane1, vplane2 : numpy.ndarray of float
            Variance planes of the original (before pre-convolution or matching) exposures.
        c1ft, c2ft : numpy.ndarray of complex
            The overall convolution, including the matching and the afterburner, in
            frequency space. The result of either computeScoreCorrection or
            computeDiffimCorrection.

    Returns:
        vplaneD : numpy.ndarray of float
            The estimated variance plane of the difference/score image as a weighted
            sum of the input variances.

    Notes:
        See DMTN-179 Section 5 about the variance plane calculations.
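    A minimal sketch of this estimate (hypothetical helper name, not the LSST
    implementation): when the variance is roughly constant under the kernel footprint,
    convolving it with the image-space square of the kernel reduces to multiplying by
    the sum of the squared kernel, which by Parseval's theorem equals the mean of
    |cft|**2 over the frequency grid:

        import numpy as np

        def estimate_variance_plane_sketch(vplane1, vplane2, c1ft, c2ft):
            # Sum of each squared image-space kernel, via Parseval's theorem.
            w1 = np.mean(np.abs(c1ft) ** 2)
            w2 = np.mean(np.abs(c2ft) ** 2)
            # Weighted sum of the input variance planes.
            return vplane1 * w1 + vplane2 * w2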
getAllSchemaCatalogs() → Dict[str, Any]

    Get schema catalogs for all tasks in the hierarchy, combining the results into a
    single dict.

    Returns:
        schemacatalogs : dict
            Keys are butler dataset type, values are an empty catalog (an instance of
            the appropriate lsst.afw.table Catalog type) for all tasks in the hierarchy,
            from the top-level task down through all subtasks.

    Notes:
        This method may be called on any task in the hierarchy; it will return the same
        answer, regardless.
        The default implementation should always suffice. If your subtask uses schemas,
        then override Task.getSchemaCatalogs, not this method.
getFullMetadata() → lsst.pipe.base._task_metadata.TaskMetadata

    Get metadata for all tasks.

    Returns:
        metadata : TaskMetadata
            The keys are the full task names. Values are metadata for the top-level
            task and all subtasks, sub-subtasks, etc.

    Notes:
        The returned metadata includes timing information (if @timer.timeMethod is
        used) and any metadata set by the task. The name of each item consists of the
        full task name with "." replaced by ":", followed by "." and the name of the
        item, e.g.:

            topLevelTaskName:subtaskName:subsubtaskName.itemName

        Using ":" in the full task name disambiguates the rare situation that a task
        has a subtask and a metadata item with the same name.
getFullName() → str

    Get the task name as a hierarchical name including parent task names.

    Returns:
        fullName : str
            The full name consists of the name of the parent task and each subtask,
            separated by periods. For example:
            - The full name of top-level task "top" is simply "top".
            - The full name of subtask "sub" of top-level task "top" is "top.sub".
            - The full name of subtask "sub2" of subtask "sub" of top-level task "top"
              is "top.sub.sub2".
getSchemaCatalogs() → Dict[str, Any]

    Get the schemas generated by this task.

    Returns:
        schemaCatalogs : dict
            Keys are butler dataset type, values are an empty catalog (an instance of
            the appropriate lsst.afw.table Catalog type) for this task.

    See also:
        Task.getAllSchemaCatalogs

    Notes:
        Warning: Subclasses that use schemas must override this method. The default
        implementation returns an empty dict.
        This method may be called at any time after the Task is constructed, which
        means that all task schemas should be computed at construction time, not when
        data is actually processed. This reflects the philosophy that the schema
        should not depend on the data.
        Returning catalogs rather than just schemas allows us to save e.g. slots for
        SourceCatalog as well.
getTaskDict() → Dict[str, weakref.ReferenceType[Task]]

    Get a dictionary of all tasks as a shallow copy.

    Returns:
        taskDict : dict
            Dictionary containing full task name: task object, for the top-level task
            and all subtasks, sub-subtasks, etc.
classmethod makeField(doc: str) → lsst.pex.config.configurableField.ConfigurableField

    Make a lsst.pex.config.ConfigurableField for this task.

    Parameters:
        doc : str
            Help text for the field.

    Returns:
        configurableField : lsst.pex.config.ConfigurableField
            A ConfigurableField for this task.

    Examples:
        Provides a convenient way to specify this task as a subtask of another task.
        Here is an example of use:

            class OtherTaskConfig(lsst.pex.config.Config):
                aSubtask = ATaskClass.makeField("brief description of task")
makeSubtask(name: str, **keyArgs) → None

    Create a subtask as a new instance as the name attribute of this task.

    Parameters:
        name : str
            Brief name of the subtask.
        keyArgs
            Extra keyword arguments used to construct the task. The following arguments
            are automatically provided and cannot be overridden: "config" and
            "parentTask".

    Notes:
        The subtask must be defined by Task.config.name, an instance of
        ConfigurableField or RegistryField.
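    To illustrate the usual pattern (a hedged sketch; the parent task and config names
    here are hypothetical and not part of lsst.ip.diffim): a parent task declares the
    subtask with a ConfigurableField in its config class and instantiates it with
    makeSubtask in its constructor.

        import lsst.pex.config as pexConfig
        import lsst.pipe.base as pipeBase
        from lsst.ip.diffim import DecorrelateALKernelMapper

        class ExampleParentConfig(pexConfig.Config):
            # Hypothetical parent config; makeField supplies the ConfigurableField.
            decorrelate = DecorrelateALKernelMapper.makeField("A&L decorrelation mapper subtask")

        class ExampleParentTask(pipeBase.Task):
            ConfigClass = ExampleParentConfig
            _DefaultName = "exampleParent"

            def __init__(self, **kwargs):
                super().__init__(**kwargs)
                # Creates self.decorrelate from self.config.decorrelate; "config" and
                # "parentTask" are supplied automatically.
                self.makeSubtask("decorrelate")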
static padCenterOriginArray(A, newShape: tuple, useInverse=False)

    Zero pad an image whose origin is at the center and relocate the origin to the
    corner, as required by the periodic input of the FFT. Also implements the inverse
    operation: crop the padding and re-center the data.

    Parameters:
        A : numpy.ndarray
            An array to copy from.
        newShape : tuple of int
            The dimensions of the resulting array. For padding, the resulting array
            must be larger than A in each dimension. For the inverse operation, this
            must be the original, pre-padding size of the array.
        useInverse : bool, optional
            Selects the forward (add padding) operation (False) or its inverse (crop
            padding) operation (True).

    Returns:
        R : numpy.ndarray
            The padded or unpadded array with shape newShape and the same dtype as A.

    Notes:
        For odd dimensions, the splitting is rounded to put the center pixel into the
        new corner origin (0, 0). This is to be consistent, e.g., for a Dirac delta
        kernel that is originally located at the center pixel.
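    A minimal sketch of the forward (padding) operation only (hypothetical helper
    name, not the LSST implementation, which also handles the inverse operation and
    the exact rounding for odd sizes): zero pad a kernel whose origin is at the array
    center, then roll it so the center pixel lands on the (0, 0) corner, as required
    by the periodic convention of the FFT.

        import numpy as np

        def pad_center_origin_sketch(A, newShape):
            R = np.zeros(newShape, dtype=A.dtype)
            R[:A.shape[0], :A.shape[1]] = A
            # Shift so that A's center pixel ends up at the corner origin (0, 0).
            return np.roll(R, (-(A.shape[0] // 2), -(A.shape[1] // 2)), axis=(0, 1))

        # Example: a 3x3 delta kernel centered at (1, 1), padded to 8x8,
        # ends up with its peak at (0, 0).
        delta = np.zeros((3, 3)); delta[1, 1] = 1.0
        assert pad_center_origin_sketch(delta, (8, 8))[0, 0] == 1.0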
run(subExposure, expandedSubExposure, fullBBox, template, science, alTaskResult=None, psfMatchingKernel=None, preConvKernel=None, **kwargs)

    Perform the decorrelation operation on subExposure, using expandedSubExposure to
    allow for invalid edge pixels arising from convolutions.

    This method performs A&L decorrelation on subExposure using local measures for
    image variances and PSF. subExposure is a sub-exposure of the non-decorrelated
    A&L diffim. It also requires the corresponding sub-exposures of the template
    (template) and science (science) exposures.

    Parameters:
        subExposure : lsst.afw.image.Exposure
            The sub-exposure of the diffim.
        expandedSubExposure : lsst.afw.image.Exposure
            The expanded sub-exposure upon which to operate.
        fullBBox : lsst.geom.Box2I
            The bounding box of the original exposure.
        template : lsst.afw.image.Exposure
            The corresponding sub-exposure of the template exposure.
        science : lsst.afw.image.Exposure
            The corresponding sub-exposure of the science exposure.
        alTaskResult : lsst.pipe.base.Struct
            The result of A&L image differencing on science and template, importantly
            containing the resulting psfMatchingKernel. May be None only if
            psfMatchingKernel is not None.
        psfMatchingKernel
            Alternative parameter for passing the A&L psfMatchingKernel directly.
        preConvKernel
            If not None, pre-filtering was applied to the science exposure, and this
            is the pre-convolution kernel.
        kwargs
            Additional keyword arguments propagated from ImageMapReduceTask.run.

    Returns:
        A pipeBase.Struct containing:
            - subExposure : the result of the subExposure processing.
            - decorrelationKernel : the decorrelation kernel, currently not used.

    Notes:
        This run method accepts parameters identical to those of ImageMapper.run,
        since it is called from the ImageMapperTask. See that class for more
        information.
timer(name: str, logLevel: int = 10) → Iterator[None]

    Context manager to log performance data for an arbitrary block of code.

    Parameters:
        name : str
            Name of the code block being timed.
        logLevel : int, optional
            Logging level at which to log the performance data.

    See also:
        timer.logInfo

    Examples:
        Creating a timer context:

            with self.timer("someCodeToTime"):
                pass  # code to time