ModelPsfMatchTask

class lsst.ip.diffim.ModelPsfMatchTask(*args, **kwargs)
Bases: lsst.ip.diffim.PsfMatchTask
Matching of two model Psfs, and application of the Psf-matching kernel to an input Exposure
Notes
This Task differs from ImagePsfMatchTask in that it matches two Psf _models_, by realizing them in an Exposure-sized SpatialCellSet and then inserting each Psf-image pair into KernelCandidates. Because none of the pairs of sources that are to be matched should be invalid, all sigma clipping is turned off in ModelPsfMatchConfig. And because there is no tracked _variance_ in the Psf images, the debugging and logging QA info should be interpreted with caution.
One item of note is that the sizes of Psf models are fixed (e.g. a model may be defined on a 21x21 pixel grid). When the Psf-matching kernel is being solved for, the Psf "image" is convolved with each kernel basis function, leading to a loss of information around the borders. This pixel loss will be problematic for the numerical stability of the kernel solution if the size of the convolution kernel (set by ModelPsfMatchConfig.kernelSize) is much bigger than psfSize//2. Thus the sizes of Psf-model matching kernels are typically smaller than their image-matching counterparts. If the size of the kernel is too small, the convolved stars will look "boxy"; if the kernel is too large, the kernel solution will be "noisy". This is a trade-off that needs careful attention for a given dataset.
The primary use case for this Task is in matching an Exposure to a constant-across-the-sky Psf model for the purposes of image coaddition. It is important to note that in the code, the "template" Psf is the Psf that the science image gets matched to. In this sense the order of template and science image is reversed, compared to ImagePsfMatchTask, which operates on the template image. A hedged sketch of the kernel-size consideration follows.
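To make the kernel-size trade-off above concrete, here is a minimal sketch (not part of the Task itself) that caps the matching-kernel size at roughly half the width of the Psf model. It assumes the kernelSize and scaleByFwhm parameters are reachable via config.kernel.active, as in the example script further down; the helper name chooseKernelSize is hypothetical.

def chooseKernelSize(referencePsf, config):
    # Width of the realized Psf model image, e.g. 21 for a 21x21 model
    psfWidth = referencePsf.computeImage().getWidth()
    # Beyond roughly psfWidth//2, border losses destabilize the kernel solution
    maxKernelSize = psfWidth // 2
    if maxKernelSize % 2 == 0:
        maxKernelSize -= 1  # matching kernels are conventionally odd-sized
    config.kernel.active.scaleByFwhm = False
    config.kernel.active.kernelSize = min(config.kernel.active.kernelSize, maxKernelSize)
    return config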
Debug variables
The lsst.pipe.base.CmdLineTask command line task interface supports a flag -d/--debug to import debug.py from your PYTHONPATH. The relevant contents of debug.py for this Task include:

import sys
import lsstDebug

def DebugInfo(name):
    di = lsstDebug.getInfo(name)
    if name == "lsst.ip.diffim.psfMatch":
        di.display = True                  # global
        di.maskTransparency = 80           # mask transparency
        di.displayCandidates = True        # show all the candidates and residuals
        di.displayKernelBasis = False      # show kernel basis functions
        di.displayKernelMosaic = True      # show kernel realized across the image
        di.plotKernelSpatialModel = False  # show coefficients of spatial model
        di.showBadCandidates = True        # show the bad candidates (red) along with good (green)
    elif name == "lsst.ip.diffim.modelPsfMatch":
        di.display = True                  # global
        di.maskTransparency = 30           # mask transparency
        di.displaySpatialCells = True      # show spatial cells before the fit
    return di

lsstDebug.Info = DebugInfo
lsstDebug.frame = 1
Note that if you want additional logging info, you may add to your scripts:
import lsst.log.utils as logUtils
logUtils.traceSetAt("ip.diffim", 4)
Examples
A complete example of using ModelPsfMatchTask
This code is modelPsfMatchTask.py in the examples directory, and can be run as e.g.
examples/modelPsfMatchTask.py
examples/modelPsfMatchTask.py --debug
examples/modelPsfMatchTask.py --debug --template /path/to/templateExp.fits --science /path/to/scienceExp.fits
Create a subclass of ModelPsfMatchTask that accepts two exposures. Note that the “template” exposure contains the Psf that will get matched to, and the “science” exposure is the one that will be convolved:
class MyModelPsfMatchTask(ModelPsfMatchTask):
    def __init__(self, *args, **kwargs):
        ModelPsfMatchTask.__init__(self, *args, **kwargs)

    def run(self, templateExp, scienceExp):
        return ModelPsfMatchTask.run(self, scienceExp, templateExp.getPsf())
And allow the user the freedom to either run the script in default mode, or point to their own images on disk. Note that these images must be readable as an lsst.afw.image.Exposure:
if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser(description="Demonstrate the use of ModelPsfMatchTask")
    parser.add_argument("--debug", "-d", action="store_true", help="Load debug.py?", default=False)
    parser.add_argument("--template", "-t", help="Template Exposure to use", default=None)
    parser.add_argument("--science", "-s", help="Science Exposure to use", default=None)
    args = parser.parse_args()
We have enabled some minor display debugging in this script via the --debug option. However, if you have an lsstDebug debug.py in your PYTHONPATH you will get additional debugging displays. The following block checks for such a file:
if args.debug:
    try:
        import debug
        # Since we are displaying 2 images here, set the starting frame number for the LSST debug display
        debug.lsstDebug.frame = 3
    except ImportError as e:
        print(e, file=sys.stderr)
Finally, we call a run method that we define below. First set up a Config and modify some of the parameters. In particular we don't want to "grow" the sizes of the kernel or KernelCandidates, since we are operating with fixed-size images (i.e. the size of the input Psf models).
def run(args):
    #
    # Create the Config and use sum of Gaussian basis
    #
    config = ModelPsfMatchTask.ConfigClass()
    config.kernel.active.scaleByFwhm = False
Make sure the images (if any) that were sent to the script exist on disk and are readable. If no images are sent, make up some fake data for the sake of this example script (have a look at the code if you want more details on generateFakeData):
    # Run the requested method of the Task
    if args.template is not None and args.science is not None:
        if not os.path.isfile(args.template):
            raise Exception("Template image %s does not exist" % (args.template))
        if not os.path.isfile(args.science):
            raise Exception("Science image %s does not exist" % (args.science))
        try:
            templateExp = afwImage.ExposureF(args.template)
        except Exception as e:
            raise Exception("Cannot read template image %s" % (args.template))
        try:
            scienceExp = afwImage.ExposureF(args.science)
        except Exception as e:
            raise Exception("Cannot read science image %s" % (args.science))
    else:
        templateExp, scienceExp = generateFakeData()
        config.kernel.active.sizeCellX = 128
        config.kernel.active.sizeCellY = 128
    if args.debug:
        afwDisplay.Display(frame=1).mtv(templateExp, title="Example script: Input Template")
        afwDisplay.Display(frame=2).mtv(scienceExp, title="Example script: Input Science Image")
Create and run the Task:
    # Create the Task
    psfMatchTask = MyModelPsfMatchTask(config=config)

    # Run the Task
    result = psfMatchTask.run(templateExp, scienceExp)
And finally provide optional debugging display of the Psf-matched (via the Psf models) science image:
    if args.debug:
        # See if the LSST debug has incremented the frame number; if not start with frame 3
        try:
            frame = debug.lsstDebug.frame + 1
        except Exception:
            frame = 3
        afwDisplay.Display(frame=frame).mtv(result.psfMatchedExposure,
                                            title="Example script: Matched Science Image")
Methods Summary
emptyMetadata() - Empty (clear) the metadata for this Task and all sub-Tasks.
getAllSchemaCatalogs() - Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.
getFullMetadata() - Get metadata for all tasks.
getFullName() - Get the task name as a hierarchical name including parent task names.
getName() - Get the name of the task.
getSchemaCatalogs() - Get the schemas generated by this task.
getTaskDict() - Get a dictionary of all tasks as a shallow copy.
makeField(doc) - Make a lsst.pex.config.ConfigurableField for this task.
makeSubtask(name, **keyArgs) - Create a subtask as a new instance as the name attribute of this task.
run(exposure, referencePsfModel[, kernelSum]) - Psf-match an exposure to a model Psf.
timer(name[, logLevel]) - Context manager to log performance data for an arbitrary block of code.

Methods Documentation
emptyMetadata()
Empty (clear) the metadata for this Task and all sub-Tasks.
getAllSchemaCatalogs()
Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns:
schemaCatalogs : dict
    Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down through all subtasks.

Notes
This method may be called on any task in the hierarchy; it will return the same answer, regardless.
The default implementation should always suffice. If your subtask uses schemas then override Task.getSchemaCatalogs, not this method.
getFullMetadata()
Get metadata for all tasks.

Returns:
metadata : lsst.daf.base.PropertySet
    The PropertySet keys are the full task name. Values are metadata for the top-level task and all subtasks, sub-subtasks, etc.

Notes
The returned metadata includes timing information (if @timer.timeMethod is used) and any metadata set by the task. The name of each item consists of the full task name with "." replaced by ":", followed by "." and the name of the item, e.g.:

topLevelTaskName:subtaskName:subsubtaskName.itemName

Using ":" in the full task name disambiguates the rare situation that a task has a subtask and a metadata item with the same name.
getFullName()
Get the task name as a hierarchical name including parent task names.

Returns:
fullName : str
    The full name consists of the name of the parent task and each subtask separated by periods. For example:
    - The full name of top-level task "top" is simply "top".
    - The full name of subtask "sub" of top-level task "top" is "top.sub".
    - The full name of subtask "sub2" of subtask "sub" of top-level task "top" is "top.sub.sub2".
getSchemaCatalogs()
Get the schemas generated by this task.

Returns:
schemaCatalogs : dict
    Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for this task.

See also
Task.getAllSchemaCatalogs

Notes
Warning: Subclasses that use schemas must override this method. The default implementation returns an empty dict.
This method may be called at any time after the Task is constructed, which means that all task schemas should be computed at construction time, not when data is actually processed. This reflects the philosophy that the schema should not depend on the data.
Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.
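The following is only a sketch of what such an override might look like in a schema-producing subclass; ExampleSchemaTask and the dataset type name "src_example" are hypothetical and not defined by this package.

import lsst.afw.table as afwTable
import lsst.pex.config as pexConfig
import lsst.pipe.base as pipeBase

class ExampleSchemaTask(pipeBase.Task):
    ConfigClass = pexConfig.Config
    _DefaultName = "exampleSchema"

    def __init__(self, *args, **kwargs):
        pipeBase.Task.__init__(self, *args, **kwargs)
        # Build the schema at construction time, per the note above
        self.schema = afwTable.SourceTable.makeMinimalSchema()

    def getSchemaCatalogs(self):
        # Keys are butler dataset types; values are empty catalogs carrying the schema
        return {"src_example": afwTable.SourceCatalog(self.schema)}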
getTaskDict()
Get a dictionary of all tasks as a shallow copy.

Returns:
taskDict : dict
    Dictionary containing full task name: task object for the top-level task and all subtasks, sub-subtasks, etc.
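A hedged usage sketch, where "topLevelTask" stands for any constructed Task instance:

for fullName, subTask in topLevelTask.getTaskDict().items():
    print(fullName, type(subTask).__name__)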
classmethod makeField(doc)
Make a lsst.pex.config.ConfigurableField for this task.

Parameters:
doc : str
    Help text for the field.

Returns:
configurableField : lsst.pex.config.ConfigurableField
    A ConfigurableField for this task.

Examples
Provides a convenient way to specify this task is a subtask of another task. Here is an example of use:

class OtherTaskConfig(lsst.pex.config.Config):
    aSubtask = ATaskClass.makeField("a brief description of what this task does")
makeSubtask(name, **keyArgs)
Create a subtask as a new instance as the name attribute of this task.

Parameters:
name : str
    Brief name of the subtask.
keyArgs
    Extra keyword arguments used to construct the task. The following arguments are automatically provided and cannot be overridden: "config" and "parentTask".

Notes
The subtask must be defined by Task.config.name, an instance of pex_config ConfigurableField or RegistryField.
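Here is a hedged sketch of how a parent task might declare this Task as a subtask and construct it; ParentConfig and ParentTask are hypothetical names used only for illustration.

import lsst.pex.config as pexConfig
import lsst.pipe.base as pipeBase
from lsst.ip.diffim import ModelPsfMatchTask

class ParentConfig(pexConfig.Config):
    # The field name ("psfMatch") must match the name passed to makeSubtask
    psfMatch = ModelPsfMatchTask.makeField("Psf-matching subtask")

class ParentTask(pipeBase.Task):
    ConfigClass = ParentConfig
    _DefaultName = "parent"

    def __init__(self, *args, **kwargs):
        pipeBase.Task.__init__(self, *args, **kwargs)
        # Creates self.psfMatch from the self.config.psfMatch ConfigurableField
        self.makeSubtask("psfMatch")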
run(exposure, referencePsfModel, kernelSum=1.0)
Psf-match an exposure to a model Psf.

Parameters:
exposure : lsst.afw.image.Exposure
    Exposure to Psf-match to the reference Psf model; it must return a valid PSF model via exposure.getPsf().
referencePsfModel : lsst.afw.detection.Psf
    The Psf model to match to.
kernelSum : float, optional
    A multiplicative factor to apply to the kernel sum (default=1.0).

Returns:
result : struct
    - psfMatchedExposure : the Psf-matched Exposure. This has the same parent bbox, Wcs, Calib and Filter as the input Exposure but no Psf. In theory the Psf should equal referencePsfModel, but the match is likely not exact.
    - psfMatchingKernel : the spatially varying Psf-matching kernel.
    - kernelCellSet : SpatialCellSet used to solve for the Psf-matching kernel.
    - referencePsfModel : validated and/or modified reference model used.

Raises:
RuntimeError
    If the Exposure does not contain a Psf model.
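A hedged usage sketch, assuming "scienceExp" is an lsst.afw.image.Exposure that already carries a valid Psf; the reference model here is a simple circular Gaussian:

import lsst.afw.detection as afwDetection
from lsst.ip.diffim import ModelPsfMatchTask

config = ModelPsfMatchTask.ConfigClass()
psfMatchTask = ModelPsfMatchTask(config=config)

referencePsf = afwDetection.GaussianPsf(21, 21, 2.0)  # 21x21 pixels, sigma = 2.0
result = psfMatchTask.run(scienceExp, referencePsf)
matchedExp = result.psfMatchedExposure  # same bbox and Wcs as scienceExp, no attached Psf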
timer(name, logLevel=10000)
Context manager to log performance data for an arbitrary block of code.

Parameters:
name : str
    Name of code being timed; data will be logged using item name: "Start" and "End".
logLevel
    A lsst.log level constant.

See also
timer.logInfo

Examples
Creating a timer context:

with self.timer("someCodeToTime"):
    pass  # code to time