AssembleCellCoaddTask#
- class lsst.drp.tasks.assemble_cell_coadd.AssembleCellCoaddTask(*args, **kwargs)#
Bases: PipelineTask

Assemble a cell-based coadded image from a set of warps.
This task reads in the warps one at a time and accumulates each warp into every cell that it completely overlaps. This is the optimal I/O pattern, but it also implies that it is not possible to build only one or a few cells in isolation.
Each cell coadd is guaranteed to have a well-defined PSF. This is achieved by 1) excluding warps that only partially overlap a cell from that cell's coadd; 2) interpolating over bad pixels in the warps rather than excluding them; 3) computing the coadd as a weighted mean of the warps without clipping; and 4) computing the coadd PSF as the weighted mean of the warp PSFs with the same weights.
The cells are (and must be) defined in the skymap, and cannot be configured or redefined here. The cells are assumed to be small enough that the PSF is spatially constant within a cell.
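As a rough illustration of the scheme described above, the sketch below computes a per-cell weighted-mean coadd and the matching PSF. The function name, the use of the inverse mean variance as the weight, and the assumption that each warp has already been interpolated and cropped to the cell are illustrative assumptions, not the task's actual implementation.

    import numpy as np

    def coadd_cell(warp_images, warp_variances, warp_psfs):
        # One weight per warp; inverse mean variance is an assumed choice here.
        weights = [1.0 / np.mean(var) for var in warp_variances]
        total = sum(weights)
        # Weighted mean of the warp images, with no clipping of outliers.
        image = sum(w * img for w, img in zip(weights, warp_images)) / total
        # The coadd PSF uses the same weights, so it stays well defined.
        psf = sum(w * p for w, p in zip(weights, warp_psfs)) / total
        return image, psf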
Raises#
- NoWorkFound
Raised if no input warps are provided, or no cells could be populated.
- RuntimeError
Raised if the skymap is not cell-based.
Notes#
This is not yet part of the standard DRP pipeline. As such, the Task, and especially its Config and Connections, are experimental and subject to change at any time without a formal RFC or standard deprecation procedures until it is included in the DRP pipeline.
Methods Summary
run(*, inputs, skyInfo[, visitSummaryList])
    Run task algorithm on in-memory data.
runQuantum(butlerQC, inputRefs, outputRefs)
    Do butler IO and transform to provide in-memory objects for the task's run method.
Methods Documentation
- run(*, inputs: dict[DataCoordinate, WarpInputs], skyInfo, visitSummaryList: list | None = None)#
Run task algorithm on in-memory data.
This method should be implemented in a subclass. This method will receive keyword-only arguments whose names will be the same as names of connection fields describing input dataset types. Argument values will be data objects retrieved from the data butler. If a dataset type is configured with multiple field set to True then the argument value will be a list of objects, otherwise it will be a single object.

If the task needs to know its input or output DataIds then it also has to override the runQuantum method.

This method should return a Struct whose attributes share the same name as the connection fields describing output dataset types.

Parameters#
- **kwargs
    Any
    Arbitrary parameters accepted by subclasses.
Returns#
- struct
    Struct
    Struct with attribute names corresponding to output connection fields.
Examples#
Typical implementation of this method may look like:
    def run(self, *, input, calib):
        # "input", "calib", and "output" are the names of the
        # connection fields.
        # Assuming that input/calib datasets are `scalar` they are
        # simple objects, do something with inputs and calibs, produce
        # output image.
        image = self.makeImage(input, calib)
        # If output dataset is `scalar` then return object, not list
        return Struct(output=image)
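For this particular task, a direct call to run (for example from a notebook or a unit test) might look like the sketch below. Only the signature comes from the documentation above; `warps_by_visit`, `sky_info`, and the output attribute name `multipleCellCoadd` are assumptions, and the inputs are presumed to have been retrieved from a butler beforehand.

    from lsst.drp.tasks.assemble_cell_coadd import AssembleCellCoaddTask

    # `warps_by_visit` is a dict[DataCoordinate, WarpInputs] assembled by the
    # caller, and `sky_info` describes the cell-based tract/patch geometry;
    # both are placeholders here.
    task = AssembleCellCoaddTask(config=AssembleCellCoaddTask.ConfigClass())
    result = task.run(
        inputs=warps_by_visit,
        skyInfo=sky_info,
        visitSummaryList=None,
    )
    # The returned Struct exposes the coadd under its output connection name;
    # `multipleCellCoadd` is an assumption about that name.
    cell_coadd = result.multipleCellCoadd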
- runQuantum(butlerQC, inputRefs, outputRefs)#
Do butler IO and transform to provide in-memory objects for the task's run method.

Parameters#
- butlerQC
    QuantumContext
    A butler which is specialized to operate in the context of a lsst.daf.butler.Quantum.
- inputRefs
    InputQuantizedConnection
    Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined input/prerequisite connections.
- outputRefs
    OutputQuantizedConnection
    Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined output connections.
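The conventional base-class pattern for runQuantum is roughly the sketch below: fetch the in-memory inputs, call run, and persist the returned Struct. This is a generic illustration; AssembleCellCoaddTask may override runQuantum with a more I/O-efficient strategy (for example, reading warps one at a time), so the body shown here is not this task's verbatim code.

    def runQuantum(self, butlerQC, inputRefs, outputRefs):
        # Retrieve in-memory objects for every input connection.
        inputs = butlerQC.get(inputRefs)
        # Run the science algorithm on the retrieved objects.
        outputs = self.run(**inputs)
        # Write each attribute of the returned Struct to its output connection.
        butlerQC.put(outputs, outputRefs)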