PipelineTask#

class lsst.pipe.base.PipelineTask(*, config: PipelineTaskConfig | None = None, log: logging.Logger | LsstLogAdapter | None = None, initInputs: dict[str, Any] | None = None, **kwargs: Any)#

Bases: Task

Base class for all pipeline tasks.

This is an abstract base class for PipelineTasks, which represent algorithms executed by the framework on data read from a data butler, with the resulting data also stored in a data butler.

PipelineTask inherits from Task and uses the same configuration mechanism based on lsst.pex.config. Each PipelineTask class also has a PipelineTaskConnections class associated with its config, which defines all of the I/O the task needs to do. A PipelineTask subclass typically implements the run() method, which receives Python-domain data objects and returns an lsst.pipe.base.Struct object with the resulting data. The run() method is not supposed to perform any I/O; it operates entirely on in-memory objects.

runQuantum() is the method (which can be re-implemented in a subclass) where all necessary I/O is performed: it reads all input data from the data butler into memory, calls the run() method with that data, examines the returned Struct object, and saves some or all of that data back to the data butler. The runQuantum() method receives a QuantumContext instance to facilitate I/O, an InputQuantizedConnection instance that defines all input lsst.daf.butler.DatasetRef objects, and an OutputQuantizedConnection instance that defines all output lsst.daf.butler.DatasetRef objects for a single invocation of the PipelineTask.

Subclasses must be constructable with exactly the arguments taken by the PipelineTask base class constructor, but may support other signatures as well.
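As a hedged illustration of how these pieces fit together, a minimal PipelineTask subclass might be wired up as in the sketch below; the class names, connection names, dataset type names, storage classes, and dimensions used here (ExampleTask, "inputCatalog", "deepCoadd_meas", and so on) are placeholders, not part of the PipelineTask API:

import lsst.pipe.base as pipeBase
import lsst.pipe.base.connectionTypes as cT


class ExampleConnections(pipeBase.PipelineTaskConnections,
                         dimensions=("tract", "patch", "skymap")):
    inputCatalog = cT.Input(
        doc="Input catalog to process.",
        name="deepCoadd_meas",
        storageClass="SourceCatalog",
        dimensions=("tract", "patch", "skymap"),
    )
    outputCatalog = cT.Output(
        doc="Processed output catalog.",
        name="deepCoadd_example",
        storageClass="SourceCatalog",
        dimensions=("tract", "patch", "skymap"),
    )


class ExampleConfig(pipeBase.PipelineTaskConfig,
                    pipelineConnections=ExampleConnections):
    pass


class ExampleTask(pipeBase.PipelineTask):
    ConfigClass = ExampleConfig
    _DefaultName = "example"

    def run(self, inputCatalog):
        # Argument names match the input connection names; the returned
        # Struct attributes match the output connection names.
        outputCatalog = inputCatalog  # placeholder for real processing
        return pipeBase.Struct(outputCatalog=outputCatalog)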

Attributes#

canMultiprocess : bool, True by default (class attribute)

This class attribute is checked by the execution framework; subclasses can set it to False if the task does not support multiprocessing.
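For example, a subclass that wraps code that is not safe to run in forked worker processes could opt out as in this sketch (the class and its rationale are hypothetical; ExampleTask is the sketch shown above):

class SerialOnlyTask(ExampleTask):
    """Hypothetical task that must run in a single process."""

    # Tell the execution framework not to use multiprocessing.
    canMultiprocess = False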

Parameters#

config : Config, optional

Configuration for this task (an instance of self.ConfigClass, which is a task-specific subclass of PipelineTaskConfig). If not specified then it defaults to self.ConfigClass().

log : logging.Logger, optional

Logger instance whose name is used as a log name prefix, or None for no prefix.

initInputs : dict, optional

A dictionary of objects needed to construct this PipelineTask, with keys matching the keys of the dictionary returned by getInitInputDatasetTypes and values equivalent to what would be obtained by calling get with those DatasetTypes and no data IDs. While it is optional for the base class, subclasses are permitted to require this argument. A construction sketch is shown after this parameter list.

**kwargs : Any

Arbitrary parameters passed to the base class constructor.
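As a sketch of construction with initInputs, assume a task SomeTask whose connections declare an InitInput named "camera", and an existing lsst.daf.butler.Butler instance named butler (all of these names are hypothetical):

# Fetch the init-input object from the butler; the dataset type name
# and data ID below are placeholders.
camera = butler.get("camera", instrument="HSC")
config = SomeTask.ConfigClass()
task = SomeTask(config=config, initInputs={"camera": camera})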

Attributes Summary

canMultiprocess

Methods Summary

run(**kwargs)

Run task algorithm on in-memory data.

runQuantum(butlerQC, inputRefs, outputRefs)

Do butler I/O and transform to provide in-memory objects for the task's run method.

Attributes Documentation

canMultiprocess: ClassVar[bool] = True#

Methods Documentation

run(**kwargs: Any) → Struct#

Run task algorithm on in-memory data.

This method should be implemented in a subclass. It will receive keyword-only arguments whose names are the same as the names of the connection fields describing input dataset types. The argument values will be data objects retrieved from the data butler. If a dataset type is configured with the multiple field set to True then the argument value will be a list of objects; otherwise it will be a single object.

If the task needs to know its input or output data IDs then it also has to override the runQuantum method (see the sketch at the end of the runQuantum documentation below).

This method should return a Struct whose attributes share the same name as the connection fields describing output dataset types.

Parameters#

**kwargs : Any

Arbitrary parameters accepted by subclasses.

Returns#

struct : Struct

Struct with attribute names corresponding to output connection fields.

Examples#

A typical implementation of this method may look like:

def run(self, *, input, calib):
    # "input", "calib", and "output" are the names of the
    # connection fields.

    # Assuming that the input/calib datasets are scalar (not declared
    # with multiple=True), they arrive as single objects; do something
    # with the inputs and calibs to produce the output image.
    image = self.makeImage(input, calib)

    # If the output dataset is scalar, return a single object,
    # not a list.
    return Struct(output=image)
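A variant sketch for a connection declared with multiple=True: the corresponding argument (here named "inputs", a hypothetical connection name) arrives as a list, and self.combineImages is a hypothetical helper:

def run(self, *, inputs, calib):
    # "inputs" was declared with multiple=True, so it is a list of
    # objects rather than a single object.
    images = [self.makeImage(item, calib) for item in inputs]

    # Combine the per-item results into a single output
    # (combineImages is a hypothetical helper method).
    return Struct(output=self.combineImages(images))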
runQuantum(butlerQC: QuantumContext, inputRefs: InputQuantizedConnection, outputRefs: OutputQuantizedConnection) → None#

Do butler I/O and transform to provide in-memory objects for the task's run method.

Parameters#

butlerQC : QuantumContext

A butler which is specialized to operate in the context of an lsst.daf.butler.Quantum.

inputRefs : InputQuantizedConnection

Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined input/prerequisite connections.

outputRefs : OutputQuantizedConnection

Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined output connections.
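As a sketch, the behaviour described in the class documentation above (read all inputs, call run, write the returned Struct) corresponds to a body like the one below. This variant additionally forwards the quantum data ID; the dataId keyword argument is an assumption of this example rather than part of the base run interface:

def runQuantum(self, butlerQC, inputRefs, outputRefs):
    # Read all declared inputs into memory; the result is a dict
    # keyed by connection name, suitable for passing to run().
    inputs = butlerQC.get(inputRefs)

    # The base class simply calls self.run(**inputs); this variant
    # additionally forwards the quantum data ID (the "dataId" keyword
    # is specific to this hypothetical task's run signature).
    outputs = self.run(**inputs, dataId=butlerQC.quantum.dataId)

    # Store the Struct attributes back through the butler, matched
    # to the output connections by name.
    butlerQC.put(outputs, outputRefs)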