PipelineTask

class lsst.pipe.base.PipelineTask(*, config=None, log=None, initInputs=None, **kwargs)

Bases: lsst.pipe.base.Task

Base class for all pipeline tasks.

This is an abstract base class for PipelineTasks, which represent algorithms executed by a framework on data retrieved from a data butler; the resulting data are also stored in a data butler.

PipelineTask inherits from pipe.base.Task and uses the same configuration mechanism based on pex.config. Each PipelineTask also has a PipelineTaskConnections class associated with its config, which defines all of the I/O the PipelineTask needs to do. A PipelineTask subclass typically implements the run() method, which receives Python-domain data objects and returns a pipe.base.Struct containing the resulting data. The run() method is not supposed to perform any I/O; it operates entirely on in-memory objects. runQuantum() is the method (which can be re-implemented in a subclass) where all necessary I/O is performed: it reads all input data from the data butler into memory, calls run() with that data, examines the returned Struct, and saves some or all of that data back to the data butler. runQuantum() receives a ButlerQuantumContext instance to facilitate I/O, an InputQuantizedConnection instance defining all input lsst.daf.butler.DatasetRef objects, and an OutputQuantizedConnection instance defining all output lsst.daf.butler.DatasetRef objects for a single invocation of the PipelineTask.
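
As a hedged illustration of how these pieces fit together, a minimal PipelineTask subclass might look like the sketch below; the connection names, dataset type names, dimensions, and the doSomething helper are illustrative only, not part of this API:

from lsst.pipe.base import (PipelineTask, PipelineTaskConfig,
                            PipelineTaskConnections, Struct)
import lsst.pipe.base.connectionTypes as cT

class ExampleConnections(PipelineTaskConnections,
                         dimensions=("instrument", "visit", "detector")):
    inputImage = cT.Input(name="exampleInputImage",
                          storageClass="ExposureF",
                          dimensions=("instrument", "visit", "detector"),
                          doc="Input image to process.")
    outputImage = cT.Output(name="exampleOutputImage",
                            storageClass="ExposureF",
                            dimensions=("instrument", "visit", "detector"),
                            doc="Processed output image.")

class ExampleConfig(PipelineTaskConfig, pipelineConnections=ExampleConnections):
    pass

class ExampleTask(PipelineTask):
    ConfigClass = ExampleConfig
    _DefaultName = "example"

    def run(self, inputImage):
        # Pure in-memory algorithm; no butler I/O here.
        processed = self.doSomething(inputImage)  # hypothetical helper
        return Struct(outputImage=processed)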

Subclasses must be constructable with exactly the arguments taken by the PipelineTask base class constructor, but may support other signatures as well.

Parameters
config : pex.config.Config, optional

Configuration for this task (an instance of self.ConfigClass, which is a task-specific subclass of PipelineTaskConfig). If not specified then it defaults to self.ConfigClass().

log : lsst.log.Log, optional

Logger instance whose name is used as a log name prefix, or None for no prefix.

initInputs : dict, optional

A dictionary of objects needed to construct this PipelineTask, with keys matching the keys of the dictionary returned by getInitInputDatasetTypes and values equivalent to what would be obtained by calling Butler.get with those DatasetTypes and no data IDs. While it is optional for the base class, subclasses are permitted to require this argument.
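
For illustration only (the dataset type name, connection key, and task class here are hypothetical), initInputs might be assembled and passed like this:

# Hypothetical init-input: a schema produced by an upstream task, read from
# the butler without a data ID and handed to the task constructor.
inputSchema = butler.get("example_schema")
task = ExampleTask(config=config, initInputs={"inputSchema": inputSchema})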

Attributes
canMultiprocess : bool, True by default (class attribute)

This class attribute is checked by the execution framework; subclasses can set it to False if the task does not support multiprocessing.
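
For example, a (hypothetical) subclass that cannot run under multiprocessing simply redefines the attribute:

class NonForkableTask(PipelineTask):
    # Tell the execution framework not to run this task in worker processes.
    canMultiprocess = False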

Attributes Summary

canMultiprocess

Methods Summary

emptyMetadata()

Empty (clear) the metadata for this Task and all sub-Tasks.

getAllSchemaCatalogs()

Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

getFullMetadata()

Get metadata for all tasks.

getFullName()

Get the task name as a hierarchical name including parent task names.

getName()

Get the name of the task.

getResourceConfig()

Return resource configuration for this task.

getSchemaCatalogs()

Get the schemas generated by this task.

getTaskDict()

Get a dictionary of all tasks as a shallow copy.

makeField(doc)

Make a lsst.pex.config.ConfigurableField for this task.

makeSubtask(name, **keyArgs)

Create a subtask as a new instance as the name attribute of this task.

run(**kwargs)

Run task algorithm on in-memory data.

runQuantum(butlerQC, inputRefs, outputRefs)

Do butler I/O and/or transforms to provide in-memory objects for the task's run method.

timer(name[, logLevel])

Context manager to log performance data for an arbitrary block of code.

Attributes Documentation

canMultiprocess = True

Methods Documentation

emptyMetadata()

Empty (clear) the metadata for this Task and all sub-Tasks.

getAllSchemaCatalogs()

Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns
schemacatalogs : dict

Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down through all subtasks.

Notes

This method may be called on any task in the hierarchy; it will return the same answer, regardless.

The default implementation should always suffice. If your subtask uses schemas, then override Task.getSchemaCatalogs, not this method.

getFullMetadata()

Get metadata for all tasks.

Returns
metadata : lsst.daf.base.PropertySet

The PropertySet keys are the full task names. Values are metadata for the top-level task and all subtasks, sub-subtasks, etc.

Notes

The returned metadata includes timing information (if @timer.timeMethod is used) and any metadata set by the task. The name of each item consists of the full task name with . replaced by :, followed by . and the name of the item, e.g.:

topLevelTaskName:subtaskName:subsubtaskName.itemName

Using : in the full task name disambiguates the rare situation that a task has a subtask and a metadata item with the same name.
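
As a small, hedged sketch (assuming task is an already-constructed and executed task, and the item name is illustrative), the full metadata can be inspected like this:

fullMetadata = task.getFullMetadata()
# Items are addressed by the full task name (with ":" separators) plus the
# item name, following the pattern shown above, e.g.:
value = fullMetadata.getScalar("topLevelTaskName:subtaskName.itemName")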

getFullName()

Get the task name as a hierarchical name including parent task names.

Returns
fullName : str

The full name consists of the name of the parent task and each subtask separated by periods. For example:

  • The full name of top-level task “top” is simply “top”.

  • The full name of subtask “sub” of top-level task “top” is “top.sub”.

  • The full name of subtask “sub2” of subtask “sub” of top-level task “top” is “top.sub.sub2”.

getName()

Get the name of the task.

Returns
taskName : str

Name of the task.

See also

getFullName

getResourceConfig()

Return resource configuration for this task.

Returns
Object of type config.ResourceConfig, or None if resource configuration is not defined for this task.

getSchemaCatalogs()

Get the schemas generated by this task.

Returns
schemaCatalogs : dict

Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for this task.

Notes

Warning

Subclasses that use schemas must override this method. The default implementation returns an empty dict.

This method may be called at any time after the Task is constructed, which means that all task schemas should be computed at construction time, not when data is actually processed. This reflects the philosophy that the schema should not depend on the data.

Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.
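
A hedged sketch of such an override; it assumes the task builds self.schema (an lsst.afw.table.Schema) at construction time, and the class name and dataset type name are illustrative:

import lsst.afw.table as afwTable

class ExampleSourceTask(PipelineTask):
    # ... __init__ builds self.schema, an lsst.afw.table.Schema ...

    def getSchemaCatalogs(self):
        # Hypothetical dataset type name; an empty catalog carrying this
        # task's output schema (and any slot definitions).
        return {"example_src_schema": afwTable.SourceCatalog(self.schema)}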

getTaskDict()

Get a dictionary of all tasks as a shallow copy.

Returns
taskDict : dict

Dictionary containing full task name: task object for the top-level task and all subtasks, sub-subtasks, etc.
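
For example, to list every task in the hierarchy by its full name (assuming task is an already-constructed top-level task):

for fullName, subTask in task.getTaskDict().items():
    print(fullName, type(subTask).__name__)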

classmethod makeField(doc)

Make a lsst.pex.config.ConfigurableField for this task.

Parameters
doc : str

Help text for the field.

Returns
configurableField : lsst.pex.config.ConfigurableField

A ConfigurableField for this task.

Examples

Provides a convenient way to specify that this task is a subtask of another task.

Here is an example of use:

class OtherTaskConfig(lsst.pex.config.Config):
    aSubtask = ATaskClass.makeField("a brief description of what this task does")

makeSubtask(name, **keyArgs)

Create a subtask as a new instance as the name attribute of this task.

Parameters
name : str

Brief name of the subtask.

keyArgs

Extra keyword arguments used to construct the task. The following arguments are automatically provided and cannot be overridden:

  • “config”.

  • “parentTask”.

Notes

The subtask must be defined by Task.config.name, an instance of pex_config ConfigurableField or RegistryField.
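
A typical pattern (class names here are illustrative, continuing the makeField example above) pairs makeField in the config with makeSubtask in the parent task's constructor:

class OtherTaskConfig(lsst.pex.config.Config):
    aSubtask = ATaskClass.makeField("a brief description of what this task does")

class OtherTask(lsst.pipe.base.Task):
    ConfigClass = OtherTaskConfig
    _DefaultName = "other"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Construct the subtask from self.config.aSubtask; it becomes self.aSubtask.
        self.makeSubtask("aSubtask")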

run(**kwargs)

Run task algorithm on in-memory data.

This method should be implemented in a subclass. It will receive keyword arguments whose names match the names of the connection fields describing input dataset types. Argument values will be data objects retrieved from the data butler. If a dataset type is configured with its multiple field set to True, then the argument value will be a list of objects; otherwise it will be a single object.

If the task needs to know its input or output data IDs then it must override the runQuantum method instead.

This method should return a Struct whose attributes share the same name as the connection fields describing output dataset types.

Returns
struct : Struct

Struct with attribute names corresponding to the output connection fields.

Examples

Typical implementation of this method may look like:

def run(self, input, calib):
    # "input", "calib", and "output" are the names of the connection fields

    # Assuming that the input/calib datasets are `scalar` (not `multiple`),
    # they are simple objects; do something with the inputs and calibs and
    # produce an output image.
    image = self.makeImage(input, calib)

    # If the output dataset is `scalar`, return a single object, not a list.
    return Struct(output=image)

runQuantum(butlerQC: lsst.pipe.base.butlerQuantumContext.ButlerQuantumContext, inputRefs: lsst.pipe.base.connections.InputQuantizedConnection, outputRefs: lsst.pipe.base.connections.OutputQuantizedConnection)

Do butler I/O and/or transforms to provide in-memory objects for the task's run method.

Parameters
butlerQC : ButlerQuantumContext

A butler which is specialized to operate in the context of a lsst.daf.butler.Quantum.

inputRefs : InputQuantizedConnection

Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined input/prerequisite connections.

outputRefs : OutputQuantizedConnection

Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined output connections.
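
The default implementation follows a simple pattern, roughly equivalent to the sketch below; overrides typically add logic around these three steps:

def runQuantum(self, butlerQC, inputRefs, outputRefs):
    # Read all inputs into memory, as a dict keyed by connection name.
    inputs = butlerQC.get(inputRefs)
    # Run the pure in-memory algorithm.
    outputs = self.run(**inputs)
    # Write each attribute of the returned Struct that matches an output connection.
    butlerQC.put(outputs, outputRefs)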

timer(name, logLevel=10000)

Context manager to log performance data for an arbitrary block of code.

Parameters
name : str

Name of code being timed; data will be logged using item name: Start and End.

logLevel

A lsst.log level constant.

See also

timer.logInfo

Examples

Creating a timer context:

with self.timer("someCodeToTime"):
    pass  # code to time