ConsolidateResourceUsageTask#

class lsst.analysis.tools.tasks.ConsolidateResourceUsageTask(*, config: PipelineTaskConfig | None = None, log: logging.Logger | LsstLogAdapter | None = None, initInputs: dict[str, Any] | None = None, **kwargs: Any)#

Bases: PipelineTask

A PipelineTask that summarizes task resource usage into a single table with per-task rows.

Notes#

This is an unusual PipelineTask in that its input connection has dynamic dimensions, and its quanta are generally built via a custom quantum-graph builder defined in the same module.
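As a loose illustration of what "summarizing resource usage into a single table with per-task rows" can mean in practice, the sketch below stacks several per-task resource-usage tables and aggregates one summary row per task. This is not the task's actual implementation; the column names ("task", "memory", "run_time"), the helper name consolidate, and the output attribute name output_table are assumptions made for illustration only.

import pandas as pd

from lsst.pipe.base import Struct


def consolidate(tables: list[pd.DataFrame]) -> Struct:
    # Stack the individual per-task tables into one long table.
    combined = pd.concat(tables, ignore_index=True)
    # Reduce to one row per task: count quanta, take the peak memory,
    # and sum the run time (column names are assumed, not the task's).
    summary = combined.groupby("task", as_index=False).agg(
        quanta=("task", "size"),
        max_memory=("memory", "max"),
        total_run_time=("run_time", "sum"),
    )
    return Struct(output_table=summary)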

Methods Summary

run(**kwargs)

Run task algorithm on in-memory data.

Methods Documentation

run(**kwargs: Any) → Struct#

Run task algorithm on in-memory data.

This method should be implemented in a subclass. It will receive keyword-only arguments whose names match the names of the connection fields describing the input dataset types. Argument values will be data objects retrieved from the data butler. If a dataset type is configured with its multiple field set to True, the argument value will be a list of objects; otherwise it will be a single object.

If the task needs to know its input or output data IDs, it must also override the runQuantum method, as in the sketch below.
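A hedged sketch of such an override, following the standard PipelineTask pattern of fetching inputs from the quantum context, calling run, and putting the outputs back; the data_id keyword passed to run and the butlerQC.quantum.dataId attribute access are illustrative assumptions, not part of this task's interface.

def runQuantum(self, butlerQC, inputRefs, outputRefs):
    # Retrieve the in-memory objects for all input connections.
    inputs = butlerQC.get(inputRefs)
    # Pass the quantum's data ID on to run() (assumed keyword and
    # assumed attribute path).
    outputs = self.run(**inputs, data_id=butlerQC.quantum.dataId)
    # Write the returned Struct attributes to the output connections.
    butlerQC.put(outputs, outputRefs)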

This method should return a Struct whose attributes share the same name as the connection fields describing output dataset types.

Parameters#

**kwargs : Any

Arbitrary parameters accepted by subclasses.

Returns#

struct : Struct

Struct with attribute names corresponding to output connection fields.

Examples#

A typical implementation of this method may look like this:

def run(self, *, input, calib):
    # "input", "calib", and "output" are the names of the
    # connection fields.

    # Assuming that the input/calib datasets are `scalar`, they are
    # simple objects; do something with the inputs and calibs to
    # produce the output image.
    image = self.makeImage(input, calib)

    # If the output dataset is `scalar`, return the object, not a list.
    return Struct(output=image)
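For contrast, a sketch of a run implementation where one input connection is configured with multiple=True, so the corresponding argument arrives as a list of objects; the names exposures, calib, coadd, and the helper makeCoadd are illustrative only.

def run(self, *, exposures, calib):
    # "exposures" comes from a connection with multiple=True, so it is
    # a list of objects; "calib" and "coadd" are scalar connections.
    coadd = self.makeCoadd(exposures, calib)

    # The attribute name must match the output connection field.
    return Struct(coadd=coadd)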