CatalogMeasurementBaseTask

class lsst.faro.base.CatalogMeasurementBaseTask(config, *args, **kwargs)

Bases: MetricTask

Base class for science performance metrics measured from source/object catalogs.

Methods Summary

run(**kwargs)

Run the MetricTask on in-memory data.

Methods Documentation

run(**kwargs)

Run the MetricTask on in-memory data.

Parameters

**kwargs

Keyword arguments matching the inputs given in the class config; see lsst.pipe.base.PipelineTask.run for more details.

Returns

struct : lsst.pipe.base.Struct

A Struct containing at least the following component:

  • measurement: the value of the metric (lsst.verify.Measurement or None). None may be returned, instead of raising NoWorkFound, to indicate that the metric is undefined or irrelevant. This method is not responsible for adding mandatory metadata (e.g., the data ID); that is handled by the caller.
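As a sketch of the return contract above — using types.SimpleNamespace as a stand-in for lsst.pipe.base.Struct, and a plain number in place of lsst.verify.Measurement (both are assumptions purely for illustration; a real subclass would return the actual classes):

```python
from types import SimpleNamespace  # stand-in for lsst.pipe.base.Struct

def run(catalog, **kwargs):
    """Illustrative run() body computing a metric from an in-memory catalog.

    The catalog is modeled here as a list of dicts; the hypothetical metric
    is the median of a "flux" column.
    """
    fluxes = [row["flux"] for row in catalog if row.get("flux") is not None]
    if not fluxes:
        # Metric undefined for this catalog: signal with measurement=None
        # rather than treating it as an error.
        return SimpleNamespace(measurement=None)
    fluxes.sort()
    return SimpleNamespace(measurement=fluxes[len(fluxes) // 2])
```

The caller, not run(), attaches mandatory metadata such as the data ID to the returned measurement.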

Raises

lsst.verify.tasks.MetricComputationError

Raised if an algorithmic or system error prevents calculation of the metric. Examples include corrupted input data or unavoidable exceptions raised by analysis code. The MetricComputationError should be chained to a more specific exception describing the root cause.

Not having enough data for a metric to be applicable is not an error, and should raise NoWorkFound (see below) instead of this exception.

lsst.pipe.base.NoWorkFound

Raised if the metric is ill-defined or otherwise inapplicable to the data. Typically this means that the pipeline step or option being measured was not run.
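The division of labor between the two exceptions can be sketched as follows. MetricComputationError and NoWorkFound are defined locally here as stand-ins for the lsst.verify.tasks and lsst.pipe.base classes, and the "flux" column is a hypothetical example:

```python
class MetricComputationError(RuntimeError):
    """Stand-in for lsst.verify.tasks.MetricComputationError."""

class NoWorkFound(Exception):
    """Stand-in for lsst.pipe.base.NoWorkFound."""

def measure(catalog):
    """Illustrative metric computation showing when each exception applies."""
    if catalog is None:
        # The pipeline step being measured was not run: the metric is
        # inapplicable, which is not an error.
        raise NoWorkFound("no catalog produced; metric inapplicable")
    try:
        values = [row["flux"] for row in catalog]
    except KeyError as err:
        # Corrupted or malformed input: raise MetricComputationError
        # chained to the more specific root-cause exception.
        raise MetricComputationError("catalog is missing 'flux'") from err
    if not values:
        # Too little data for the metric to apply is also NoWorkFound,
        # not a computation error.
        raise NoWorkFound("not enough data for the metric")
    return sum(values) / len(values)
```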