MetricTask (lsst.verify.tasks)

MetricTask is a base class for tasks that generate lsst.verify.Measurement objects from input data. Each MetricTask subclass accepts specific type(s) of datasets and produces measurements for a specific metric or family of metrics.

MetricTask is a PipelineTask and can be executed as part of pipelines. In Gen 2, MetricTask can be run as a plugin to lsst.verify.gen2tasks.MetricsControllerTask.

Python API summary

from lsst.verify.tasks.metricTask import MetricTask
class MetricTask(**kwargs)

A base class for tasks that compute one metric from input datasets...

attribute config

Access configuration fields and retargetable subtasks.

method run(**kwargs)

Run the MetricTask on in-memory data...

See also

See the MetricTask API reference for complete details.

Butler datasets

Output datasets

measurement

The value of the metric. The dataset type should not be configured directly; instead, set the package and metric template variables to the metric's namespace (the package name, by convention) and its in-package name, respectively. MetricTask subclasses that only support one metric should set these variables automatically.
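For example, a config override might set these template variables as follows (a sketch; "my_package" and "MyMetric" are placeholder values):

# Config override sketch for a hypothetical MetricTask. With these settings,
# the measurement dataset corresponds to the metric my_package.MyMetric.
config.connections.package = "my_package"
config.connections.metric = "MyMetric"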

Retargetable subtasks

No subtasks.

Configuration fields

connections

Data type: lsst.pipe.base.config.Connections
Field type: ConfigField

Configurations describing the connections of the PipelineTask to datatypes.

saveLogOutput

Default: True
Field type: bool Field

Flag to enable/disable saving of log output for a task, enabled by default.

saveMetadata

Default: True
Field type: bool Field

Flag to enable/disable metadata saving for a task, enabled by default.

In Depth

Subclassing

MetricTask is primarily customized using the run method.

The task config should use lsst.pipe.base.PipelineTaskConnections to identify input datasets; MetricConfig handles the output dataset. In a Gen 2 context, only the name and multiple fields of each connection are used.
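The following is a minimal sketch of a MetricTask subclass, assuming the MetricTask, MetricConfig, and MetricConnections classes exported by lsst.verify.tasks; the connection, dataset, dimension, and metric names here are purely illustrative:

import astropy.units as u

import lsst.pipe.base as pipeBase
from lsst.pipe.base import connectionTypes
from lsst.verify import Measurement
from lsst.verify.tasks import MetricTask, MetricConfig, MetricConnections


class SourceCountConnections(
        MetricConnections,
        defaultTemplates={"package": "my_package", "metric": "SourceCount"},
        dimensions={"instrument", "visit", "detector"}):
    # Input dataset; the "measurement" output is inherited from MetricConnections.
    sources = connectionTypes.Input(
        doc="The catalog whose sources are counted.",
        name="src",
        storageClass="SourceCatalog",
        dimensions={"instrument", "visit", "detector"},
    )


class SourceCountConfig(MetricConfig,
                        pipelineConnections=SourceCountConnections):
    pass


class SourceCountTask(MetricTask):
    """An illustrative task that measures the number of detected sources."""
    ConfigClass = SourceCountConfig
    _DefaultName = "sourceCount"

    def run(self, sources):
        # Wrap the result in a Struct so the calling framework can find it.
        meas = Measurement("my_package.SourceCount", len(sources) * u.count)
        return pipeBase.Struct(measurement=meas)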

Error Handling

In general, a MetricTask may run in three cases:

  1. the task can compute the metric without incident.

  2. the task does not have the datasets required to compute the metric. This often happens if the user runs generic metric configurations on arbitrary pipelines, or if they make changes to the pipeline configuration that enable or disable processing steps. More rarely, it can happen when trying to compute diagnostic metrics on incomplete (i.e., failed) pipeline runs.

  3. the task has the data it needs, but cannot compute the metric. This could be because the data are corrupted, because the selected algorithm fails, or because the metric is ill-defined given the data.

A MetricTask must distinguish between these cases so that MetricsControllerTask and future calling frameworks can handle them appropriately. A task for a metric that does not apply to a particular pipeline run (case 2) must return None in place of a Measurement. A task that cannot give a valid result (case 3) must raise MetricComputationError.

In grey areas, developers should choose a MetricTask’s behavior based on whether the root cause is closer to case 2 or case 3. For example, TimingMetricTask accepts top-level task metadata as input, but returns None if it can’t find metadata for the subtask it is supposed to time. While the input dataset is available, the subtask metadata are most likely missing because the subtask was never run, making the situation equivalent to case 2. On the other hand, metadata with nonsense values falls squarely under case 3.
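As a sketch of these rules, the helper below mimics what a run implementation might do; the metadata key and metric name are invented for illustration:

import astropy.units as u

import lsst.pipe.base as pipeBase
from lsst.verify import Measurement
from lsst.verify.tasks import MetricComputationError


def measure_runtime(metadata, metric_name="my_package.RuntimeMetric"):
    # Case 2: the needed information was never produced; report "no measurement".
    if metadata is None or "runtime" not in metadata:
        return pipeBase.Struct(measurement=None)
    runtime = metadata["runtime"]
    # Case 3: the data exist but are unusable; signal failure to the caller.
    if runtime < 0.0:
        raise MetricComputationError(f"Nonsense runtime: {runtime}")
    # Case 1: a valid measurement.
    return pipeBase.Struct(measurement=Measurement(metric_name, runtime * u.second))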

Registration

The most common way to run a MetricTask in Gen 2 is as a plugin to MetricsControllerTask. Most MetricTask classes should use the register decorator to assign a plugin name.

Because of implementation limitations, each registered name may appear at most once in MetricsControllerConfig. If you expect to need multiple instances of the same MetricTask class (typically when the same class can compute multiple metrics), it must use the registerMultiple decorator instead.
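For example (a sketch; the plugin names and task classes are hypothetical):

from lsst.verify.gen2tasks import register, registerMultiple
from lsst.verify.tasks import MetricTask


@register("sourceCount")
class SourceCountMetricTask(MetricTask):
    """A task that computes a single metric and may appear once in
    MetricsControllerConfig.
    """


@registerMultiple("runtime")
class RuntimeMetricTask(MetricTask):
    """A task that can be configured for several metrics, so it may be
    listed multiple times in MetricsControllerConfig.
    """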