FractionUpdatedDiaObjectsMetricTask

class lsst.ap.association.metrics.FractionUpdatedDiaObjectsMetricTask(**kwargs)

Bases: MetadataMetricTask

Task that computes the fraction of previously created DIAObjects that have a new association in this image, visit, etc.

Attributes Summary

canMultiprocess

Methods Summary

adaptArgsAndRun(inputData, inputDataIds, ...)

A wrapper around run used by MetricsControllerTask.

addStandardMetadata(measurement, outputDataId)

Add data ID-specific metadata required for all metrics.

areInputDatasetsScalar(config)

Return input dataset multiplicity.

emptyMetadata()

Empty (clear) the metadata for this Task and all sub-Tasks.

extractMetadata(metadata, metadataKeys)

Read multiple keys from a metadata object.

getAllSchemaCatalogs()

Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

getFullMetadata()

Get metadata for all tasks.

getFullName()

Get the task name as a hierarchical name including parent task names.

getInputDatasetTypes(config)

Return input dataset types for this task.

getInputMetadataKeys(config)

Return the metadata keys read by this task.

getName()

Get the name of the task.

getResourceConfig()

Return resource configuration for this task.

getSchemaCatalogs()

Get the schemas generated by this task.

getTaskDict()

Get a dictionary of all tasks as a shallow copy.

makeField(doc)

Make a lsst.pex.config.ConfigurableField for this task.

makeMeasurement(values)

Compute the fraction of previously created DIAObjects that have a new association in this image.

makeSubtask(name, **keyArgs)

Create a subtask as a new instance, stored as the name attribute of this task.

run(metadata)

Compute a measurement from science task metadata.

runQuantum(butlerQC, inputRefs, outputRefs)

Do Butler I/O to provide in-memory objects for run.

timer(name[, logLevel])

Context manager to log performance data for an arbitrary block of code.

Attributes Documentation

canMultiprocess: ClassVar[bool] = True

Methods Documentation

adaptArgsAndRun(inputData, inputDataIds, outputDataId)

A wrapper around run used by MetricsControllerTask.

Task developers should not need to call or override this method.

Parameters:
inputData : dict from str to any

Dictionary whose keys are the names of input parameters and whose values are Python-domain data objects (or lists of objects) retrieved from the data butler. Input objects may be None to represent missing data.

inputDataIds : dict from str to list of dataId

Dictionary whose keys are the names of input parameters and whose values are data IDs (or lists of data IDs) that the task consumes for the corresponding dataset type. Data IDs are guaranteed to match data objects in inputData.

outputDataId : dict from str to dataId

Dictionary containing a single key, "measurement", which maps to a single data ID for the measurement. The data ID must have the same granularity as the metric.

Returns:
struct : lsst.pipe.base.Struct

A Struct containing at least the following component:

  • measurement: the value of the metric, computed from inputData (lsst.verify.Measurement or None). The measurement is guaranteed to contain not only the value of the metric, but also any mandatory supplementary information.

Raises:
lsst.verify.tasks.MetricComputationError

Raised if an algorithmic or system error prevents calculation of the metric. Examples include corrupted input data or unavoidable exceptions raised by analysis code. The MetricComputationError should be chained to a more specific exception describing the root cause.

Not having enough data for a metric to be applicable is not an error, and should not trigger this exception.

Notes

This implementation calls run on the contents of inputData, then calls addStandardMetadata on the result before returning it.

Examples

Consider a metric that characterizes PSF variations across the entire field of view, given processed images. Then, if run has the signature run(images):

inputData = {'images': [image1, image2, ...]}
inputDataIds = {'images': [{'visit': 42, 'ccd': 1},
                           {'visit': 42, 'ccd': 2},
                           ...]}
outputDataId = {'measurement': {'visit': 42}}
result = task.adaptArgsAndRun(
    inputData, inputDataIds, outputDataId)

addStandardMetadata(measurement, outputDataId)

Add data ID-specific metadata required for all metrics.

This method currently does not add any metadata, but may do so in the future.

Parameters:
measurement : lsst.verify.Measurement

The Measurement that the metadata are added to.

outputDataId : dataId

The data ID to which the measurement applies, at the appropriate level of granularity.

Notes

This method should not be overridden by subclasses.

This method is not responsible for shared metadata like the execution environment (which should be added by this MetricTask’s caller), nor for metadata specific to a particular metric (which should be added when the metric is calculated).

Warning

This method’s signature will change whenever additional data needs to be provided. This is a deliberate restriction to ensure that all subclasses pass in the new data as well.

classmethod areInputDatasetsScalar(config)

Return input dataset multiplicity.

Parameters:
config : cls.ConfigClass

Configuration for this task.

Returns:
datasets : dict [str, bool]

Dictionary where the key is the name of the input dataset (must match a parameter to run) and the value is True if run takes only one object and False if it takes a list.

Notes

The default implementation extracts a PipelineTaskConnections object from config.

emptyMetadata() None

Empty (clear) the metadata for this Task and all sub-Tasks.

static extractMetadata(metadata, metadataKeys)

Read multiple keys from a metadata object.

Parameters:
metadata : lsst.pipe.base.TaskMetadata

A metadata object, assumed not None.

metadataKeys : dict [str, str]

Keys are arbitrary labels, values are metadata keys (or their substrings) in the format of lsst.pipe.base.Task.getFullMetadata().

Returns:
metadataValues : dict [str, any]

Keys are the same as for metadataKeys, values are the value of each metadata key, or None if no matching key was found.

Raises:
lsst.verify.tasks.MetricComputationError

Raised if any metadata key string has more than one match in metadata.
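
Examples

For illustration, here is a minimal sketch of the calling convention, assuming hypothetical metadata contents that stand in for whatever an upstream association run actually records (the key names and values below are not guaranteed to match the real pipeline):

from lsst.pipe.base import TaskMetadata
from lsst.ap.association.metrics import FractionUpdatedDiaObjectsMetricTask

# Hypothetical metadata; the real keys are set by the upstream science task.
metadata = TaskMetadata()
metadata["associator.numUpdatedDiaObjects"] = 120
metadata["associator.numUnassociatedDiaObjects"] = 30

# Substring keys; each must match exactly one full metadata key.
values = FractionUpdatedDiaObjectsMetricTask.extractMetadata(
    metadata,
    {"updatedObjects": ".numUpdatedDiaObjects",
     "unassociatedObjects": ".numUnassociatedDiaObjects"})
# Expected (assuming each substring matches a unique key):
# values == {"updatedObjects": 120, "unassociatedObjects": 30}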

getAllSchemaCatalogs() Dict[str, Any]

Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns:
schemacatalogs : dict

Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down through all subtasks.

Notes

This method may be called on any task in the hierarchy; it will return the same answer, regardless.

The default implementation should always suffice. If your subtask uses schemas, then override Task.getSchemaCatalogs, not this method.

getFullMetadata() TaskMetadata

Get metadata for all tasks.

Returns:
metadata : TaskMetadata

The keys are the full task name. Values are metadata for the top-level task and all subtasks, sub-subtasks, etc.

Notes

The returned metadata includes timing information (if @timer.timeMethod is used) and any metadata set by the task. The name of each item consists of the full task name with . replaced by :, followed by . and the name of the item, e.g.:

topLevelTaskName:subtaskName:subsubtaskName.itemName

Using : in the full task name disambiguates the rare situation in which a task has a subtask and a metadata item with the same name.

getFullName() str

Get the task name as a hierarchical name including parent task names.

Returns:
fullName : str

The full name consists of the name of the parent task and each subtask separated by periods. For example:

  • The full name of top-level task “top” is simply “top”.

  • The full name of subtask “sub” of top-level task “top” is “top.sub”.

  • The full name of subtask “sub2” of subtask “sub” of top-level task “top” is “top.sub.sub2”.

classmethod getInputDatasetTypes(config)

Return input dataset types for this task.

Parameters:
config : cls.ConfigClass

Configuration for this task.

Returns:
datasets : dict from str to str

Dictionary where the key is the name of the input dataset (must match a parameter to run) and the value is the name of its Butler dataset type.

Notes

The default implementation extracts a PipelineTaskConnections object from config.

classmethod getInputMetadataKeys(config)

Return the metadata keys read by this task.

Parameters:
config : cls.ConfigClass

Configuration for this task.

Returns:
keys : dict [str, str]

The keys are the (arbitrary) names of values to use in task code, the values are the metadata keys to be looked up (see the metadataKeys parameter to extractMetadata). Metadata keys are assumed to include task prefixes in the format of lsst.pipe.base.Task.getFullMetadata(). This method may return a substring of the desired (full) key, but the string must match a unique metadata key.
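
Examples

As a hypothetical illustration of the shape of this mapping (the actual key names are defined by the task implementation and may differ):

config = FractionUpdatedDiaObjectsMetricTask.ConfigClass()
keys = FractionUpdatedDiaObjectsMetricTask.getInputMetadataKeys(config)
# e.g. {"updatedObjects": ".numUpdatedDiaObjects",
#       "unassociatedObjects": ".numUnassociatedDiaObjects"}   (assumed)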

getName() str

Get the name of the task.

Returns:
taskName : str

Name of the task.

See also

getFullName

getResourceConfig() Optional[ResourceConfig]

Return resource configuration for this task.

Returns:
Object of type ResourceConfig, or None if resource configuration is not defined for this task.

getSchemaCatalogs() Dict[str, Any]

Get the schemas generated by this task.

Returns:
schemaCatalogs : dict

Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for this task.

See also

Task.getAllSchemaCatalogs

Notes

Warning

Subclasses that use schemas must override this method. The default implementation returns an empty dict.

This method may be called at any time after the Task is constructed, which means that all task schemas should be computed at construction time, not when data is actually processed. This reflects the philosophy that the schema should not depend on the data.

Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.
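
Examples

A hedged sketch of an override in a hypothetical source-producing task; the class, dataset type name, and schema below are illustrative and not part of this package:

import lsst.afw.table as afwTable
import lsst.pex.config as pexConfig
from lsst.pipe.base import Task


class MySourceTask(Task):
    # Hypothetical task used only to illustrate the override pattern.
    ConfigClass = pexConfig.Config
    _DefaultName = "mySourceTask"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Per the note above, the schema is fixed at construction time.
        self.schema = afwTable.SourceTable.makeMinimalSchema()

    def getSchemaCatalogs(self):
        # Key is the butler dataset type this task writes (hypothetical).
        return {"my_src": afwTable.SourceCatalog(self.schema)}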

getTaskDict() Dict[str, ReferenceType[Task]]

Get a dictionary of all tasks as a shallow copy.

Returns:
taskDict : dict

Dictionary containing full task name: task object for the top-level task and all subtasks, sub-subtasks, etc.

classmethod makeField(doc: str) ConfigurableField

Make a lsst.pex.config.ConfigurableField for this task.

Parameters:
doc : str

Help text for the field.

Returns:
configurableField : lsst.pex.config.ConfigurableField

A ConfigurableField for this task.

Examples

Provides a convenient way to specify that this task is a subtask of another task.

Here is an example of use:

class OtherTaskConfig(lsst.pex.config.Config):
    aSubtask = ATaskClass.makeField("brief description of task")

makeMeasurement(values)

Compute the fraction of previously created DIAObjects that have a new association in this image.

AssociationTask reports each pre-existing DIAObject as either updated (associated with a new DIASource) or unassociated.

Parameters:
values : dict [str, int or None]

A dict representation of the metadata, with the following keys:

"updatedObjects"

The number of DIAObjects updated for this image (int or None). May be None if the image was not successfully associated.

"unassociatedObjects"

The number of DIAObjects not associated with a DiaSource in this image (int or None). May be None if the image was not successfully associated.

Returns:
measurement : lsst.verify.Measurement or None

The fraction of previously created DIAObjects that were updated in this image, or None if the input counts are missing.
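
Examples

An illustrative outline of the calculation described above (not the verbatim implementation; the real method wraps the value in an lsst.verify.Measurement for the configured metric and handles edge cases such as invalid counts):

updated = values["updatedObjects"]
unassociated = values["unassociatedObjects"]
if updated is None or unassociated is None:
    measurement = None               # association did not run for this image
else:
    total = updated + unassociated   # all pre-existing DIAObjects
    fraction = updated / total       # assumes total > 0; edge cases omitted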

makeSubtask(name: str, **keyArgs: Any) None

Create a subtask as a new instance, stored as the name attribute of this task.

Parameters:
name : str

Brief name of the subtask.

keyArgs

Extra keyword arguments used to construct the task. The following arguments are automatically provided and cannot be overridden:

  • “config”.

  • “parentTask”.

Notes

The subtask must be defined by Task.config.name, an instance of ConfigurableField or RegistryField.
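
Examples

Continuing the makeField example above (ATaskClass is the same placeholder class), a parent task typically constructs its subtask in its own constructor:

import lsst.pex.config as pexConfig
from lsst.pipe.base import Task


class OtherTaskConfig(pexConfig.Config):
    aSubtask = ATaskClass.makeField("brief description of task")


class OtherTask(Task):
    ConfigClass = OtherTaskConfig
    _DefaultName = "otherTask"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Creates an ATaskClass instance configured by self.config.aSubtask
        # and assigns it to self.aSubtask.
        self.makeSubtask("aSubtask")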

run(metadata)

Compute a measurement from science task metadata.

Parameters:
metadata : lsst.pipe.base.TaskMetadata or None

A metadata object for the unit of science processing to use for this metric, or a collection of such objects if this task combines many units of processing into a single metric.

Returns:
result : lsst.pipe.base.Struct

A Struct containing the following component:

  • measurement: the value of the metric (lsst.verify.Measurement or None).

Raises:
lsst.verify.tasks.MetricComputationError

Raised if the strings returned by getInputMetadataKeys match more than one key in any metadata object.

Notes

This implementation calls getInputMetadataKeys, then searches for matching keys in each metadata. It then passes the values of these keys (or None if no match) to makeMeasurement, and returns its result to the caller.
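
Examples

A minimal end-to-end sketch, assuming metadata is an lsst.pipe.base.TaskMetadata object produced by an upstream association run:

task = FractionUpdatedDiaObjectsMetricTask()
result = task.run(metadata)
if result.measurement is not None:
    print(result.measurement.quantity)  # the fraction as an astropy Quantity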

runQuantum(butlerQC, inputRefs, outputRefs)

Do Butler I/O to provide in-memory objects for run.

This specialization of runQuantum performs error-handling specific to MetricTasks. Most or all of this functionality may be moved to activators in the future.

timer(name: str, logLevel: int = 10) Iterator[None]

Context manager to log performance data for an arbitrary block of code.

Parameters:
name : str

Name of code being timed; data will be logged using item name: Start and End.

logLevel

A logging level constant.

See also

timer.logInfo

Examples

Creating a timer context:

with self.timer("someCodeToTime"):
    pass  # code to time