TotalUnassociatedDiaObjectsMetricTask

class lsst.ap.association.metrics.TotalUnassociatedDiaObjectsMetricTask(**kwargs)

Bases: lsst.verify.tasks.PpdbMetricTask

Task that computes the number of DIAObjects with only one associated DIASource.

Methods Summary

adaptArgsAndRun(inputData, inputDataIds, …) Compute a measurement from a database.
addStandardMetadata(measurement, outputDataId) Add data ID-specific metadata required for all metrics.
areInputDatasetsScalar(config) Return input dataset multiplicity.
emptyMetadata() Empty (clear) the metadata for this Task and all sub-Tasks.
getAllSchemaCatalogs() Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.
getFullMetadata() Get metadata for all tasks.
getFullName() Get the task name as a hierarchical name including parent task names.
getInputDatasetTypes(config) Return input dataset types for this task.
getName() Get the name of the task.
getOutputMetricName(config) Identify the metric calculated by this MetricTask.
getSchemaCatalogs() Get the schemas generated by this task.
getTaskDict() Get a dictionary of all tasks as a shallow copy.
makeField(doc) Make a lsst.pex.config.ConfigurableField for this task.
makeMeasurement(dbHandle, outputDataId) Compute the number of unassociated DIAObjects.
makeSubtask(name, **keyArgs) Create a subtask as a new instance and assign it as the name attribute of this task.
run(dbInfo) Compute a measurement from a database.
timer(name[, logLevel]) Context manager to log performance data for an arbitrary block of code.

Methods Documentation

adaptArgsAndRun(inputData, inputDataIds, outputDataId)

Compute a measurement from a database.

Parameters:
inputData : dict [str, any]

Dictionary with one key:

"dbInfo"

The dataset (of the type indicated by getInputDatasetTypes) from which to load the database.

inputDataIds : dict [str, data ID]

Dictionary with one key:

"dbInfo"

The data ID of the input data. Since there can only be one prompt products database per dataset, the value must be an empty data ID.

outputDataId : dict [str, data ID]

Dictionary with one key:

"measurement"

The data ID for the measurement, at the appropriate level of granularity for the metric.

Returns:
result : lsst.pipe.base.Struct

Result struct with component:

measurement

the value of the metric computed over the portion of the dataset that matches outputDataId (lsst.verify.Measurement or None)

Raises:
lsst.verify.tasks.MetricComputationError

Raised if an algorithmic or system error prevents calculation of the metric.

Notes

This implementation calls dbLoader to acquire a database handle, then passes it and the value of outputDataId to makeMeasurement. The result of makeMeasurement is returned to the caller.
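
As an illustration, a minimal sketch of calling this entry point directly is shown below. It assumes the task can be constructed with its default configuration and that a dbInfo dataset has already been retrieved from the Butler; the placeholder assignment is not part of this task's API.

from lsst.ap.association.metrics import TotalUnassociatedDiaObjectsMetricTask

config = TotalUnassociatedDiaObjectsMetricTask.ConfigClass()
task = TotalUnassociatedDiaObjectsMetricTask(config=config)

dbInfo = ...  # placeholder: the dataset reported by getInputDatasetTypes(config)

result = task.adaptArgsAndRun(
    inputData={"dbInfo": dbInfo},
    inputDataIds={"dbInfo": {}},       # must be an empty data ID
    outputDataId={"measurement": {}},  # empty: the metric covers the whole dataset
)
if result.measurement is not None:
    print(result.measurement.quantity)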

addStandardMetadata(measurement, outputDataId)

Add data ID-specific metadata required for all metrics.

This method currently does not add any metadata, but may do so in the future.

Parameters:
measurement : lsst.verify.Measurement

The Measurement that the metadata are added to.

outputDataId : dataId

The data ID to which the measurement applies, at the appropriate level of granularity.

Notes

This method must be called by any subclass that overrides adaptArgsAndRun, but should be ignored otherwise. It should not be overridden by subclasses.

This method is not responsible for shared metadata like the execution environment (which should be added by this MetricTask’s caller), nor for metadata specific to a particular metric (which should be added when the metric is calculated).

Warning

This method’s signature will change whenever additional data needs to be provided. This is a deliberate restriction to ensure that all subclasses pass in the new data as well.
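
For illustration only, a hypothetical subclass that overrides adaptArgsAndRun would call addStandardMetadata on each measurement it produces. The database-loading details below, in particular the attribute returned by the dbLoader subtask, are assumptions and not part of this task's documented API.

import lsst.pipe.base
from lsst.verify.tasks import PpdbMetricTask

class CustomDbMetricTask(PpdbMetricTask):
    """Hypothetical subclass, shown only to illustrate the calling convention."""

    def adaptArgsAndRun(self, inputData, inputDataIds, outputDataId):
        # Assumed: the dbLoader subtask returns a struct whose database handle
        # is exposed as an attribute; consult the subtask's documentation.
        db = self.dbLoader.run(inputData["dbInfo"]).ppdb
        measurement = self.makeMeasurement(db, outputDataId["measurement"])
        if measurement is not None:
            # Required of any override of adaptArgsAndRun.
            self.addStandardMetadata(measurement, outputDataId["measurement"])
        return lsst.pipe.base.Struct(measurement=measurement)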

classmethod areInputDatasetsScalar(config)

Return input dataset multiplicity.

Parameters:
config : cls.ConfigClass

Configuration for this task.

Returns:
datasets : Dict [str, bool]

Dictionary where the key is the name of the input dataset (must match a parameter to run) and the value is True if run takes only one object and False if it takes a list.

Notes

The default implementation extracts a PipelineTaskConnections object from config.
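
For example, this task takes a single database marker per run, so the returned dictionary is expected to take the form sketched below.

from lsst.ap.association.metrics import TotalUnassociatedDiaObjectsMetricTask

config = TotalUnassociatedDiaObjectsMetricTask.ConfigClass()
multiplicity = TotalUnassociatedDiaObjectsMetricTask.areInputDatasetsScalar(config)
# Expected to resemble {"dbInfo": True}: run takes one dbInfo object, not a list.
print(multiplicity)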

emptyMetadata()

Empty (clear) the metadata for this Task and all sub-Tasks.

getAllSchemaCatalogs()

Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns:
schemacatalogs : dict

Keys are butler dataset types; values are empty catalogs (instances of the appropriate lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down through all subtasks.

Notes

This method may be called on any task in the hierarchy; it will return the same answer, regardless.

The default implementation should always suffice. If your subtask uses schemas, override Task.getSchemaCatalogs, not this method.

getFullMetadata()

Get metadata for all tasks.

Returns:
metadata : lsst.daf.base.PropertySet

The PropertySet keys are the full task names. Values are metadata for the top-level task and all subtasks, sub-subtasks, etc.

Notes

The returned metadata includes timing information (if @timer.timeMethod is used) and any metadata set by the task. The name of each item consists of the full task name with . replaced by :, followed by . and the name of the item, e.g.:

topLevelTaskName:subtaskName:subsubtaskName.itemName

Using : in the full task name disambiguates the rare situation in which a task has a subtask and a metadata item with the same name.
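
A brief sketch of inspecting the aggregated metadata, assuming task is an already-constructed instance as in the earlier sketch:

metadata = task.getFullMetadata()
# Keys are full task names with "." replaced by ":"; each entry holds that
# task's recorded metadata (timing information, counters, etc.).
print(metadata)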

getFullName()

Get the task name as a hierarchical name including parent task names.

Returns:
fullName : str

The full name consists of the name of the parent task and each subtask separated by periods. For example:

  • The full name of top-level task “top” is simply “top”.
  • The full name of subtask “sub” of top-level task “top” is “top.sub”.
  • The full name of subtask “sub2” of subtask “sub” of top-level task “top” is “top.sub.sub2”.
classmethod getInputDatasetTypes(config)

Return input dataset types for this task.

Parameters:
config : cls.ConfigClass

Configuration for this task.

Returns:
datasets : dict from str to str

Dictionary where the key is the name of the input dataset (must match a parameter to run) and the value is the name of its Butler dataset type.

Notes

The default implementation extracts a PipelineTaskConnections object from config.
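
A sketch of querying the dataset types; the actual Butler dataset type mapped to "dbInfo" depends on the configuration, so it is not shown here.

from lsst.ap.association.metrics import TotalUnassociatedDiaObjectsMetricTask

config = TotalUnassociatedDiaObjectsMetricTask.ConfigClass()
datasetTypes = TotalUnassociatedDiaObjectsMetricTask.getInputDatasetTypes(config)
# Expected to map "dbInfo" to the configured Butler dataset type name.
print(datasetTypes)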

getName()

Get the name of the task.

Returns:
taskName : str

Name of the task.

See also

getFullName

classmethod getOutputMetricName(config)

Identify the metric calculated by this MetricTask.

Parameters:
config : cls.ConfigClass

Configuration for this MetricTask.

Returns:
metric : lsst.verify.Name

The name of the metric computed by objects of this class when configured with config.
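
A sketch of querying the metric name; the specific lsst.verify.Name that is printed (presumably one defined in the ap_association metrics package) should be confirmed against the configuration rather than taken from this example.

from lsst.ap.association.metrics import TotalUnassociatedDiaObjectsMetricTask

config = TotalUnassociatedDiaObjectsMetricTask.ConfigClass()
metricName = TotalUnassociatedDiaObjectsMetricTask.getOutputMetricName(config)
print(metricName)  # an lsst.verify.Name identifying the unassociated-object count metric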

getSchemaCatalogs()

Get the schemas generated by this task.

Returns:
schemaCatalogs : dict

Keys are butler dataset types; values are empty catalogs (instances of the appropriate lsst.afw.table Catalog type) for this task.

See also

Task.getAllSchemaCatalogs

Notes

Warning

Subclasses that use schemas must override this method. The default implementation returns an empty dict.

This method may be called at any time after the Task is constructed, which means that all task schemas should be computed at construction time, not when data is actually processed. This reflects the philosophy that the schema should not depend on the data.

Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.
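
This metric task produces no catalogs, so its inherited empty-dict implementation is appropriate. For a task that did produce catalogs, a hypothetical override might look like the following sketch; all names here are placeholders.

import lsst.afw.table as afwTable
import lsst.pex.config as pexConfig
import lsst.pipe.base as pipeBase

class CatalogProducingTask(pipeBase.Task):
    """Hypothetical task, shown only to illustrate the override pattern."""

    ConfigClass = pexConfig.Config
    _DefaultName = "catalogProducing"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Per the note above, the schema is fixed at construction time.
        self.schema = afwTable.SourceTable.makeMinimalSchema()

    def getSchemaCatalogs(self):
        # "myCatalog" is a placeholder Butler dataset type name.
        return {"myCatalog": afwTable.SourceCatalog(self.schema)}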

getTaskDict()

Get a dictionary of all tasks as a shallow copy.

Returns:
taskDict : dict

Dictionary mapping full task name to task object, for the top-level task and all subtasks, sub-subtasks, etc.

classmethod makeField(doc)

Make a lsst.pex.config.ConfigurableField for this task.

Parameters:
doc : str

Help text for the field.

Returns:
configurableField : lsst.pex.config.ConfigurableField

A ConfigurableField for this task.

Examples

Provides a convenient way to specify this task is a subtask of another task.

Here is an example of use:

class OtherTaskConfig(lsst.pex.config.Config):
    aSubtask = ATaskClass.makeField("a brief description of what this task does")
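
The parent task would then typically create the subtask in its constructor with self.makeSubtask("aSubtask"); see makeSubtask below.
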
makeMeasurement(dbHandle, outputDataId)

Compute the number of unassociated DIAObjects.

Parameters:
dbHandle : lsst.dax.ppdb.Ppdb

A database instance.

outputDataId : any data ID type

The subset of the database to which this measurement applies. Must be empty, as the number of unassociated sources is ill-defined for subsets of the dataset.

Returns:
measurement : lsst.verify.Measurement

The total number of unassociated objects.

Raises:
MetricComputationError

Raised on any failure to query the database.

ValueError

Raised if outputDataId is not empty.
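
A sketch of calling this method directly, assuming an lsst.dax.ppdb.Ppdb handle has already been constructed elsewhere; how that handle is configured is outside the scope of this example.

from lsst.ap.association.metrics import TotalUnassociatedDiaObjectsMetricTask

task = TotalUnassociatedDiaObjectsMetricTask(
    config=TotalUnassociatedDiaObjectsMetricTask.ConfigClass())

dbHandle = ...  # placeholder: an existing lsst.dax.ppdb.Ppdb instance

measurement = task.makeMeasurement(dbHandle, {})  # data ID must be empty
print(measurement.quantity)  # total number of unassociated DIAObjects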

makeSubtask(name, **keyArgs)

Create a subtask as a new instance and assign it as the name attribute of this task.

Parameters:
name : str

Brief name of the subtask.

keyArgs

Extra keyword arguments used to construct the task. The following arguments are automatically provided and cannot be overridden:

  • “config”.
  • “parentTask”.

Notes

The subtask must be defined by Task.config.name, an instance of pex_config ConfigurableField or RegistryField.
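
A hypothetical parent task illustrating the convention; the field and subtask names below are placeholders.

import lsst.pex.config as pexConfig
import lsst.pipe.base as pipeBase
from lsst.ap.association.metrics import TotalUnassociatedDiaObjectsMetricTask

class ParentConfig(pexConfig.Config):
    metricTask = TotalUnassociatedDiaObjectsMetricTask.makeField(
        "Subtask that counts unassociated DIAObjects.")

class ParentTask(pipeBase.Task):
    """Hypothetical parent task, shown only to illustrate makeSubtask."""

    ConfigClass = ParentConfig
    _DefaultName = "parent"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Creates self.metricTask from self.config.metricTask;
        # "config" and "parentTask" are supplied automatically.
        self.makeSubtask("metricTask")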

run(dbInfo)

Compute a measurement from a database.

Parameters:
dbInfo

The dataset (of the type indicated by getInputDatasetTypes) from which to load the database.

Returns:
result : lsst.pipe.base.Struct

Result struct with component:

measurement

the value of the metric computed over the entire database (lsst.verify.Measurement or None)

Raises:
MetricComputationError

Raised if an algorithmic or system error prevents calculation of the metric.

Notes

This method is provided purely for compatibility with frameworks that don’t support adaptArgsAndRun. The latter method should be considered the primary entry point for this task, as it lets callers define metrics that apply to only a subset of the data.
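
A minimal sketch of this compatibility entry point, assuming task is the instance from the earlier sketches and dbInfo was obtained beforehand:

dbInfo = ...  # placeholder: the dataset reported by getInputDatasetTypes

result = task.run(dbInfo)
if result.measurement is not None:
    print(result.measurement.metric_name, result.measurement.quantity)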

timer(name, logLevel=10000)

Context manager to log performance data for an arbitrary block of code.

Parameters:
name : str

Name of code being timed; data will be logged using item name: Start and End.

logLevel

A lsst.log level constant.

See also

timer.logInfo

Examples

Creating a timer context:

with self.timer("someCodeToTime"):
    pass  # code to time