ApdbMetricTask

class lsst.verify.tasks.ApdbMetricTask(**kwargs)

Bases: lsst.verify.tasks.MetricTask

A base class for tasks that compute metrics from an alert production database.

Parameters:
- **kwargs
  Constructor parameters are the same as for lsst.pipe.base.PipelineTask.
Notes

This class should be customized by overriding makeMeasurement. You should not need to override run.
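For example, a minimal subclass might look like the following sketch. The task name, the metric name, and the countUnassociatedObjects query are assumptions made for illustration only, and a real subclass would also define its own ConfigClass and connections:

import astropy.units as u

from lsst.verify import Measurement
from lsst.verify.tasks import ApdbMetricTask, MetricComputationError


class TotalObjectsMetricTask(ApdbMetricTask):
    """Hypothetical subclass that measures one number from the APDB."""

    _DefaultName = "totalObjects"

    def makeMeasurement(self, dbHandle, outputDataId):
        # dbHandle is an lsst.dax.apdb.Apdb instance; the query used here
        # (countUnassociatedObjects) is only an assumption for illustration.
        try:
            nObjects = dbHandle.countUnassociatedObjects()
        except Exception as e:
            # Chain MetricComputationError to the root cause, as required.
            raise MetricComputationError("could not query the APDB") from e
        # The metric name is hypothetical; use the metric your package defines.
        return Measurement("ap_association.totalUnassociatedDiaObjects",
                           nObjects * u.count)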
Attributes Summary

canMultiprocess

Methods Summary

adaptArgsAndRun(inputData, inputDataIds, …)
    A wrapper around run used by MetricsControllerTask.
addStandardMetadata(measurement, outputDataId)
    Add data ID-specific metadata required for all metrics.
areInputDatasetsScalar(config)
    Return input dataset multiplicity.
emptyMetadata()
    Empty (clear) the metadata for this Task and all sub-Tasks.
getAllSchemaCatalogs()
    Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.
getFullMetadata()
    Get metadata for all tasks.
getFullName()
    Get the task name as a hierarchical name including parent task names.
getInputDatasetTypes(config)
    Return input dataset types for this task.
getName()
    Get the name of the task.
getResourceConfig()
    Return resource configuration for this task.
getSchemaCatalogs()
    Get the schemas generated by this task.
getTaskDict()
    Get a dictionary of all tasks as a shallow copy.
makeField(doc)
    Make a lsst.pex.config.ConfigurableField for this task.
makeMeasurement(dbHandle, outputDataId)
    Compute the metric from database data.
makeSubtask(name, **keyArgs)
    Create a subtask as a new instance as the name attribute of this task.
run(dbInfo[, outputDataId])
    Compute a measurement from a database.
runQuantum(butlerQC, inputRefs, outputRefs)
    Do Butler I/O to provide in-memory objects for run.
timer(name[, logLevel])
    Context manager to log performance data for an arbitrary block of code.

Attributes Documentation
canMultiprocess = True
Methods Documentation
adaptArgsAndRun(inputData, inputDataIds, outputDataId)

A wrapper around run used by MetricsControllerTask.

Task developers should not need to call or override this method.

Parameters:
- inputData : dict from str to any
  Dictionary whose keys are the names of input parameters and values are Python-domain data objects (or lists of objects) retrieved from the data butler. Input objects may be None to represent missing data.
- inputDataIds : dict from str to list of dataId
  Dictionary whose keys are the names of input parameters and values are data IDs (or lists of data IDs) that the task consumes for the corresponding dataset type. Data IDs are guaranteed to match data objects in inputData.
- outputDataId : dict from str to dataId
  Dictionary containing a single key, "measurement", which maps to a single data ID for the measurement. The data ID must have the same granularity as the metric.

Returns:
- struct : lsst.pipe.base.Struct
  A Struct containing at least the following component:
  - measurement : the value of the metric, computed from inputData (lsst.verify.Measurement or None). The measurement is guaranteed to contain not only the value of the metric, but also any mandatory supplementary information.

Raises:
- lsst.verify.tasks.MetricComputationError
  Raised if an algorithmic or system error prevents calculation of the metric. Examples include corrupted input data or unavoidable exceptions raised by analysis code. The MetricComputationError should be chained to a more specific exception describing the root cause.
  Not having enough data for a metric to be applicable is not an error, and should not trigger this exception.

Notes

This implementation calls run on the contents of inputData, then calls addStandardMetadata on the result before returning it.

Examples

Consider a metric that characterizes PSF variations across the entire field of view, given processed images. Then, if run has the signature run(images):

inputData = {'images': [image1, image2, ...]}
inputDataIds = {'images': [{'visit': 42, 'ccd': 1},
                           {'visit': 42, 'ccd': 2},
                           ...]}
outputDataId = {'measurement': {'visit': 42}}
result = task.adaptArgsAndRun(
    inputData, inputDataIds, outputDataId)
addStandardMetadata(measurement, outputDataId)

Add data ID-specific metadata required for all metrics.

This method currently does not add any metadata, but may do so in the future.

Parameters:
- measurement : lsst.verify.Measurement
  The Measurement that the metadata are added to.
- outputDataId : dataId
  The data ID to which the measurement applies, at the appropriate level of granularity.

Notes

This method should not be overridden by subclasses.

This method is not responsible for shared metadata like the execution environment (which should be added by this MetricTask’s caller), nor for metadata specific to a particular metric (which should be added when the metric is calculated).

Warning

This method’s signature will change whenever additional data needs to be provided. This is a deliberate restriction to ensure that all subclasses pass in the new data as well.
classmethod areInputDatasetsScalar(config)

Return input dataset multiplicity.

Parameters:
- config : cls.ConfigClass
  Configuration for this task.

Returns:

Notes

The default implementation extracts a PipelineTaskConnections object from config.
emptyMetadata()

Empty (clear) the metadata for this Task and all sub-Tasks.
getAllSchemaCatalogs()

Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns:
- schemacatalogs : dict
  Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down through all subtasks.

Notes

This method may be called on any task in the hierarchy; it will return the same answer, regardless.

The default implementation should always suffice. If your subtask uses schemas, then override Task.getSchemaCatalogs, not this method.
getFullMetadata()

Get metadata for all tasks.

Returns:
- metadata : lsst.daf.base.PropertySet
  The PropertySet keys are the full task name. Values are metadata for the top-level task and all subtasks, sub-subtasks, etc.

Notes

The returned metadata includes timing information (if @timer.timeMethod is used) and any metadata set by the task. The name of each item consists of the full task name with . replaced by :, followed by . and the name of the item, e.g.:

topLevelTaskName:subtaskName:subsubtaskName.itemName

Using : in the full task name disambiguates the rare situation that a task has a subtask and a metadata item with the same name.
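For example, all item names can be listed by iterating over the returned PropertySet (a small illustrative snippet; task stands for any constructed task):

metadata = task.getFullMetadata()
# Item names follow the "topLevelTaskName:subtaskName.itemName" convention.
for name in metadata.names(topLevelOnly=False):
    print(name)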
getFullName()

Get the task name as a hierarchical name including parent task names.

Returns:
- fullName : str
  The full name consists of the name of the parent task and each subtask separated by periods. For example:
  - The full name of top-level task “top” is simply “top”.
  - The full name of subtask “sub” of top-level task “top” is “top.sub”.
  - The full name of subtask “sub2” of subtask “sub” of top-level task “top” is “top.sub.sub2”.
classmethod getInputDatasetTypes(config)

Return input dataset types for this task.

Parameters:
- config : cls.ConfigClass
  Configuration for this task.

Returns:

Notes

The default implementation extracts a PipelineTaskConnections object from config.
getResourceConfig()

Return resource configuration for this task.

Returns:
- An object of type config.ResourceConfig, or None if resource configuration is not defined for this task.
getSchemaCatalogs()

Get the schemas generated by this task.

Returns:
- schemaCatalogs : dict
  Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for this task.

See also

Task.getAllSchemaCatalogs

Notes

Warning

Subclasses that use schemas must override this method. The default implementation returns an empty dict.

This method may be called at any time after the Task is constructed, which means that all task schemas should be computed at construction time, not when data is actually processed. This reflects the philosophy that the schema should not depend on the data.

Returning catalogs rather than just schemas allows us to save, e.g., slots for SourceCatalog as well.
getTaskDict()

Get a dictionary of all tasks as a shallow copy.

Returns:
- taskDict : dict
  Dictionary mapping full task name to task object, for the top-level task and all subtasks, sub-subtasks, etc.
classmethod makeField(doc)

Make a lsst.pex.config.ConfigurableField for this task.

Parameters:
- doc : str
  Help text for the field.

Returns:
- configurableField : lsst.pex.config.ConfigurableField
  A ConfigurableField for this task.

Examples

Provides a convenient way to specify that this task is a subtask of another task.

Here is an example of use:

class OtherTaskConfig(lsst.pex.config.Config):
    aSubtask = ATaskClass.makeField("brief description of task")
makeMeasurement(dbHandle, outputDataId)

Compute the metric from database data.

Parameters:
- dbHandle : lsst.dax.apdb.Apdb
  A database instance.
- outputDataId : any data ID type
  The subset of the database to which this measurement applies. May be empty to represent the entire dataset.

Returns:
- measurement : lsst.verify.Measurement or None
  The measurement corresponding to the input data.

Raises:
- MetricComputationError
  Raised if an algorithmic or system error prevents calculation of the metric. See run for expected behavior.
makeSubtask(name, **keyArgs)

Create a subtask as a new instance as the name attribute of this task.

Parameters:
- name : str
  Brief name of the subtask.
- keyArgs
  Extra keyword arguments used to construct the task. The following arguments are automatically provided and cannot be overridden:
  - “config”.
  - “parentTask”.

Notes

The subtask must be defined by Task.config.name, an instance of ConfigurableField or RegistryField.
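For example, continuing the hypothetical OtherTaskConfig and ATaskClass names from the makeField example above, the parent task would typically create the subtask in its constructor:

import lsst.pex.config
import lsst.pipe.base


class OtherTaskConfig(lsst.pex.config.Config):
    # ATaskClass is the hypothetical subtask class from the makeField example.
    aSubtask = ATaskClass.makeField("brief description of task")


class OtherTask(lsst.pipe.base.Task):
    ConfigClass = OtherTaskConfig
    _DefaultName = "otherTask"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Instantiates the task configured by config.aSubtask and exposes it
        # as self.aSubtask.
        self.makeSubtask("aSubtask")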
run(dbInfo, outputDataId={})

Compute a measurement from a database.

Parameters:
- dbInfo : list
  The datasets (of the type indicated by the config) from which to load the database. If more than one dataset is provided (as may be the case if DB writes are fine-grained), all are assumed identical.
- outputDataId : any data ID type, optional
  The output data ID for the metric value. Defaults to the empty ID, representing a value that covers the entire dataset.

Returns:
- result : lsst.pipe.base.Struct
  Result struct with component:
  - measurement : the value of the metric (lsst.verify.Measurement or None)

Raises:
- MetricComputationError
  Raised if an algorithmic or system error prevents calculation of the metric.

Notes

This implementation calls dbLoader to acquire a database handle (taking None if no input), then passes it and the value of outputDataId to makeMeasurement. The result of makeMeasurement is returned to the caller.
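For illustration, a caller that has already loaded the dbInfo dataset(s) in memory (dbInfoDataset is a placeholder for whatever dataset type the config specifies) might use run as follows:

result = task.run([dbInfoDataset], outputDataId={})
measurement = result.measurement   # lsst.verify.Measurement or None
if measurement is not None:
    print(measurement.quantity)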
runQuantum(butlerQC, inputRefs, outputRefs)

Do Butler I/O to provide in-memory objects for run.

This specialization of runQuantum passes the output data ID to run.
timer(name, logLevel=10000)

Context manager to log performance data for an arbitrary block of code.

Parameters:
- name : str
  Name of code being timed; data will be logged using item name: Start and End.
- logLevel
  A lsst.log level constant.

See also

timer.logInfo

Examples

Creating a timer context:

with self.timer("someCodeToTime"):
    pass  # code to time