MetricConnections

- class lsst.verify.tasks.MetricConnections(*, config: 'PipelineTaskConfig' | None = None)
- Bases: `PipelineTaskConnections`

  An abstract connections class defining a metric output.

  This class assumes detector-level metrics, which is the most common case. Subclasses can redeclare `measurement` and `dimensions` to override this assumption.

  Notes

  `MetricConnections` defines the following dataset templates:
- package
- Name of the metric’s namespace. By verify_metrics convention, this is the name of the package the metric is most closely associated with. 
- metric
- Name of the metric, excluding any namespace. 
 
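To make the templates concrete, here is a minimal sketch of how the default measurement output name `metricvalue_{package}_{metric}` expands once template values are supplied; the template string matches the `allConnections` entry below, while the `package` and `metric` values are hypothetical examples, not defaults shipped with the class:

```python
# Sketch of connection-name template expansion. The template string is
# the class's default measurement output name; the package/metric
# values below are illustrative assumptions only.
template = "metricvalue_{package}_{metric}"
templates = {"package": "ip_diffim", "metric": "fracDiaSourcesToSciSources"}
dataset_type_name = template.format(**templates)
print(dataset_type_name)  # metricvalue_ip_diffim_fracDiaSourcesToSciSources
```

Subclasses that set both template values this way get a per-metric dataset type name without redeclaring the connection itself.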
- Attributes Summary

- Methods Summary

  - adjustQuantum(inputs, outputs, label, data_id)
    Override to make adjustments to `lsst.daf.butler.DatasetRef` objects in the `lsst.daf.butler.core.Quantum` during the graph generation stage of the activator.

  - buildDatasetRefs(quantum)
    Builds QuantizedConnections corresponding to an input Quantum.

- Attributes Documentation

  - allConnections: Dict[str, BaseConnection] = {'measurement': Output(name='metricvalue_{package}_{metric}', storageClass='MetricValue', doc='The metric value computed by this task.', multiple=False, dimensions={'instrument', 'detector', 'visit'}, isCalibration=False)}
  - defaultTemplates = {'metric': None, 'package': None}

  - measurement
- Methods Documentation

  - adjustQuantum(inputs: Dict[str, Tuple[BaseInput, Collection[DatasetRef]]], outputs: Dict[str, Tuple[Output, Collection[DatasetRef]]], label: str, data_id: DataCoordinate) → Tuple[Mapping[str, Tuple[BaseInput, Collection[DatasetRef]]], Mapping[str, Tuple[Output, Collection[DatasetRef]]]]

    Override to make adjustments to `lsst.daf.butler.DatasetRef` objects in the `lsst.daf.butler.core.Quantum` during the graph generation stage of the activator.

    Parameters:
    - inputs : dict
      Dictionary whose keys are an input (regular or prerequisite) connection name and whose values are a tuple of the connection instance and a collection of associated `DatasetRef` objects. The exact type of the nested collections is unspecified; it can be assumed to be multi-pass iterable and to support `len` and `in`, but it should not be mutated in place. In contrast, the outer dictionaries are guaranteed to be temporary copies that are true `dict` instances, and hence may be modified and even returned; this is especially useful for delegating to `super` (see notes below).
    - outputs : Mapping
      Mapping of output datasets, with the same structure as `inputs`.
    - label : str
      Label for this task in the pipeline (should be used in all diagnostic messages).
    - data_id : lsst.daf.butler.DataCoordinate
      Data ID for this quantum in the pipeline (should be used in all diagnostic messages).
 
    Returns:
    - adjusted_inputs : Mapping
      Mapping of the same form as `inputs` with updated containers of input `DatasetRef` objects. Connections that are not changed should not be returned at all. Datasets may only be removed, not added. Nested collections may be of any multi-pass iterable type, and their order of iteration will set the order of iteration within `PipelineTask.runQuantum`.
    - adjusted_outputs : Mapping
      Mapping of updated output datasets, with the same structure and interpretation as `adjusted_inputs`.
 
    Raises:
    - ScalarError
      Raised if any `Input` or `PrerequisiteInput` connection has `multiple` set to `False`, but multiple datasets are present.
    - NoWorkFound
      Raised to indicate that this quantum should not be run; not enough datasets were found for a regular `Input` connection, and the quantum should be pruned or skipped.
    - FileNotFoundError
      Raised to cause QuantumGraph generation to fail (with the message included in this exception); not enough datasets were found for a `PrerequisiteInput` connection.
 
  - Notes

    The base class implementation performs important checks. It always returns an empty mapping (i.e. it makes no adjustments). It should always be called via `super` by custom implementations, ideally at the end of the custom implementation with already-adjusted mappings when any datasets are actually dropped, e.g.:

    ```python
    def adjustQuantum(self, inputs, outputs, label, data_id):
        # Filter out some dataset refs for one connection.
        connection, old_refs = inputs["my_input"]
        new_refs = [ref for ref in old_refs if ...]
        adjusted_inputs = {"my_input": (connection, new_refs)}
        # Update the original inputs so we can pass them to super.
        inputs.update(adjusted_inputs)
        # Can ignore outputs from super because they are guaranteed
        # to be empty.
        super().adjustQuantum(inputs, outputs, label, data_id)
        # Return only the connections we modified.
        return adjusted_inputs, {}
    ```

    Removing outputs here is guaranteed to affect what is actually passed to `PipelineTask.runQuantum`, but its effect on the larger graph may be deferred to execution, depending on the context in which `adjustQuantum` is being run: if one quantum removes an output that is needed by a second quantum as input, the second quantum may not be adjusted (and hence pruned or skipped) until that output is actually found to be missing at execution time.

    Tasks that desire zip-iteration consistency between any combinations of connections that have the same data ID should generally implement `adjustQuantum` to achieve this, even if they could also run that logic during execution; this allows the system to see, as early as possible, outputs that will not be produced because the corresponding input is missing.
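The contract above (return only the modified connections; raise when a regular input ends up empty) can be sketched with plain Python containers. Everything here is a stand-in: `NoWorkFound` below is a locally defined placeholder for the real `lsst.pipe.base.NoWorkFound` exception, and the connection names and refs are hypothetical:

```python
# Minimal stand-in sketch of the adjustQuantum filtering contract,
# using plain tuples and lists instead of the real lsst.pipe.base and
# lsst.daf.butler classes.
class NoWorkFound(Exception):
    """Stand-in: signals that this quantum has nothing to do."""

def adjust_inputs(inputs, keep):
    """Drop refs not in `keep`; return only the connections we changed."""
    adjusted = {}
    for name, (connection, refs) in inputs.items():
        new_refs = [ref for ref in refs if ref in keep]
        if not new_refs:
            # A regular input with no datasets left means no work.
            raise NoWorkFound(f"No datasets remain for connection {name!r}.")
        if len(new_refs) < len(refs):
            # Only changed connections are reported back.
            adjusted[name] = (connection, new_refs)
    return adjusted

inputs = {"my_input": ("connection", ["ref1", "ref2", "ref3"])}
adjusted = adjust_inputs(inputs, keep={"ref1", "ref3"})
print(adjusted)  # {'my_input': ('connection', ['ref1', 'ref3'])}
```

Note that, as in the real API, datasets are only ever removed, never added, and an unchanged connection is simply omitted from the returned mapping.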
  - buildDatasetRefs(quantum: Quantum) → Tuple[InputQuantizedConnection, OutputQuantizedConnection]

    Builds QuantizedConnections corresponding to an input Quantum.

    Parameters:
    - quantum : lsst.daf.butler.Quantum
      Quantum object which defines the inputs and outputs for a given unit of processing.
 
    Returns:
    - retVal : tuple of (InputQuantizedConnection, OutputQuantizedConnection)
      Namespaces mapping attribute names (identifiers of connections) to butler references defined in the input `lsst.daf.butler.Quantum`.
 
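To make the return shape concrete, here is a hedged sketch in which plain `SimpleNamespace` objects stand in for the real `InputQuantizedConnection` and `OutputQuantizedConnection` classes; the connection name `measurement` matches the class above, while `some_input` and all ref strings are hypothetical:

```python
from types import SimpleNamespace

# Stand-in sketch: buildDatasetRefs returns namespace-like objects
# whose attributes are named after connections. SimpleNamespace models
# that shape here; the real classes also carry butler references.
def build_dataset_refs(quantum_inputs, quantum_outputs):
    """Turn connection-name -> refs mappings into attribute namespaces."""
    input_refs = SimpleNamespace(**quantum_inputs)
    output_refs = SimpleNamespace(**quantum_outputs)
    return input_refs, output_refs

in_refs, out_refs = build_dataset_refs(
    {"some_input": ["ref-A"]},
    {"measurement": ["metricvalue-ref"]},
)
print(out_refs.measurement)  # ['metricvalue-ref']
```

Accessing refs by attribute, rather than by dictionary key, is what lets `runQuantum` implementations write `outputRefs.measurement` for the connection declared on the class.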