ApdbMetricConnections

class lsst.verify.tasks.ApdbMetricConnections(*, config: 'PipelineTaskConfig' | None = None)

    Bases: lsst.verify.tasks.MetricConnections

    An abstract connections class defining a database input.

    Notes

    ApdbMetricConnections defines the following dataset templates (a configuration sketch follows the list):

    package
        Name of the metric’s namespace. By verify_metrics convention, this is the name of the package the metric is most closely associated with.

    metric
        Name of the metric, excluding any namespace.
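    As a hedged illustration of how these templates are usually filled in, the sketch below sets them through the task configuration. It assumes, for illustration only, that ApdbMetricConfig (the config class from this module) can be configured directly; the package and metric names shown are examples, not defaults:

        from lsst.verify.tasks import ApdbMetricConfig

        # Illustrative values only: substitute the namespace and the
        # metric that your task actually measures.
        config = ApdbMetricConfig()
        config.connections.package = "ap_association"
        config.connections.metric = "totalUnassociatedDiaObjects"
        # The 'measurement' output dataset type name then resolves to
        # "metricvalue_ap_association_totalUnassociatedDiaObjects".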
 
Attributes Summary

    allConnections
    dbInfo
    defaultTemplates
    dimensions
    initInputs
    initOutputs
    inputs
    measurement
    outputs
    prerequisiteInputs

Methods Summary

    adjustQuantum(inputs, …)
        Override to make adjustments to lsst.daf.butler.DatasetRef objects in the lsst.daf.butler.core.Quantum during the graph generation stage of the activator.

    buildDatasetRefs(quantum)
        Builds QuantizedConnections corresponding to an input Quantum.

Attributes Documentation
    allConnections = {'dbInfo': Input(name='apdb_marker', storageClass='Config', doc='The dataset from which an APDB instance can be constructed by `dbLoader`. By default this is assumed to be a marker produced by AP processing.', multiple=True, dimensions={'instrument', 'detector', 'visit'}, isCalibration=False, deferLoad=False, minimum=1), 'measurement': Output(name='metricvalue_{package}_{metric}', storageClass='MetricValue', doc='The metric value computed by this task.', multiple=False, dimensions={'instrument'}, isCalibration=False)}

    dbInfo

    defaultTemplates = {'metric': None, 'package': None}

    dimensions = {'instrument'}

    initInputs = frozenset()

    initOutputs = frozenset()

    inputs = frozenset({'dbInfo'})

    measurement

    outputs = frozenset({'measurement'})

    prerequisiteInputs = frozenset()
Methods Documentation
adjustQuantum(inputs: Dict[str, Tuple[lsst.pipe.base.connectionTypes.BaseInput, Collection[lsst.daf.butler.core.datasets.ref.DatasetRef]]], outputs: Dict[str, Tuple[lsst.pipe.base.connectionTypes.Output, Collection[lsst.daf.butler.core.datasets.ref.DatasetRef]]], label: str, data_id: lsst.daf.butler.core.dimensions._coordinate.DataCoordinate) → Tuple[Mapping[str, Tuple[lsst.pipe.base.connectionTypes.BaseInput, Collection[lsst.daf.butler.core.datasets.ref.DatasetRef]]], Mapping[str, Tuple[lsst.pipe.base.connectionTypes.Output, Collection[lsst.daf.butler.core.datasets.ref.DatasetRef]]]]
    Override to make adjustments to lsst.daf.butler.DatasetRef objects in the lsst.daf.butler.core.Quantum during the graph generation stage of the activator.

    Parameters:

        inputs : dict
            Dictionary whose keys are an input (regular or prerequisite) connection name and whose values are a tuple of the connection instance and a collection of associated DatasetRef objects. The exact type of the nested collections is unspecified; it can be assumed to be multi-pass iterable and to support len and in, but it should not be mutated in place. In contrast, the outer dictionaries are guaranteed to be temporary copies that are true dict instances, and hence may be modified and even returned; this is especially useful for delegating to super (see notes below).

        outputs : Mapping
            Mapping of output datasets, with the same structure as inputs.

        label : str
            Label for this task in the pipeline (should be used in all diagnostic messages).

        data_id : lsst.daf.butler.DataCoordinate
            Data ID for this quantum in the pipeline (should be used in all diagnostic messages).
    Returns:

        adjusted_inputs : Mapping
            Mapping of the same form as inputs with updated containers of input DatasetRef objects. Connections that are not changed should not be returned at all. Datasets may only be removed, not added. Nested collections may be of any multi-pass iterable type, and the order of iteration will set the order of iteration within PipelineTask.runQuantum.

        adjusted_outputs : Mapping
            Mapping of updated output datasets, with the same structure and interpretation as adjusted_inputs.
    Raises:

        ScalarError
            Raised if any Input or PrerequisiteInput connection has multiple set to False, but multiple datasets are present.

        NoWorkFound
            Raised to indicate that this quantum should not be run; not enough datasets were found for a regular Input connection, and the quantum should be pruned or skipped.

        FileNotFoundError
            Raised to cause QuantumGraph generation to fail (with the message included in this exception); not enough datasets were found for a PrerequisiteInput connection.
    Notes

    The base class implementation performs important checks. It always returns an empty mapping (i.e., it makes no adjustments). It should always be called via super by custom implementations, ideally at the end of the custom implementation with already-adjusted mappings when any datasets are actually dropped, e.g.:

        def adjustQuantum(self, inputs, outputs, label, data_id):
            # Filter out some dataset refs for one connection.
            connection, old_refs = inputs["my_input"]
            new_refs = [ref for ref in old_refs if ...]
            adjusted_inputs = {"my_input": (connection, new_refs)}
            # Update the original inputs so we can pass them to super.
            inputs.update(adjusted_inputs)
            # Can ignore outputs from super because they are guaranteed
            # to be empty.
            super().adjustQuantum(inputs, outputs, label, data_id)
            # Return only the connections we modified.
            return adjusted_inputs, {}

    Removing outputs here is guaranteed to affect what is actually passed to PipelineTask.runQuantum, but its effect on the larger graph may be deferred to execution, depending on the context in which adjustQuantum is being run: if one quantum removes an output that is needed by a second quantum as input, the second quantum may not be adjusted (and hence pruned or skipped) until that output is actually found to be missing at execution time.

    Tasks that desire zip-iteration consistency between any combinations of connections that have the same data ID should generally implement adjustQuantum to achieve this, even if they could also run that logic during execution; this allows the system to see, as early as possible, which outputs will not be produced because the corresponding input is missing.
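    As a further hedged sketch (not part of the base class API documented here), a custom implementation might prune its own quantum explicitly when a connection it inspects ends up empty; "my_input" is again a hypothetical connection name, and NoWorkFound is assumed to be importable from lsst.pipe.base as referenced in the Raises section above:

        from lsst.pipe.base import NoWorkFound

        def adjustQuantum(self, inputs, outputs, label, data_id):
            # Hypothetical connection "my_input": skip this quantum
            # entirely if no datasets remain for it.
            connection, refs = inputs["my_input"]
            if len(refs) == 0:
                raise NoWorkFound(f"{label}: no 'my_input' datasets for {data_id}")
            # Delegate to the base class for its checks; it returns
            # empty mappings, i.e. no further adjustments.
            return super().adjustQuantum(inputs, outputs, label, data_id)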
buildDatasetRefs(quantum: lsst.daf.butler.core.quantum.Quantum) → Tuple[lsst.pipe.base.connections.InputQuantizedConnection, lsst.pipe.base.connections.OutputQuantizedConnection]

    Builds QuantizedConnections corresponding to an input Quantum.

    Parameters:

        quantum : lsst.daf.butler.Quantum
            Quantum object which defines the inputs and outputs for a given unit of processing.

    Returns:

        retVal : tuple of (InputQuantizedConnection, OutputQuantizedConnection)
            Namespaces mapping attribute names (identifiers of connections) to butler references defined in the input lsst.daf.butler.Quantum.
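    A brief usage sketch, with connections standing for an instantiated connections object and quantum for a lsst.daf.butler.Quantum produced during graph generation (both assumed here for illustration):

        # Split the quantum's dataset refs into per-connection namespaces.
        inputRefs, outputRefs = connections.buildDatasetRefs(quantum)
        # Attributes are named after the connections, e.g. the 'dbInfo'
        # input and the 'measurement' output of this class:
        markers = inputRefs.dbInfo           # collection of DatasetRefs
        result_ref = outputRefs.measurement  # ref for the metric value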
 