UpdateVisitSummaryConnections

class lsst.drp.tasks.update_visit_summary.UpdateVisitSummaryConnections(*, config: PipelineTaskConfig | None = None)

Bases: PipelineTaskConnections
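
UpdateVisitSummaryConnections declares the inputs and outputs of the visit-summary update task; the individual connections are documented under Attributes below. For orientation, the following is a minimal, illustrative sketch (not the actual lsst.drp.tasks source) of how a PipelineTaskConnections subclass with these dimensions, templates, and two of the connections shown on this page would be declared:

    # Minimal sketch of a connections-class declaration; names and docs are
    # taken from the attribute documentation below, but this is not the
    # actual lsst.drp.tasks source.
    from lsst.pipe.base import PipelineTaskConnections
    import lsst.pipe.base.connectionTypes as cT


    class ExampleVisitSummaryConnections(
        PipelineTaskConnections,
        dimensions=("instrument", "visit"),
        defaultTemplates={"photoCalibName": "fgcm", "skyWcsName": "gbdesAstrometricFit"},
    ):
        input_summary_catalog = cT.Input(
            name="visitSummary",
            storageClass="ExposureCatalog",
            doc="Visit summary table to load and modify.",
            dimensions=("instrument", "visit"),
        )
        output_summary_catalog = cT.Output(
            name="finalVisitSummary",
            storageClass="ExposureCatalog",
            doc="Visit-level catalog summarizing all image characterizations and calibrations.",
            dimensions=("instrument", "visit"),
        )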

Attributes Summary

allConnections

Mapping holding all connection attributes.

ap_corr_overrides

Class used for declaring PipelineTask input connections

background_originals

Class used for declaring PipelineTask input connections

background_overrides

Class used for declaring PipelineTask input connections

defaultTemplates

dimensions

Set of dimension names that define the unit of work for this task.

initInputs

Set with the names of all InitInput connection attributes.

initOutputs

Set with the names of all InitOutput connection attributes.

input_exposures

Class used for declaring PipelineTask input connections

input_summary_catalog

Class used for declaring PipelineTask input connections

input_summary_schema

inputs

Set with the names of all connectionTypes.Input connection attributes.

output_summary_catalog

output_summary_schema

outputs

Set with the names of all Output connection attributes.

photo_calib_overrides_global

Class used for declaring PipelineTask input connections

photo_calib_overrides_tract

Class used for declaring PipelineTask input connections

prerequisiteInputs

Set with the names of all PrerequisiteInput connection attributes.

psf_overrides

Class used for declaring PipelineTask input connections

psf_star_catalog

Class used for declaring PipelineTask input connections

sky_map

Class used for declaring PipelineTask input connections

wcs_overrides_global

Class used for declaring PipelineTask input connections

wcs_overrides_tract

Class used for declaring PipelineTask input connections

Methods Summary

adjustQuantum(inputs, outputs, label, data_id)

Override to make adjustments to lsst.daf.butler.DatasetRef objects in the lsst.daf.butler.core.Quantum during the graph generation stage of the activator.

buildDatasetRefs(quantum)

Builds QuantizedConnections corresponding to input Quantum

Attributes Documentation

allConnections: Mapping[str, BaseConnection] = {'ap_corr_overrides': Input(name='finalized_psf_ap_corr_catalog', storageClass='ExposureCatalog', doc='Visit-level catalog of updated aperture correction maps to use.', multiple=False, dimensions=('instrument', 'visit'), isCalibration=False, deferLoad=False, minimum=1, deferGraphConstraint=True), 'background_originals': Input(name='calexpBackground', storageClass='Background', doc="Per-detector original background that has already been subtracted from 'input_exposures'.", multiple=True, dimensions=('instrument', 'visit', 'detector'), isCalibration=False, deferLoad=True, minimum=1, deferGraphConstraint=True), 'background_overrides': Input(name='skyCorr', storageClass='Background', doc="Per-detector background that can be subtracted directly from 'input_exposures'.", multiple=True, dimensions=('instrument', 'visit', 'detector'), isCalibration=False, deferLoad=True, minimum=1, deferGraphConstraint=True), 'input_exposures': Input(name='calexp', storageClass='ExposureF', doc='Per-detector images to obtain image, mask, and variance from (embedded summary stats and other components are ignored).', multiple=True, dimensions=('instrument', 'detector', 'visit'), isCalibration=False, deferLoad=True, minimum=1, deferGraphConstraint=True), 'input_summary_catalog': Input(name='visitSummary', storageClass='ExposureCatalog', doc='Visit summary table to load and modify.', multiple=False, dimensions=('instrument', 'visit'), isCalibration=False, deferLoad=False, minimum=1, deferGraphConstraint=False), 'input_summary_schema': InitInput(name='visitSummary_schema', storageClass='ExposureCatalog', doc='Schema for input_summary_catalog.', multiple=False), 'output_summary_catalog': Output(name='finalVisitSummary', storageClass='ExposureCatalog', doc='Visit-level catalog summarizing all image characterizations and calibrations.', multiple=False, dimensions=('instrument', 'visit'), isCalibration=False), 'output_summary_schema': InitOutput(name='finalVisitSummary_schema', storageClass='ExposureCatalog', doc='Schema of the output visit summary catalog.', multiple=False), 'photo_calib_overrides_global': Input(name='{photoCalibName}PhotoCalibCatalog', storageClass='ExposureCatalog', doc='Global visit-level catalog of updated photometric calibration objects to use.', multiple=False, dimensions=('instrument', 'visit'), isCalibration=False, deferLoad=False, minimum=1, deferGraphConstraint=True), 'photo_calib_overrides_tract': Input(name='{photoCalibName}PhotoCalibCatalog', storageClass='ExposureCatalog', doc='Per-Tract visit-level catalog of updated photometric calibration objects to use.', multiple=True, dimensions=('instrument', 'visit', 'tract'), isCalibration=False, deferLoad=False, minimum=1, deferGraphConstraint=True), 'psf_overrides': Input(name='finalized_psf_ap_corr_catalog', storageClass='ExposureCatalog', doc='Visit-level catalog of updated PSFs to use.', multiple=False, dimensions=('instrument', 'visit'), isCalibration=False, deferLoad=False, minimum=1, deferGraphConstraint=True), 'psf_star_catalog': Input(name='finalized_src_table', storageClass='DataFrame', doc='Per-visit table of PSF reserved- and used-star measurements.', multiple=False, dimensions=('instrument', 'visit'), isCalibration=False, deferLoad=False, minimum=1, deferGraphConstraint=True), 'sky_map': Input(name='skyMap', storageClass='SkyMap', doc='Description of tract/patch geometry.', multiple=False, dimensions=('skymap',), isCalibration=False, deferLoad=False, minimum=1, deferGraphConstraint=False), 
'wcs_overrides_global': Input(name='{skyWcsName}SkyWcsCatalog', storageClass='ExposureCatalog', doc='Global visit-level catalog of updated astrometric calibration objects to use.', multiple=False, dimensions=('instrument', 'visit'), isCalibration=False, deferLoad=False, minimum=1, deferGraphConstraint=True), 'wcs_overrides_tract': Input(name='{skyWcsName}SkyWcsCatalog', storageClass='ExposureCatalog', doc='Per-tract visit-level catalog of updated astrometric calibration objects to use.', multiple=True, dimensions=('instrument', 'visit', 'tract'), isCalibration=False, deferLoad=False, minimum=1, deferGraphConstraint=True)}

Mapping holding all connection attributes.

This is a read-only view that is automatically updated when connection attributes are added, removed, or replaced in __init__. It is also updated after __init__ completes to reflect changes in inputs, prerequisiteInputs, outputs, initInputs, and initOutputs.
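
As a rough usage sketch (construction details are simplified; assumes a standard LSST pipelines environment), the mapping can be inspected after building the connections from a task config:

    # Sketch: inspect the declared connections of UpdateVisitSummaryTask.
    from lsst.drp.tasks.update_visit_summary import (
        UpdateVisitSummaryConnections,
        UpdateVisitSummaryTask,
    )

    config = UpdateVisitSummaryTask.ConfigClass()
    connections = UpdateVisitSummaryConnections(config=config)
    for attr_name, connection in connections.allConnections.items():
        # connection.name is the (possibly template-expanded) dataset type name.
        print(attr_name, connection.name, connection.storageClass)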

ap_corr_overrides

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

background_originals

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

background_overrides

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

defaultTemplates = {'photoCalibName': 'fgcm', 'skyWcsName': 'gbdesAstrometricFit'}
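
These templates are substituted into dataset type names such as '{photoCalibName}PhotoCalibCatalog' and '{skyWcsName}SkyWcsCatalog' (see allConnections above) and can be overridden through the task's connections configuration. A hedged sketch of such an override (the alternate names are purely illustrative):

    # Sketch of a config override for this task; "jointcal" and the alternate
    # output name are illustrative values, not recommendations.
    config.connections.photoCalibName = "jointcal"  # expands {photoCalibName}PhotoCalibCatalog
    config.connections.skyWcsName = "jointcal"      # expands {skyWcsName}SkyWcsCatalog
    # Individual dataset type names can also be overridden directly:
    config.connections.output_summary_catalog = "finalVisitSummary_custom"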
dimensions: set[str] = {'instrument', 'visit'}

Set of dimension names that define the unit of work for this task.

Required and implied dependencies will automatically be expanded later and need not be provided.

This may be replaced or modified in __init__ to change the dimensions of the task. After __init__ it will be a frozenset and may not be replaced.

initInputs: set[str] = frozenset({'input_summary_schema'})

Set with the names of all InitInput connection attributes.

See inputs for additional information.

initOutputs: set[str] = frozenset({'output_summary_schema'})

Set with the names of all InitOutput connection attributes.

See inputs for additional information.
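
These init connections pass the visit-summary catalog schema between tasks at initialization time. Their declarations look roughly like the following sketch (names and docs mirror the entries in allConnections above; the snippet is illustrative, not the actual source, and would normally live in the body of the connections class):

    # Illustrative InitInput/InitOutput declarations for schema passing.
    import lsst.pipe.base.connectionTypes as cT

    input_summary_schema = cT.InitInput(
        name="visitSummary_schema",
        storageClass="ExposureCatalog",
        doc="Schema for input_summary_catalog.",
    )
    output_summary_schema = cT.InitOutput(
        name="finalVisitSummary_schema",
        storageClass="ExposureCatalog",
        doc="Schema of the output visit summary catalog.",
    )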

input_exposures

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.
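
Because input_exposures is declared with deferLoad=True and multiple=True (see allConnections above), it arrives in PipelineTask.runQuantum as a list of DeferredDatasetHandle objects. A rough, illustrative sketch of how such a connection is typically consumed (this is not the actual UpdateVisitSummaryTask implementation):

    # Sketch only; not the real runQuantum of this task.
    def runQuantum(self, butlerQC, inputRefs, outputRefs):
        inputs = butlerQC.get(inputRefs)
        # deferLoad=True, multiple=True: a list of DeferredDatasetHandle.
        for handle in inputs["input_exposures"]:
            detector = handle.dataId["detector"]
            exposure = handle.get()  # actual I/O happens here, on demand
            # ... per-detector processing would go here ...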

input_summary_catalog

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

input_summary_schema
inputs: set[str] = frozenset({'ap_corr_overrides', 'background_originals', 'background_overrides', 'input_exposures', 'input_summary_catalog', 'photo_calib_overrides_global', 'photo_calib_overrides_tract', 'psf_overrides', 'psf_star_catalog', 'sky_map', 'wcs_overrides_global', 'wcs_overrides_tract'})

Set with the names of all connectionTypes.Input connection attributes.

This is updated automatically as class attributes are added, removed, or replaced in __init__. Removing entries from this set will cause those connections to be removed after __init__ completes, but this is supported only for backwards compatibility; new code should instead just delete the connection attribute directly. After __init__ this will be a frozenset and may not be replaced.
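
Connections classes typically prune optional connections in __init__ by deleting the corresponding attribute, as described above. A hedged sketch of that pattern (the config field name and its values are assumptions, not the actual UpdateVisitSummaryConfig options):

    # Illustrative only; "wcs_provider" is an assumed config field name.
    def __init__(self, *, config=None):
        super().__init__(config=config)
        if config.wcs_provider != "tract":
            # Deleting the attribute removes the connection (and updates the
            # inputs/allConnections views) once __init__ completes.
            del self.wcs_overrides_tract
        if config.wcs_provider != "global":
            del self.wcs_overrides_global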

output_summary_catalog
output_summary_schema
outputs: set[str] = frozenset({'output_summary_catalog'})

Set with the names of all Output connection attributes.

See inputs for additional information.

photo_calib_overrides_global

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

photo_calib_overrides_tract

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

prerequisiteInputs: set[str] = frozenset({})

Set with the names of all PrerequisiteInput connection attributes.

See inputs for additional information.

psf_overrides

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

psf_star_catalog

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

sky_map

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.
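
The sky_map connection supplies the tract/patch geometry description (see its entry in allConnections above). For reference, a small illustrative sketch of how a loaded SkyMap object is typically queried (coordinates are arbitrary):

    # Sketch: querying tract geometry from a loaded SkyMap (values arbitrary).
    import lsst.geom

    point = lsst.geom.SpherePoint(150.0, 2.0, lsst.geom.degrees)
    tract_info = sky_map.findTract(point)  # sky_map: the loaded SkyMap object
    tract_wcs = tract_info.getWcs()        # WCS of that tract's coordinate system
    print(tract_info.getId(), tract_wcs.getPixelScale())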

wcs_overrides_global

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

wcs_overrides_tract

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the given name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the dataset at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context) and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.

Methods Documentation

adjustQuantum(inputs: dict[str, tuple[lsst.pipe.base.connectionTypes.BaseInput, collections.abc.Collection[lsst.daf.butler.core.datasets.ref.DatasetRef]]], outputs: dict[str, tuple[lsst.pipe.base.connectionTypes.Output, collections.abc.Collection[lsst.daf.butler.core.datasets.ref.DatasetRef]]], label: str, data_id: DataCoordinate) → tuple[collections.abc.Mapping[str, tuple[lsst.pipe.base.connectionTypes.BaseInput, collections.abc.Collection[lsst.daf.butler.core.datasets.ref.DatasetRef]]], collections.abc.Mapping[str, tuple[lsst.pipe.base.connectionTypes.Output, collections.abc.Collection[lsst.daf.butler.core.datasets.ref.DatasetRef]]]]

Override to make adjustments to lsst.daf.butler.DatasetRef objects in the lsst.daf.butler.core.Quantum during the graph generation stage of the activator.

Parameters:
inputs : dict

Dictionary whose keys are input (regular or prerequisite) connection names and whose values are tuples of the connection instance and a collection of associated DatasetRef objects. The exact type of the nested collections is unspecified; it can be assumed to be multi-pass iterable and to support len and in, but it should not be mutated in place. In contrast, the outer dictionaries are guaranteed to be temporary copies that are true dict instances, and hence may be modified and even returned; this is especially useful for delegating to super (see notes below).

outputs : Mapping

Mapping of output datasets, with the same structure as inputs.

label : str

Label for this task in the pipeline (should be used in all diagnostic messages).

data_id : lsst.daf.butler.DataCoordinate

Data ID for this quantum in the pipeline (should be used in all diagnostic messages).

Returns:
adjusted_inputs : Mapping

Mapping of the same form as inputs with updated containers of input DatasetRef objects. Connections that are not changed should not be returned at all. Datasets may only be removed, not added. Nested collections may be of any multi-pass iterable type, and the order of iteration will set the order of iteration within PipelineTask.runQuantum.

adjusted_outputs : Mapping

Mapping of updated output datasets, with the same structure and interpretation as adjusted_inputs.

Raises:
ScalarError

Raised if any Input or PrerequisiteInput connection has multiple set to False but is associated with multiple datasets.

NoWorkFound

Raised to indicate that this quantum should not be run; not enough datasets were found for a regular Input connection, and the quantum should be pruned or skipped.

FileNotFoundError

Raised to cause QuantumGraph generation to fail (with the message included in this exception); not enough datasets were found for a PrerequisiteInput connection.

Notes

The base class implementation performs important checks. It always returns an empty mapping (i.e. it makes no adjustments). It should always be called via super by custom implementations, ideally at the end of the custom implementation and with already-adjusted mappings when any datasets are actually dropped, e.g.:

def adjustQuantum(self, inputs, outputs, label, data_id):
    # Filter out some dataset refs for one connection.
    connection, old_refs = inputs["my_input"]
    new_refs = [ref for ref in old_refs if ...]
    adjusted_inputs = {"my_input": (connection, new_refs)}
    # Update the original inputs so we can pass them to super.
    inputs.update(adjusted_inputs)
    # Can ignore outputs from super because they are guaranteed
    # to be empty.
    super().adjustQuantum(inputs, outputs, label, data_id)
    # Return only the connections we modified.
    return adjusted_inputs, {}

Removing outputs here is guaranteed to affect what is actually passed to PipelineTask.runQuantum, but its effect on the larger graph may be deferred to execution, depending on the context in which adjustQuantum is being run: if one quantum removes an output that is needed by a second quantum as input, the second quantum may not be adjusted (and hence pruned or skipped) until that output is actually found to be missing at execution time.

Tasks that desire zip-iteration consistency between any combinations of connections that have the same data ID should generally implement adjustQuantum to achieve this, even if they could also run that logic during execution; this allows the system to see outputs that will not be produced because the corresponding input is missing as early as possible.

buildDatasetRefs(quantum: Quantum) → tuple[lsst.pipe.base.connections.InputQuantizedConnection, lsst.pipe.base.connections.OutputQuantizedConnection]

Builds QuantizedConnections corresponding to input Quantum

Parameters:
quantum : lsst.daf.butler.Quantum

Quantum object which defines the inputs and outputs for a given unit of processing.

Returns:
retVal : tuple of (InputQuantizedConnection, OutputQuantizedConnection)

Namespaces mapping attribute names (identifiers of connections) to butler references defined in the input lsst.daf.butler.Quantum.
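
As a rough sketch of how an execution harness might use this method (variable names are illustrative, and quantum is assumed to be an lsst.daf.butler.Quantum already populated with dataset references):

    # Sketch only; execution middleware normally does this for you.
    input_refs, output_refs = connections.buildDatasetRefs(quantum)

    # Each namespace maps connection attribute names to butler references;
    # multiple=True connections appear as lists of DatasetRefs.
    summary_ref = input_refs.input_summary_catalog  # single DatasetRef
    exposure_refs = input_refs.input_exposures      # list of DatasetRefs
    final_ref = output_refs.output_summary_catalog  # single DatasetRef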