HealSparsePropertyMapConnections

class lsst.pipe.tasks.healSparseMapping.HealSparsePropertyMapConnections(*, config=None)

Bases: PipelineTaskConnections

Attributes Summary

allConnections

coadd_exposures

Class used for declaring PipelineTask input connections

dcr_ddec_map_max

dcr_ddec_map_mean

dcr_ddec_map_min

dcr_ddec_map_sum

dcr_ddec_map_weighted_mean

dcr_dra_map_max

dcr_dra_map_mean

dcr_dra_map_min

dcr_dra_map_sum

dcr_dra_map_weighted_mean

dcr_e1_map_max

dcr_e1_map_mean

dcr_e1_map_min

dcr_e1_map_sum

dcr_e1_map_weighted_mean

dcr_e2_map_max

dcr_e2_map_mean

dcr_e2_map_min

dcr_e2_map_sum

dcr_e2_map_weighted_mean

defaultTemplates

dimensions

epoch_map_max

epoch_map_mean

epoch_map_min

epoch_map_sum

epoch_map_weighted_mean

exposure_time_map_max

exposure_time_map_mean

exposure_time_map_min

exposure_time_map_sum

exposure_time_map_weighted_mean

initInputs

initOutputs

input_maps

Class used for declaring PipelineTask input connections

inputs

n_exposure_map_max

n_exposure_map_mean

n_exposure_map_min

n_exposure_map_sum

n_exposure_map_weighted_mean

name

outputs

prerequisiteInputs

psf_e1_map_max

psf_e1_map_mean

psf_e1_map_min

psf_e1_map_sum

psf_e1_map_weighted_mean

psf_e2_map_max

psf_e2_map_mean

psf_e2_map_min

psf_e2_map_sum

psf_e2_map_weighted_mean

psf_maglim_map_max

psf_maglim_map_mean

psf_maglim_map_min

psf_maglim_map_sum

psf_maglim_map_weighted_mean

psf_size_map_max

psf_size_map_mean

psf_size_map_min

psf_size_map_sum

psf_size_map_weighted_mean

sky_background_map_max

sky_background_map_mean

sky_background_map_min

sky_background_map_sum

sky_background_map_weighted_mean

sky_map

Class used for declaring PipelineTask input connections

sky_noise_map_max

sky_noise_map_mean

sky_noise_map_min

sky_noise_map_sum

sky_noise_map_weighted_mean

visit_summaries

Class used for declaring PipelineTask input connections

Methods Summary

adjustQuantum(inputs, outputs, label, data_id)

Override to make adjustments to lsst.daf.butler.DatasetRef objects in the lsst.daf.butler.core.Quantum during the graph generation stage of the activator.

buildDatasetRefs(quantum)

Builds QuantizedConnections corresponding to input Quantum

Attributes Documentation

allConnections: Dict[str, BaseConnection] = {'coadd_exposures': Input(name='{coaddName}Coadd', storageClass='ExposureF', doc='Coadded exposures associated with input_maps', multiple=True, dimensions=('tract', 'patch', 'skymap', 'band'), isCalibration=False, deferLoad=True, minimum=1, deferGraphConstraint=False), 'dcr_ddec_map_max': Output(name='{coaddName}Coadd_dcr_ddec_map_max', storageClass='HealSparseMap', doc='Maximum-value map of dcr_ddec', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_ddec_map_mean': Output(name='{coaddName}Coadd_dcr_ddec_map_mean', storageClass='HealSparseMap', doc='Mean-value map of dcr_ddec', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_ddec_map_min': Output(name='{coaddName}Coadd_dcr_ddec_map_min', storageClass='HealSparseMap', doc='Minimum-value map of dcr_ddec', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_ddec_map_sum': Output(name='{coaddName}Coadd_dcr_ddec_map_sum', storageClass='HealSparseMap', doc='Sum-value map of dcr_ddec', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_ddec_map_weighted_mean': Output(name='{coaddName}Coadd_dcr_ddec_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of dcr_ddec', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_dra_map_max': Output(name='{coaddName}Coadd_dcr_dra_map_max', storageClass='HealSparseMap', doc='Maximum-value map of dcr_dra', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_dra_map_mean': Output(name='{coaddName}Coadd_dcr_dra_map_mean', storageClass='HealSparseMap', doc='Mean-value map of dcr_dra', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_dra_map_min': Output(name='{coaddName}Coadd_dcr_dra_map_min', storageClass='HealSparseMap', doc='Minimum-value map of dcr_dra', multiple=False, dimensions=('tract', 
'skymap', 'band'), isCalibration=False), 'dcr_dra_map_sum': Output(name='{coaddName}Coadd_dcr_dra_map_sum', storageClass='HealSparseMap', doc='Sum-value map of dcr_dra', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_dra_map_weighted_mean': Output(name='{coaddName}Coadd_dcr_dra_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of dcr_dra', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e1_map_max': Output(name='{coaddName}Coadd_dcr_e1_map_max', storageClass='HealSparseMap', doc='Maximum-value map of dcr_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e1_map_mean': Output(name='{coaddName}Coadd_dcr_e1_map_mean', storageClass='HealSparseMap', doc='Mean-value map of dcr_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e1_map_min': Output(name='{coaddName}Coadd_dcr_e1_map_min', storageClass='HealSparseMap', doc='Minimum-value map of dcr_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e1_map_sum': Output(name='{coaddName}Coadd_dcr_e1_map_sum', storageClass='HealSparseMap', doc='Sum-value map of dcr_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e1_map_weighted_mean': Output(name='{coaddName}Coadd_dcr_e1_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of dcr_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e2_map_max': Output(name='{coaddName}Coadd_dcr_e2_map_max', storageClass='HealSparseMap', doc='Maximum-value map of dcr_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e2_map_mean': Output(name='{coaddName}Coadd_dcr_e2_map_mean', storageClass='HealSparseMap', doc='Mean-value map of dcr_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e2_map_min': 
Output(name='{coaddName}Coadd_dcr_e2_map_min', storageClass='HealSparseMap', doc='Minimum-value map of dcr_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e2_map_sum': Output(name='{coaddName}Coadd_dcr_e2_map_sum', storageClass='HealSparseMap', doc='Sum-value map of dcr_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'dcr_e2_map_weighted_mean': Output(name='{coaddName}Coadd_dcr_e2_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of dcr_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'epoch_map_max': Output(name='{coaddName}Coadd_epoch_map_max', storageClass='HealSparseMap', doc='Maximum-value map of epoch', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'epoch_map_mean': Output(name='{coaddName}Coadd_epoch_map_mean', storageClass='HealSparseMap', doc='Mean-value map of epoch', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'epoch_map_min': Output(name='{coaddName}Coadd_epoch_map_min', storageClass='HealSparseMap', doc='Minimum-value map of epoch', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'epoch_map_sum': Output(name='{coaddName}Coadd_epoch_map_sum', storageClass='HealSparseMap', doc='Sum-value map of epoch', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'epoch_map_weighted_mean': Output(name='{coaddName}Coadd_epoch_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of epoch', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'exposure_time_map_max': Output(name='{coaddName}Coadd_exposure_time_map_max', storageClass='HealSparseMap', doc='Maximum-value map of exposure_time', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'exposure_time_map_mean': Output(name='{coaddName}Coadd_exposure_time_map_mean', 
storageClass='HealSparseMap', doc='Mean-value map of exposure_time', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'exposure_time_map_min': Output(name='{coaddName}Coadd_exposure_time_map_min', storageClass='HealSparseMap', doc='Minimum-value map of exposure_time', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'exposure_time_map_sum': Output(name='{coaddName}Coadd_exposure_time_map_sum', storageClass='HealSparseMap', doc='Sum-value map of exposure_time', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'exposure_time_map_weighted_mean': Output(name='{coaddName}Coadd_exposure_time_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of exposure_time', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'input_maps': Input(name='{coaddName}Coadd_inputMap', storageClass='HealSparseMap', doc='Healsparse bit-wise coadd input maps', multiple=True, dimensions=('tract', 'patch', 'skymap', 'band'), isCalibration=False, deferLoad=True, minimum=1, deferGraphConstraint=False), 'n_exposure_map_max': Output(name='{coaddName}Coadd_n_exposure_map_max', storageClass='HealSparseMap', doc='Maximum-value map of n_exposure', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'n_exposure_map_mean': Output(name='{coaddName}Coadd_n_exposure_map_mean', storageClass='HealSparseMap', doc='Mean-value map of n_exposure', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'n_exposure_map_min': Output(name='{coaddName}Coadd_n_exposure_map_min', storageClass='HealSparseMap', doc='Minimum-value map of n_exposure', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'n_exposure_map_sum': Output(name='{coaddName}Coadd_n_exposure_map_sum', storageClass='HealSparseMap', doc='Sum-value map of n_exposure', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 
'n_exposure_map_weighted_mean': Output(name='{coaddName}Coadd_n_exposure_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of n_exposure', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e1_map_max': Output(name='{coaddName}Coadd_psf_e1_map_max', storageClass='HealSparseMap', doc='Maximum-value map of psf_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e1_map_mean': Output(name='{coaddName}Coadd_psf_e1_map_mean', storageClass='HealSparseMap', doc='Mean-value map of psf_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e1_map_min': Output(name='{coaddName}Coadd_psf_e1_map_min', storageClass='HealSparseMap', doc='Minimum-value map of psf_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e1_map_sum': Output(name='{coaddName}Coadd_psf_e1_map_sum', storageClass='HealSparseMap', doc='Sum-value map of psf_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e1_map_weighted_mean': Output(name='{coaddName}Coadd_psf_e1_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of psf_e1', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e2_map_max': Output(name='{coaddName}Coadd_psf_e2_map_max', storageClass='HealSparseMap', doc='Maximum-value map of psf_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e2_map_mean': Output(name='{coaddName}Coadd_psf_e2_map_mean', storageClass='HealSparseMap', doc='Mean-value map of psf_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e2_map_min': Output(name='{coaddName}Coadd_psf_e2_map_min', storageClass='HealSparseMap', doc='Minimum-value map of psf_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e2_map_sum': Output(name='{coaddName}Coadd_psf_e2_map_sum', 
storageClass='HealSparseMap', doc='Sum-value map of psf_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_e2_map_weighted_mean': Output(name='{coaddName}Coadd_psf_e2_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of psf_e2', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_maglim_map_max': Output(name='{coaddName}Coadd_psf_maglim_map_max', storageClass='HealSparseMap', doc='Maximum-value map of psf_maglim', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_maglim_map_mean': Output(name='{coaddName}Coadd_psf_maglim_map_mean', storageClass='HealSparseMap', doc='Mean-value map of psf_maglim', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_maglim_map_min': Output(name='{coaddName}Coadd_psf_maglim_map_min', storageClass='HealSparseMap', doc='Minimum-value map of psf_maglim', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_maglim_map_sum': Output(name='{coaddName}Coadd_psf_maglim_map_sum', storageClass='HealSparseMap', doc='Sum-value map of psf_maglim', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_maglim_map_weighted_mean': Output(name='{coaddName}Coadd_psf_maglim_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of psf_maglim', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_size_map_max': Output(name='{coaddName}Coadd_psf_size_map_max', storageClass='HealSparseMap', doc='Maximum-value map of psf_size', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_size_map_mean': Output(name='{coaddName}Coadd_psf_size_map_mean', storageClass='HealSparseMap', doc='Mean-value map of psf_size', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_size_map_min': Output(name='{coaddName}Coadd_psf_size_map_min', 
storageClass='HealSparseMap', doc='Minimum-value map of psf_size', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_size_map_sum': Output(name='{coaddName}Coadd_psf_size_map_sum', storageClass='HealSparseMap', doc='Sum-value map of psf_size', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'psf_size_map_weighted_mean': Output(name='{coaddName}Coadd_psf_size_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of psf_size', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_background_map_max': Output(name='{coaddName}Coadd_sky_background_map_max', storageClass='HealSparseMap', doc='Maximum-value map of sky_background', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_background_map_mean': Output(name='{coaddName}Coadd_sky_background_map_mean', storageClass='HealSparseMap', doc='Mean-value map of sky_background', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_background_map_min': Output(name='{coaddName}Coadd_sky_background_map_min', storageClass='HealSparseMap', doc='Minimum-value map of sky_background', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_background_map_sum': Output(name='{coaddName}Coadd_sky_background_map_sum', storageClass='HealSparseMap', doc='Sum-value map of sky_background', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_background_map_weighted_mean': Output(name='{coaddName}Coadd_sky_background_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of sky_background', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_map': Input(name='skyMap', storageClass='SkyMap', doc='Input definition of geometry/bbox and projection/wcs for coadded exposures', multiple=False, dimensions=('skymap',), isCalibration=False, deferLoad=False, 
minimum=1, deferGraphConstraint=False), 'sky_noise_map_max': Output(name='{coaddName}Coadd_sky_noise_map_max', storageClass='HealSparseMap', doc='Maximum-value map of sky_noise', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_noise_map_mean': Output(name='{coaddName}Coadd_sky_noise_map_mean', storageClass='HealSparseMap', doc='Mean-value map of sky_noise', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_noise_map_min': Output(name='{coaddName}Coadd_sky_noise_map_min', storageClass='HealSparseMap', doc='Minimum-value map of sky_noise', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_noise_map_sum': Output(name='{coaddName}Coadd_sky_noise_map_sum', storageClass='HealSparseMap', doc='Sum-value map of sky_noise', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'sky_noise_map_weighted_mean': Output(name='{coaddName}Coadd_sky_noise_map_weighted_mean', storageClass='HealSparseMap', doc='Weighted mean-value map of sky_noise', multiple=False, dimensions=('tract', 'skymap', 'band'), isCalibration=False), 'visit_summaries': Input(name='finalVisitSummary', storageClass='ExposureCatalog', doc='Visit summary tables with aggregated statistics', multiple=True, dimensions=('instrument', 'visit'), isCalibration=False, deferLoad=True, minimum=1, deferGraphConstraint=False)}
coadd_exposures

Class used for declaring PipelineTask input connections

Parameters:
name : str

The default name used to identify the dataset type.

storageClass : str

The storage class used when (un)persisting the dataset type.

multiple : bool

Indicates whether this connection should expect to contain multiple objects of the given dataset type. Tasks with more than one connection with multiple=True and the same dimensions may want to implement PipelineTaskConnections.adjustQuantum to ensure those datasets are consistent (i.e. zip-iterable) in PipelineTask.runQuantum and to notify the execution system as early as possible of outputs that will not be produced because the corresponding input is missing.

dimensions : iterable of str

The lsst.daf.butler.Registry dimensions used to identify the dataset type with the specified name.

deferLoad : bool

Indicates that this dataset type will be loaded as a lsst.daf.butler.DeferredDatasetHandle. PipelineTasks can use this handle to load the object at a later time.

minimum : int

Minimum number of datasets required for this connection, per quantum. This is checked in the base implementation of PipelineTaskConnections.adjustQuantum, which raises NoWorkFound if the minimum is not met for Input connections (causing the quantum to be pruned, skipped, or never created, depending on the context), and FileNotFoundError for PrerequisiteInput connections (causing QuantumGraph generation to fail). PipelineTask implementations may provide custom adjustQuantum implementations for more fine-grained or configuration-driven constraints, as long as they are compatible with this minimum.

deferGraphConstraint : bool, optional

If True, do not include this dataset type’s existence in the initial query that starts the QuantumGraph generation process. This can be used to make QuantumGraph generation faster by avoiding redundant datasets, and in certain cases it can (along with careful attention to which tasks are included in the same QuantumGraph) be used to work around the QuantumGraph generation algorithm’s inflexible handling of spatial overlaps. This option has no effect when the connection is not an overall input of the pipeline (or subset thereof) for which a graph is being created, and it never affects the ordering of quanta.

Raises:
TypeError

Raised if minimum is greater than one but multiple=False.

NotImplementedError

Raised if minimum is zero for a regular Input connection; this is not currently supported by our QuantumGraph generation algorithm.
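The role of these constructor parameters can be sketched with a schematic stand-in class (a plain dataclass mimicking lsst.pipe.base.connectionTypes.Input, not the real API), re-declaring the coadd_exposures connection with the values shown in allConnections below:

```python
from dataclasses import dataclass
from typing import Tuple

# Schematic stand-in for lsst.pipe.base.connectionTypes.Input;
# field names mirror the parameters documented above.
@dataclass(frozen=True)
class Input:
    name: str
    storageClass: str
    doc: str
    dimensions: Tuple[str, ...]
    multiple: bool = False
    deferLoad: bool = False
    minimum: int = 1
    deferGraphConstraint: bool = False

    def __post_init__(self):
        # Mirrors the documented TypeError: a scalar (multiple=False)
        # connection cannot require more than one dataset.
        if self.minimum > 1 and not self.multiple:
            raise TypeError("minimum > 1 requires multiple=True")

# The coadd_exposures connection, re-declared with the stand-in class
# using the values from allConnections.
coadd_exposures = Input(
    name="{coaddName}Coadd",
    storageClass="ExposureF",
    doc="Coadded exposures associated with input_maps",
    dimensions=("tract", "patch", "skymap", "band"),
    multiple=True,
    deferLoad=True,
)
```

The real connection classes carry additional machinery (dataset type resolution, template expansion); this sketch only illustrates which parameters exist and the minimum/multiple consistency check.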

dcr_ddec_map_max
dcr_ddec_map_mean
dcr_ddec_map_min
dcr_ddec_map_sum
dcr_ddec_map_weighted_mean
dcr_dra_map_max
dcr_dra_map_mean
dcr_dra_map_min
dcr_dra_map_sum
dcr_dra_map_weighted_mean
dcr_e1_map_max
dcr_e1_map_mean
dcr_e1_map_min
dcr_e1_map_sum
dcr_e1_map_weighted_mean
dcr_e2_map_max
dcr_e2_map_mean
dcr_e2_map_min
dcr_e2_map_sum
dcr_e2_map_weighted_mean
defaultTemplates = {'calexpType': '', 'coaddName': 'deep'}
dimensions: ClassVar[Set[str]] = {'band', 'skymap', 'tract'}
epoch_map_max
epoch_map_mean
epoch_map_min
epoch_map_sum
epoch_map_weighted_mean
exposure_time_map_max
exposure_time_map_mean
exposure_time_map_min
exposure_time_map_sum
exposure_time_map_weighted_mean
initInputs: Set[str] = frozenset({})
initOutputs: Set[str] = frozenset({})
input_maps

Input connection for the per-patch HealSparse bit-wise coadd input maps (dataset type '{coaddName}Coadd_inputMap'). The constructor parameters and raised exceptions are identical to those documented for coadd_exposures above.

inputs: Set[str] = frozenset({'coadd_exposures', 'input_maps', 'sky_map', 'visit_summaries'})
n_exposure_map_max
n_exposure_map_mean
n_exposure_map_min
n_exposure_map_sum
n_exposure_map_weighted_mean
name = 'epoch'
outputs: Set[str] = frozenset({'dcr_ddec_map_max', 'dcr_ddec_map_mean', 'dcr_ddec_map_min', 'dcr_ddec_map_sum', 'dcr_ddec_map_weighted_mean', 'dcr_dra_map_max', 'dcr_dra_map_mean', 'dcr_dra_map_min', 'dcr_dra_map_sum', 'dcr_dra_map_weighted_mean', 'dcr_e1_map_max', 'dcr_e1_map_mean', 'dcr_e1_map_min', 'dcr_e1_map_sum', 'dcr_e1_map_weighted_mean', 'dcr_e2_map_max', 'dcr_e2_map_mean', 'dcr_e2_map_min', 'dcr_e2_map_sum', 'dcr_e2_map_weighted_mean', 'epoch_map_max', 'epoch_map_mean', 'epoch_map_min', 'epoch_map_sum', 'epoch_map_weighted_mean', 'exposure_time_map_max', 'exposure_time_map_mean', 'exposure_time_map_min', 'exposure_time_map_sum', 'exposure_time_map_weighted_mean', 'n_exposure_map_max', 'n_exposure_map_mean', 'n_exposure_map_min', 'n_exposure_map_sum', 'n_exposure_map_weighted_mean', 'psf_e1_map_max', 'psf_e1_map_mean', 'psf_e1_map_min', 'psf_e1_map_sum', 'psf_e1_map_weighted_mean', 'psf_e2_map_max', 'psf_e2_map_mean', 'psf_e2_map_min', 'psf_e2_map_sum', 'psf_e2_map_weighted_mean', 'psf_maglim_map_max', 'psf_maglim_map_mean', 'psf_maglim_map_min', 'psf_maglim_map_sum', 'psf_maglim_map_weighted_mean', 'psf_size_map_max', 'psf_size_map_mean', 'psf_size_map_min', 'psf_size_map_sum', 'psf_size_map_weighted_mean', 'sky_background_map_max', 'sky_background_map_mean', 'sky_background_map_min', 'sky_background_map_sum', 'sky_background_map_weighted_mean', 'sky_noise_map_max', 'sky_noise_map_mean', 'sky_noise_map_min', 'sky_noise_map_sum', 'sky_noise_map_weighted_mean'})
prerequisiteInputs: Set[str] = frozenset({})
psf_e1_map_max
psf_e1_map_mean
psf_e1_map_min
psf_e1_map_sum
psf_e1_map_weighted_mean
psf_e2_map_max
psf_e2_map_mean
psf_e2_map_min
psf_e2_map_sum
psf_e2_map_weighted_mean
psf_maglim_map_max
psf_maglim_map_mean
psf_maglim_map_min
psf_maglim_map_sum
psf_maglim_map_weighted_mean
psf_size_map_max
psf_size_map_mean
psf_size_map_min
psf_size_map_sum
psf_size_map_weighted_mean
sky_background_map_max
sky_background_map_mean
sky_background_map_min
sky_background_map_sum
sky_background_map_weighted_mean
sky_map

Input connection for the skymap defining the geometry/bbox and projection/wcs of the coadded exposures (dataset type 'skyMap'). The constructor parameters and raised exceptions are identical to those documented for coadd_exposures above.

sky_noise_map_max
sky_noise_map_mean
sky_noise_map_min
sky_noise_map_sum
sky_noise_map_weighted_mean
visit_summaries

Input connection for the visit summary tables with aggregated statistics (dataset type 'finalVisitSummary'). The constructor parameters and raised exceptions are identical to those documented for coadd_exposures above.
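The {coaddName} placeholders in the dataset type names above are filled from defaultTemplates ({'calexpType': '', 'coaddName': 'deep'}). The pipeline framework performs this substitution from configuration, but the naming scheme can be illustrated with ordinary string formatting:

```python
# Default template values, taken from defaultTemplates above.
templates = {"calexpType": "", "coaddName": "deep"}

# A few of the templated dataset type names from allConnections.
templated_names = [
    "{coaddName}Coadd_inputMap",
    "{coaddName}Coadd_psf_size_map_weighted_mean",
    "{coaddName}Coadd_sky_noise_map_min",
]

# With the default coaddName of 'deep', each name resolves to a
# concrete butler dataset type name.
resolved = [name.format(**templates) for name in templated_names]
# resolved == ['deepCoadd_inputMap',
#              'deepCoadd_psf_size_map_weighted_mean',
#              'deepCoadd_sky_noise_map_min']
```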

Methods Documentation

adjustQuantum(inputs: Dict[str, Tuple[BaseInput, Collection[DatasetRef]]], outputs: Dict[str, Tuple[Output, Collection[DatasetRef]]], label: str, data_id: DataCoordinate) → Tuple[Mapping[str, Tuple[BaseInput, Collection[DatasetRef]]], Mapping[str, Tuple[Output, Collection[DatasetRef]]]]

Override to make adjustments to lsst.daf.butler.DatasetRef objects in the lsst.daf.butler.core.Quantum during the graph generation stage of the activator.

Parameters:
inputsdict

Dictionary whose keys are an input (regular or prerequisite) connection name and whose values are a tuple of the connection instance and a collection of associated DatasetRef objects. The exact type of the nested collections is unspecified; it can be assumed to be multi-pass iterable and support len and in, but it should not be mutated in place. In contrast, the outer dictionaries are guaranteed to be temporary copies that are true dict instances, and hence may be modified and even returned; this is especially useful for delegating to super (see notes below).

outputsMapping

Mapping of output datasets, with the same structure as inputs.

labelstr

Label for this task in the pipeline (should be used in all diagnostic messages).

data_idlsst.daf.butler.DataCoordinate

Data ID for this quantum in the pipeline (should be used in all diagnostic messages).

Returns:
adjusted_inputsMapping

Mapping of the same form as inputs with updated containers of input DatasetRef objects. Connections that are not changed should not be returned at all. Datasets may only be removed, not added. Nested collections may be of any multi-pass iterable type, and the order of iteration will set the order of iteration within PipelineTask.runQuantum.

adjusted_outputsMapping

Mapping of updated output datasets, with the same structure and interpretation as adjusted_inputs.

Raises:
ScalarError

Raised if any Input or PrerequisiteInput connection has multiple set to False, but multiple datasets are present.

NoWorkFound

Raised to indicate that this quantum should not be run; not enough datasets were found for a regular Input connection, and the quantum should be pruned or skipped.

FileNotFoundError

Raised to cause QuantumGraph generation to fail (with the message included in this exception); not enough datasets were found for a PrerequisiteInput connection.

Notes

The base class implementation performs important checks. It always returns an empty mapping (i.e. it makes no adjustments). It should always be called via super by custom implementations, ideally at the end of the custom implementation with already-adjusted mappings when any datasets are actually dropped, e.g.:

def adjustQuantum(self, inputs, outputs, label, data_id):
    # Filter out some dataset refs for one connection.
    connection, old_refs = inputs["my_input"]
    new_refs = [ref for ref in old_refs if ...]
    adjusted_inputs = {"my_input": (connection, new_refs)}
    # Update the original inputs so we can pass them to super.
    inputs.update(adjusted_inputs)
    # Can ignore outputs from super because they are guaranteed
    # to be empty.
    super().adjustQuantum(inputs, outputs, label, data_id)
    # Return only the connections we modified.
    return adjusted_inputs, {}

Removing outputs here is guaranteed to affect what is actually passed to PipelineTask.runQuantum, but its effect on the larger graph may be deferred to execution, depending on the context in which adjustQuantum is being run: if one quantum removes an output that is needed by a second quantum as input, the second quantum may not be adjusted (and hence pruned or skipped) until that output is actually found to be missing at execution time.

Tasks that desire zip-iteration consistency between any combinations of connections that have the same data ID should generally implement adjustQuantum to achieve this, even if they could also run that logic during execution; this allows the system to see outputs that will not be produced because the corresponding input is missing as early as possible.
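For a task like this one, where coadd_exposures and input_maps are both multiple=True with the same dimensions, zip-iteration consistency means dropping refs whose data ID appears in one connection but not the other. The core filtering step can be sketched in isolation, using plain (connection, data_id) tuples as stand-ins for lsst.daf.butler.DatasetRef objects (not the real butler API):

```python
# Stand-in refs: (connection_name, data_id) tuples; real code would use
# lsst.daf.butler.DatasetRef objects and their .dataId attribute.
def zip_consistent(refs_a, refs_b):
    """Keep only refs whose data ID appears in both collections."""
    common = {r[1] for r in refs_a} & {r[1] for r in refs_b}
    filt = lambda refs: [r for r in refs if r[1] in common]
    return filt(refs_a), filt(refs_b)

coadds = [("coadd", ("tract", 0, "patch", 1)),
          ("coadd", ("tract", 0, "patch", 2))]
maps = [("map", ("tract", 0, "patch", 2)),
        ("map", ("tract", 0, "patch", 3))]

# Only patch 2 is present in both connections, so each list
# shrinks to the single matching ref.
new_coadds, new_maps = zip_consistent(coadds, maps)
```

In a real adjustQuantum override, the filtered collections would be placed back into the adjusted_inputs mapping as shown in the example above, so the pruning is visible to the execution system at graph-generation time.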

buildDatasetRefs(quantum: Quantum) → Tuple[InputQuantizedConnection, OutputQuantizedConnection]

Builds QuantizedConnections corresponding to input Quantum

Parameters:
quantumlsst.daf.butler.Quantum

Quantum object which defines the inputs and outputs for a given unit of processing

Returns:
retVal : tuple of (InputQuantizedConnection, OutputQuantizedConnection)

Namespaces mapping attribute names (identifiers of connections) to butler references defined in the input lsst.daf.butler.Quantum.
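The returned QuantizedConnection objects behave like namespaces whose attributes are the connection names, each holding that connection's refs for the quantum. A minimal stand-in (not the real lsst.pipe.base classes; the ref strings below are made-up placeholders) conveys the shape of the result:

```python
from types import SimpleNamespace

# Hypothetical quantum contents: connection name -> list of "refs"
# (plain strings stand in for lsst.daf.butler.DatasetRef objects).
quantum_inputs = {
    "sky_map": ["skyMap@skymap"],
    "input_maps": ["deepCoadd_inputMap@patch1",
                   "deepCoadd_inputMap@patch2"],
}
quantum_outputs = {
    "epoch_map_mean": ["deepCoadd_epoch_map_mean@tract0"],
}

def build_dataset_refs(inputs, outputs):
    """Group refs into attribute-accessible namespaces, one per side."""
    return SimpleNamespace(**inputs), SimpleNamespace(**outputs)

in_conn, out_conn = build_dataset_refs(quantum_inputs, quantum_outputs)
# in_conn.sky_map is the list of refs for the sky_map connection;
# in_conn.input_maps holds both per-patch input map refs.
```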