HighResolutionHipsQuantumGraphBuilder¶
- class lsst.pipe.tasks.hips.HighResolutionHipsQuantumGraphBuilder(pipeline_graph, butler, *, input_collections=None, output_run=None, constraint_order, constraint_ranges, where='')¶
Bases: QuantumGraphBuilder

A custom lsst.pipe.base.QuantumGraphBuilder for running HighResolutionHipsTask only.

This is a temporary workaround for incomplete butler query support for HEALPix dimensions.
- Parameters:
  - pipeline_graph : lsst.pipe.base.PipelineGraph
    Pipeline graph with exactly one task, which must be a configuration of HighResolutionHipsTask.
  - butler : lsst.daf.butler.Butler
    Client for the butler data repository. May be read-only.
  - input_collections : str or Iterable[str], optional
    Collection or collections to search for input datasets, in order. If not provided, butler.collections will be searched.
  - output_run : str, optional
    Name of the output collection. If not provided, butler.run will be used.
  - constraint_order : int
    HEALPix order used to constrain which quanta are generated, via constraint_indices. This should be a coarser grid (smaller order) than the order used for the task's quantum and output data IDs, and ideally something between the spatial scale of a patch and the data repository's "common skypix" system (usually htm7).
  - constraint_ranges : lsst.sphgeom.RangeSet
    RangeSet that describes constraint pixels (HEALPix NEST, with order constraint_order) to constrain generated quanta.
  - where : str, optional
    A boolean str expression of the form accepted by Registry.queryDatasets to constrain input datasets. This may contain a constraint on tracts, patches, or bands, but not HEALPix indices. Constraints on tracts and patches should usually be unnecessary, however, because existing coadds that overlap the given HEALPix indices will be selected without such a constraint, and providing one may reject some that should normally be included.
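A minimal construction sketch follows, assuming a butler repository at "repo"; the collection names, configuration, constraint order, and pixel range are placeholders, and the PipelineGraph.add_task and RangeSet.insert calls reflect the lsst.pipe.base and lsst.sphgeom APIs as commonly used rather than values taken from this page:

    from lsst.daf.butler import Butler
    from lsst.pipe.base.pipeline_graph import PipelineGraph
    from lsst.pipe.tasks.hips import (
        HighResolutionHipsQuantumGraphBuilder,
        HighResolutionHipsTask,
    )
    from lsst.sphgeom import RangeSet

    # Pipeline graph containing exactly one HighResolutionHipsTask configuration.
    config = HighResolutionHipsTask.ConfigClass()
    pipeline_graph = PipelineGraph()
    pipeline_graph.add_task("highResolutionHips", HighResolutionHipsTask, config)

    # Constrain quantum generation to a few coarse HEALPix NEST pixels at order 6.
    constraint_ranges = RangeSet()
    constraint_ranges.insert(1000, 1004)  # placeholder pixel range at the constraint order

    butler = Butler("repo")  # placeholder repository; may be read-only
    builder = HighResolutionHipsQuantumGraphBuilder(
        pipeline_graph,
        butler,
        input_collections=["HSC/runs/example"],  # placeholder input collection
        output_run="u/someone/hips",  # placeholder output RUN collection
        constraint_order=6,
        constraint_ranges=constraint_ranges,
        where="band = 'i'",  # optional; must not constrain HEALPix indices
    )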
Attributes Summary

universe
  Definitions of all data dimensions.

Methods Summary

build([metadata, attach_datastore_records])
  Build the quantum graph.

process_subgraph(subgraph)
  Build the rough structure for an independent subset of the QuantumGraph and query for relevant existing datasets.

Attributes Documentation
- universe¶
Definitions of all data dimensions.
Methods Documentation
- build(metadata: Mapping[str, Any] | None = None, attach_datastore_records: bool = True) → QuantumGraph¶
Build the quantum graph.
- Parameters:
  - metadata : Mapping, optional
    Flexible metadata to add to the quantum graph.
  - attach_datastore_records : bool, optional
    Whether to include datastore records in the graph. Required for lsst.daf.butler.QuantumBackedButler execution.
- Returns:
  - quantum_graph : QuantumGraph
    DAG describing processing to be performed.
Notes
External code is expected to construct a QuantumGraphBuilder and then call this method exactly once. See class documentation for details on what it does.
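For example, continuing the construction sketch above (the metadata content and output filename are placeholders; QuantumGraph.saveUri is assumed to be the usual persistence entry point):

    qgraph = builder.build(
        metadata={"comment": "HiPS coadd generation for a few order-6 pixels"},
        attach_datastore_records=True,  # keep records for QuantumBackedButler execution
    )
    print(f"Generated {len(qgraph)} quanta.")
    qgraph.saveUri("hips.qgraph")  # persist the graph for later execution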
- process_subgraph(subgraph)¶
Build the rough structure for an independent subset of the QuantumGraph and query for relevant existing datasets.

- Parameters:
  - subgraph : pipeline_graph.PipelineGraph
    Subset of the pipeline graph that should be processed by this call. This is always resolved and topologically sorted. It should not be modified.
- Returns:
  - skeleton : quantum_graph_skeleton.QuantumGraphSkeleton
    Class representing an initial quantum graph. See quantum_graph_skeleton.QuantumGraphSkeleton docs for details. After this is returned, the object may be modified in-place in unspecified ways.
Notes
In addition to returning a quantum_graph_skeleton.QuantumGraphSkeleton, this method should populate the existing_datasets structure by querying for all relevant datasets with non-empty data IDs (those with empty data IDs will already be present). In particular:

- inputs must always be populated with all overall-input datasets (but not prerequisites), by querying input_collections;
- outputs_for_skip must be populated with any intermediate or output datasets present in skip_existing_in (it can be ignored if skip_existing_in is empty);
- outputs_in_the_way must be populated with any intermediate or output datasets present in output_run, if output_run_exists (it can be ignored if output_run_exists is False). Note that the presence of such datasets is not automatically an error, even if clobber is False, as these may be quanta that will be skipped;
- inputs must be populated with all prerequisite-input datasets that were included in the skeleton, by querying input_collections (not all prerequisite inputs need to be included in the skeleton, but the base class can only use per-quantum queries to find them, and that can be slow when there are many quanta).
Dataset types should never be components and should always use the "common" storage class definition in pipeline_graph.DatasetTypeNode (which is the data repository definition when the dataset type is registered).
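A minimal sketch of how an override might populate the overall-input entries described above. The base-class attributes (butler, input_collections, existing_datasets), the PipelineGraph.iter_overall_inputs and QuantumGraphSkeleton.add_dataset_node calls, and the import paths are assumptions drawn from the description on this page rather than a verified implementation; quantum and edge construction is omitted:

    from lsst.pipe.base.quantum_graph_builder import QuantumGraphBuilder
    from lsst.pipe.base.quantum_graph_skeleton import QuantumGraphSkeleton


    class HypotheticalBuilder(QuantumGraphBuilder):
        def process_subgraph(self, subgraph):
            # Start an empty skeleton for the (single-task) subgraph.
            skeleton = QuantumGraphSkeleton(subgraph.tasks.keys())
            # Populate existing_datasets.inputs with all overall-input datasets
            # (first item in the Notes above) by querying input_collections.
            for dataset_type_node in subgraph.iter_overall_inputs():
                refs = self.butler.registry.queryDatasets(
                    dataset_type_node.name,
                    collections=self.input_collections,
                    findFirst=True,
                )
                for ref in refs.expanded():
                    key = skeleton.add_dataset_node(dataset_type_node.name, ref.dataId)
                    self.existing_datasets.inputs[key] = ref
            # Quantum nodes, output dataset nodes, and the edges connecting them
            # would be added here before returning the skeleton.
            return skeleton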