MockRepo¶
- class lsst.pipe.base.tests.mocks.MockRepo(butler: Butler, input_run: str = 'input_run', input_chain: str = 'input_chain', input_children: Iterable[str] = ())¶
Bases: ABC
A test helper that populates a butler repository for task execution.
- Parameters:
- butler
lsst.daf.butler.Butler Butler to use for at least quantum graph building. Must be writeable.
- input_run
str, optional Name of a RUN collection that will be used as an input to quantum graph generation. Input datasets created by the helper are added to this collection.
- input_chain
str, optional Name of a CHAINED collection that will be the direct input to quantum graph generation. This always includes input_run.
- input_children
str or Iterable[str], optional Additional collections to include in input_chain.
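Because MockRepo derives from ABC, tests instantiate a concrete subclass that implements make_single_quantum_executor. A minimal sketch, assuming a writeable test butler; the subclass name and repository path are hypothetical:

    from lsst.daf.butler import Butler
    from lsst.pipe.base.tests.mocks import MockRepo

    class MyMockRepo(MockRepo):
        """Hypothetical concrete helper for illustration only."""

        def make_single_quantum_executor(self, qg):
            # Return a (SingleQuantumExecutor, LimitedButler) pair able to
            # execute the quanta in ``qg``; details depend on the test setup.
            raise NotImplementedError

    butler = Butler("/path/to/test/repo", writeable=True)
    repo = MyMockRepo(butler, input_run="input_run")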
Methods Summary
add_task([label, task_class, config, ...]) Add a task to the helper's pipeline graph.
insert_datasets(dataset_type[, register]) Insert input datasets into the test repository.
make_quantum_graph(*[, output, output_run, ...]) Make a quantum graph from the pipeline task and internal data repository.
make_quantum_graph_builder(*[, output_run, ...]) Make a quantum graph builder from the pipeline task and internal data repository.
make_single_quantum_executor(qg) Make a single-quantum executor.
Methods Documentation
- add_task(label: str | None = None, *, task_class: type[DynamicTestPipelineTask] = <class 'lsst.pipe.base.tests.mocks.DynamicTestPipelineTask'>, config: DynamicTestPipelineTaskConfig | None = None, dimensions: Iterable[str] | None = None, inputs: Mapping[str, DynamicConnectionConfig] | None = None, outputs: Mapping[str, DynamicConnectionConfig] | None = None, prerequisite_inputs: Mapping[str, DynamicConnectionConfig] | None = None, init_inputs: Mapping[str, DynamicConnectionConfig] | None = None, init_outputs: Mapping[str, DynamicConnectionConfig] | None = None) None¶
Add a task to the helper’s pipeline graph.
- Parameters:
- label
str, optional Label for the task. If not provided, the task name will be task_auto{self.last_auto_task_index}, with that variable incremented.
- task_class
type, optional Subclass of DynamicTestPipelineTask to use.
- config
DynamicTestPipelineTaskConfig, optional Task configuration to use. Note that the dimensions are always overridden by the dimensions argument, and inputs and outputs are updated by those arguments unless they are explicitly set to empty dictionaries.
- dimensions
Iterable[str], optional Dimensions of the task and any automatically-added input or output connection.
- inputs
Mapping[str, DynamicConnectionConfig], optional Input connections to add. If not provided, a single connection is added with the same dimensions as the task and dataset type name dataset_auto{self.last_auto_dataset_type_index}.
- outputs
Mapping[str, DynamicConnectionConfig], optional Output connections to add. If not provided, a single connection is added with the same dimensions as the task and dataset type name dataset_auto{self.last_auto_dataset_type_index}, with that variable incremented first.
- prerequisite_inputs
Mapping[str, DynamicConnectionConfig], optional Prerequisite input connections to add. Defaults to an empty mapping.
- init_inputs
Mapping[str, DynamicConnectionConfig], optional Init input connections to add. Defaults to an empty mapping.
- init_outputs
Mapping[str, DynamicConnectionConfig], optional Init output connections to add. Defaults to an empty mapping.
Notes
The defaults for this method’s arguments are designed to allow it to be called in succession to create a sequence of “one-to-one” tasks in which each consumes the output of the last.
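For example, a linear three-task pipeline can be built with three bare calls; the task and dataset type names in the comments follow the auto-naming scheme described above and are illustrative, not verified output:

    # Each call consumes the dataset type the previous call produced.
    repo.add_task()  # e.g. task_auto0: reads dataset_auto0, writes dataset_auto1
    repo.add_task()  # e.g. task_auto1: reads dataset_auto1, writes dataset_auto2
    repo.add_task()  # e.g. task_auto2: reads dataset_auto2, writes dataset_auto3

    # Connections can also be spelled out; the config field names here are
    # assumptions about DynamicConnectionConfig:
    from lsst.pipe.base.tests.mocks import DynamicConnectionConfig

    repo.add_task(
        "consumer",
        dimensions=["visit"],
        inputs={"in": DynamicConnectionConfig(dataset_type_name="dataset_auto3")},
    )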
- insert_datasets(dataset_type: DatasetType | str, register: bool = True, *args: Any, **kwargs: Any) list[lsst.daf.butler.DatasetRef]¶
Insert input datasets into the test repository.
- Parameters:
- dataset_type
DatasetType or str Dataset type or name. If a name, it must be included in the pipeline graph.
- register
bool, optional Whether to register the dataset type. If False, the dataset type must already be registered.
- *args
object Forwarded to query_data_ids.
- **kwargs
object Forwarded to query_data_ids.
- Returns:
- refs
list[lsst.daf.butler.DatasetRef] References to the inserted datasets.
Notes
For dataset types with dimensions that are queryable, this queries for all data IDs in the repository (forwarding *args and **kwargs for e.g. where strings). For skypix dimensions, this queries for both patches and visit-detector regions (forwarding *args and **kwargs to both) and uses all overlapping sky pixels. Dataset types with a mix of skypix and queryable dimensions are not supported.
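Continuing the sketch above, the inserted data IDs can be constrained by forwarding a where expression through to query_data_ids (the dataset type name and expression here are hypothetical):

    # Insert inputs only for two visits of one instrument; ``where`` is
    # forwarded to query_data_ids.
    refs = repo.insert_datasets(
        "dataset_auto0",
        where="instrument = 'HSC' AND visit IN (1, 2)",
    )
    # The helper adds the new datasets to its input RUN collection.
    assert all(ref.run == "input_run" for ref in refs)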
- make_quantum_graph(*, output: str | None = None, output_run: str = 'output_run', insert_mocked_inputs: bool = True, register_output_dataset_types: bool = True) PredictedQuantumGraph¶
Make a quantum graph from the pipeline task and internal data repository.
- Parameters:
- output
str or None, optional Name of the output chained collection to embed within the quantum graph. Note that this does not actually create this collection.
- output_run
str, optional Name of the RUN collection for execution outputs. Note that this does not actually create this collection.
- insert_mocked_inputs
bool, optional Whether to automatically insert datasets for all overall inputs to the pipeline graph whose dataset types have not already been registered. If set to False, inputs must be provided by imported YAML files or explicit calls to insert_datasets, which provides more fine-grained control over the data IDs of the datasets.
- register_output_dataset_types
bool, optional If True, register all output dataset types.
- Returns:
- qg
quantum_graph.PredictedQuantumGraph Quantum graph. Datastore records will not be attached, since the test helper does not actually have a datastore.
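Putting the pieces together, a typical test builds a small pipeline and asks the helper for its graph in one call (a sketch; the output_run name is arbitrary):

    repo.add_task()
    repo.add_task()
    # Overall inputs are mocked automatically and output dataset types are
    # registered, per the defaults above.
    qg = repo.make_quantum_graph(output_run="test_run")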
- make_quantum_graph_builder(*, output_run: str = 'output_run', insert_mocked_inputs: bool = True, register_output_dataset_types: bool = True) AllDimensionsQuantumGraphBuilder¶
Make a quantum graph builder from the pipeline task and internal data repository.
- Parameters:
- output_run
str, optional Name of the RUN collection for execution outputs. Note that this does not actually create this collection.
- insert_mocked_inputs
bool, optional Whether to automatically insert datasets for all overall inputs to the pipeline graph whose dataset types have not already been registered. If set to False, inputs must be provided by imported YAML files or explicit calls to insert_datasets, which provides more fine-grained control over the data IDs of the datasets.
- register_output_dataset_types
bool, optional If True, register all output dataset types.
- Returns:
- builder
all_dimensions_quantum_graph_builder.AllDimensionsQuantumGraphBuilder Quantum graph builder. Note that attach_datastore_records=False must be passed to build, since the helper's butler does not have a datastore.
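When a test needs more control over graph construction, it can work with the builder directly; per the note above, datastore records must not be requested:

    builder = repo.make_quantum_graph_builder(output_run="test_run")
    # The helper's butler has no datastore, so records cannot be attached.
    qg = builder.build(attach_datastore_records=False)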
- abstract make_single_quantum_executor(qg: PredictedQuantumGraph) tuple[lsst.pipe.base.single_quantum_executor.SingleQuantumExecutor, lsst.daf.butler.LimitedButler]¶
Make a single-quantum executor.
- Parameters:
- qg
quantum_graph.PredictedQuantumGraph Graph whose quanta the executor must be capable of executing.
- Returns:
- executor
single_quantum_executor.SingleQuantumExecutor An executor for a single quantum.
- butler
lsst.daf.butler.LimitedButler The butler that the executor will write to.
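A sketch of wiring this into a test; how the executor is invoked per quantum is an assumption about the SingleQuantumExecutor interface, not something this page specifies:

    qg = repo.make_quantum_graph()
    executor, limited_butler = repo.make_single_quantum_executor(qg)
    # Run quanta with ``executor`` (exact call signature assumed), then read
    # outputs back through ``limited_butler``, e.g. limited_butler.get(ref).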