InMemoryRepo
- class lsst.pipe.base.tests.mocks.InMemoryRepo(*args: str | ResourcePath, registry_config: RegistryConfig | None = None, input_run: str = 'input_run', input_chain: str = 'input_chain', output_run: str = 'output_run', use_import_collections_as_input: bool | str | Iterable[str] = True, data_root: str | ParseResult | ResourcePath | Path | None = 'resource://lsst.daf.butler/tests/registry_data')
Bases: object
A test helper that simulates a butler repository for task execution without any disk I/O.
- Parameters:
  - *args : str or lsst.resources.ResourcePath
    Butler YAML import files to load into the test repository.
  - registry_config : lsst.daf.butler.RegistryConfig, optional
    Registry configuration for the repository.
  - input_run : str, optional
    Name of a RUN collection that will be used as an input to quantum graph generation. Input datasets created by the helper are added to this collection.
  - input_chain : str, optional
    Name of a CHAINED collection that will be the direct input to quantum graph generation. This always includes input_run.
  - output_run : str, optional
    Name of a RUN collection for execution outputs.
  - use_import_collections_as_input : bool, str, or Iterable[str], optional
    Additional collections from YAML import files to include in input_chain, or True to include all such collections (in chain-flattened lexicographical order).
  - data_root : convertible to lsst.resources.ResourcePath, optional
    Root directory to join to each element in *args. Defaults to the lsst.daf.butler/tests/registry_data package resource.
Notes
This helper maintains a pipeline_graph.PipelineGraph and a no-datastore butler backed by an in-memory SQLite database for use in quantum graph generation.
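For orientation, a minimal construction sketch (the YAML file name is illustrative and assumes an import file available under the default data_root; the collection names shown are the defaults):

```python
from lsst.pipe.base.tests.mocks import InMemoryRepo

# "base.yaml" is an assumed example file under the default data_root
# (resource://lsst.daf.butler/tests/registry_data).
repo = InMemoryRepo("base.yaml", input_run="input_run", output_run="output_run")
```

Methods Summary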
- add_task([label, task_class, config, ...])
  Add a task to the helper's pipeline graph.
- insert_datasets(dataset_type[, register])
  Insert input datasets into the test repository.
- make_limited_butler()
  Make a test limited butler for execution.
- make_quantum_graph_builder(*[, ...])
  Make a quantum graph builder from the pipeline task and internal data repository.
- make_single_quantum_executor()
  Make a single-quantum executor backed by a new limited butler.
Methods Documentation
- add_task(label: str | None = None, *, task_class: type[DynamicTestPipelineTask] = DynamicTestPipelineTask, config: DynamicTestPipelineTaskConfig | None = None, dimensions: Iterable[str] | None = None, inputs: Mapping[str, DynamicConnectionConfig] | None = None, outputs: Mapping[str, DynamicConnectionConfig] | None = None, prerequisite_inputs: Mapping[str, DynamicConnectionConfig] | None = None, init_inputs: Mapping[str, DynamicConnectionConfig] | None = None, init_outputs: Mapping[str, DynamicConnectionConfig] | None = None) -> None
Add a task to the helper’s pipeline graph.
- Parameters:
  - label : str, optional
    Label for the task. If not provided, the task label will be task_auto{self.last_auto_task_index}, with that counter incremented.
  - task_class : type, optional
    Subclass of DynamicTestPipelineTask to use.
  - config : DynamicTestPipelineTaskConfig, optional
    Task configuration to use. Note that the dimensions are always overridden by the dimensions argument, and inputs and outputs are updated by those arguments unless they are explicitly set to empty dictionaries.
  - dimensions : Iterable[str], optional
    Dimensions of the task and of any automatically-added input or output connection.
  - inputs : Mapping[str, DynamicConnectionConfig], optional
    Input connections to add. If not provided, a single connection is added with the same dimensions as the task and dataset type name dataset_auto{self.last_auto_dataset_type_index}.
  - outputs : Mapping[str, DynamicConnectionConfig], optional
    Output connections to add. If not provided, a single connection is added with the same dimensions as the task and dataset type name dataset_auto{self.last_auto_dataset_type_index}, with that counter incremented first.
  - prerequisite_inputs : Mapping[str, DynamicConnectionConfig], optional
    Prerequisite input connections to add. Defaults to an empty mapping.
  - init_inputs : Mapping[str, DynamicConnectionConfig], optional
    Init input connections to add. Defaults to an empty mapping.
  - init_outputs : Mapping[str, DynamicConnectionConfig], optional
    Init output connections to add. Defaults to an empty mapping.
Notes
The defaults for this method’s arguments are designed to allow it to be called in succession to create a sequence of “one-to-one” tasks in which each consumes the output of the last.
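A sketch of that pattern, continuing the repo sketch above (the labels and dataset type names in the comments assume the automatic-naming scheme described above; the exact starting indices are implementation details):

```python
# With all-default arguments, each call adds one auto-named input and one
# auto-named output connection, chained so that each new task consumes
# the previous task's output.
repo.add_task()  # e.g. task_auto1: dataset_auto0 -> dataset_auto1
repo.add_task()  # e.g. task_auto2: dataset_auto1 -> dataset_auto2
```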
- insert_datasets(dataset_type: DatasetType | str, register: bool = True, *args: Any, **kwargs: Any) -> list[DatasetRef]
Insert input datasets into the test repository.
- Parameters:
  - dataset_type : DatasetType or str
    Dataset type or name. If a name, it must be included in the pipeline graph.
  - register : bool, optional
    Whether to register the dataset type. If False, the dataset type must already be registered.
  - *args : object
    Forwarded to query_data_ids.
  - **kwargs : object
    Forwarded to query_data_ids.
- Returns:
  - refs : list[lsst.daf.butler.DatasetRef]
    References to the inserted datasets.
Notes
For dataset types with dimensions that are queryable, this queries for all data IDs in the repository (forwarding *args and **kwargs, e.g. for where strings). For skypix dimensions, this queries for both patches and visit-detector regions (forwarding *args and **kwargs to both) and uses all overlapping sky pixels. Dataset types with a mix of skypix and queryable dimensions are not supported.
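For example, a hedged sketch continuing the repo sketch above (the dataset type name and where expression are illustrative):

```python
# Insert inputs for a dataset type already present in the pipeline graph,
# forwarding an illustrative `where` constraint to query_data_ids.
refs = repo.insert_datasets("dataset_auto0", where="instrument = 'Cam1'")
```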
- make_limited_butler() -> InMemoryLimitedButler
Make a test limited butler for execution.
- Returns:
  - limited_butler : InMemoryLimitedButler
    A limited butler that can be used for task execution.
Notes
This queries the database-only butler used for quantum graph generation for all datasets in the input_chain collection, and populates the limited butler with those that have a mock storage class. Other datasets are ignored, so they will appear as though they were present during quantum graph generation but absent during execution.
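A minimal sketch, continuing the repo sketch above:

```python
# The returned butler is preloaded in memory with the mock-storage-class
# datasets found in the input_chain collection; no disk I/O occurs.
limited_butler = repo.make_limited_butler()
```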
- make_quantum_graph_builder(*, insert_mocked_inputs: bool = True, register_output_dataset_types: bool = True) -> AllDimensionsQuantumGraphBuilder
Make a quantum graph builder from the pipeline task and internal data repository.
- Parameters:
  - insert_mocked_inputs : bool, optional
    Whether to automatically insert datasets for all overall inputs to the pipeline graph whose dataset types have not already been registered. If set to False, inputs must be provided by imported YAML files or explicit calls to insert_datasets, which provides more fine-grained control over the data IDs of the datasets.
  - register_output_dataset_types : bool, optional
    If True, register all output dataset types.
- Returns:
  - builder : all_dimensions_quantum_graph_builder.AllDimensionsQuantumGraphBuilder
    Quantum graph builder. Note that attach_datastore_records=False must be passed to build, since the helper's butler does not have a datastore.
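A sketch of the required call pattern, continuing the repo sketch above:

```python
builder = repo.make_quantum_graph_builder()
# attach_datastore_records=False is required here because the helper's
# butler has no datastore to provide records.
quantum_graph = builder.build(attach_datastore_records=False)
```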
- make_single_quantum_executor() -> tuple[lsst.pipe.base.single_quantum_executor.SingleQuantumExecutor, lsst.pipe.base.tests.in_memory_limited_butler.InMemoryLimitedButler]
Make a single-quantum executor backed by a new limited butler.
- Returns:
  - executor : single_quantum_executor.SingleQuantumExecutor
    An executor for a single quantum.
  - butler : InMemoryLimitedButler
    The butler that the executor will write to.
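A short sketch, continuing the repo sketch above (how the executor is invoked on individual quanta is omitted to avoid assuming its exact signature):

```python
executor, butler = repo.make_single_quantum_executor()
# Run quanta from a quantum graph with `executor`; its outputs accumulate
# in `butler`, which stays entirely in memory.
```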