DatasetRecordStorage

class lsst.daf.butler.registry.interfaces.DatasetRecordStorage(datasetType: lsst.daf.butler.core.datasets.type.DatasetType)

Bases: abc.ABC

An interface that manages the records associated with a particular DatasetType.

Parameters:
datasetType : DatasetType

Dataset type whose records this object manages.

Methods Summary

associate(collection, datasets) Associate one or more datasets with a collection.
certify(collection, datasets, timespan) Associate one or more datasets with a calibration collection and a validity range within it.
decertify(collection, timespan, *, dataIds) Remove or adjust datasets to clear a validity range within a calibration collection.
delete(datasets) Fully delete the given datasets from the registry.
disassociate(collection, datasets) Remove one or more datasets from a collection.
find(collection, dataId, timespan) Search a collection for a dataset with the given data ID.
import_(run, datasets, idGenerationMode, …) Insert one or more dataset entries into the database.
insert(run, dataIds, idGenerationMode) Insert one or more dataset entries into the database.
select(*collections, dataId, id, run, …) Return a SQLAlchemy object that represents a SELECT query for this DatasetType.

Methods Documentation

associate(collection: CollectionRecord, datasets: Iterable[DatasetRef]) → None

Associate one or more datasets with a collection.

Parameters:
collection : CollectionRecord

The record object describing the collection. collection.type must be TAGGED.

datasets : Iterable [ DatasetRef ]

Datasets to be associated. All datasets must be resolved and have the same DatasetType as self.

Raises:
AmbiguousDatasetError

Raised if any of the given DatasetRef instances is unresolved.

Notes

Associating a dataset into a collection that already contains a different dataset with the same DatasetType and data ID will remove the existing dataset from that collection.

Associating the same dataset into a collection multiple times is a no-op, but is still not permitted on read-only databases.
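The replacement semantics described in the notes above can be sketched with a toy model, purely for illustration: here a TAGGED collection is reduced to a plain dict keyed by (dataset type, data ID), whereas the real implementation stores these associations in registry tables. The `associate` function and all names below are stand-ins, not the actual API.

```python
from typing import Dict, Tuple

# Toy stand-in: a TAGGED collection holds at most one dataset per
# (dataset type, data ID) pair.
TaggedCollection = Dict[Tuple[str, str], int]  # key -> dataset ID

def associate(collection: TaggedCollection, dataset_type: str,
              data_id: str, dataset_id: int) -> None:
    """Associate a dataset, replacing any existing dataset with the
    same dataset type and data ID, as the Notes section describes."""
    collection[(dataset_type, data_id)] = dataset_id

tagged: TaggedCollection = {}
associate(tagged, "flat", "detector=1", 101)
associate(tagged, "flat", "detector=1", 202)  # replaces dataset 101
```

Associating dataset 202 a second time would leave the dict unchanged, mirroring the no-op behavior noted above.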

certify(collection: CollectionRecord, datasets: Iterable[DatasetRef], timespan: Timespan) → None

Associate one or more datasets with a calibration collection and a validity range within it.

Parameters:
collection : CollectionRecord

The record object describing the collection. collection.type must be CALIBRATION.

datasets : Iterable [ DatasetRef ]

Datasets to be associated. All datasets must be resolved and have the same DatasetType as self.

timespan : Timespan

The validity range for these datasets within the collection.

Raises:
AmbiguousDatasetError

Raised if any of the given DatasetRef instances is unresolved.

ConflictingDefinitionError

Raised if the collection already contains a different dataset with the same DatasetType and data ID and an overlapping validity range.

TypeError

Raised if collection.type is not CollectionType.CALIBRATION or if self.datasetType.isCalibration() is False.
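The overlap check behind ConflictingDefinitionError can be illustrated with a minimal sketch. Timespans are modeled here as half-open (begin, end) tuples and a CALIBRATION collection as a dict; the real Timespan class and registry storage are considerably richer, and every name below is a hypothetical stand-in.

```python
from typing import Dict, List, Tuple

Timespan = Tuple[int, int]  # half-open [begin, end)

def overlaps(a: Timespan, b: Timespan) -> bool:
    # Two half-open intervals overlap iff each starts before the other ends.
    return a[0] < b[1] and b[0] < a[1]

def certify(collection: Dict[str, List[Tuple[Timespan, int]]],
            data_id: str, timespan: Timespan, dataset_id: int) -> None:
    """Add a dataset with a validity range, refusing a range that
    overlaps an existing one for the same data ID (cf.
    ConflictingDefinitionError above)."""
    for existing_span, _ in collection.get(data_id, []):
        if overlaps(existing_span, timespan):
            raise ValueError("conflicting validity ranges")
    collection.setdefault(data_id, []).append((timespan, dataset_id))
```

Non-overlapping ranges for the same data ID coexist; an overlapping one is rejected before anything is stored.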

decertify(collection: CollectionRecord, timespan: Timespan, *, dataIds: Optional[Iterable[DataCoordinate]] = None) → None

Remove or adjust datasets to clear a validity range within a calibration collection.

Parameters:
collection : CollectionRecord

The record object describing the collection. collection.type must be CALIBRATION.

timespan : Timespan

The validity range to remove datasets from within the collection. Datasets that overlap this range but are not contained by it will have their validity ranges adjusted to not overlap it, which may split a single dataset validity range into two.

dataIds : Iterable [ DataCoordinate ], optional

Data IDs that should be decertified within the given validity range. If None, all data IDs for self.datasetType will be decertified.

Raises:
TypeError

Raised if collection.type is not CollectionType.CALIBRATION.
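The range-adjustment behavior described for timespan above, including the case where one validity range is split into two, can be sketched for a single range. As before, (begin, end) tuples stand in for the real Timespan class; `decertify_one` is a hypothetical helper, not part of the actual API.

```python
from typing import List, Tuple

Timespan = Tuple[int, int]  # half-open [begin, end)

def decertify_one(span: Timespan, remove: Timespan) -> List[Timespan]:
    """Clear `remove` from one validity range: ranges containing it are
    split in two, partial overlaps are truncated, fully contained
    ranges are dropped, and disjoint ranges are left untouched."""
    begin, end = span
    r0, r1 = remove
    if end <= r0 or r1 <= begin:   # no overlap: keep unchanged
        return [span]
    pieces: List[Timespan] = []
    if begin < r0:                 # keep the part before the removed range
        pieces.append((begin, r0))
    if r1 < end:                   # keep the part after the removed range
        pieces.append((r1, end))
    return pieces
```

Clearing (3, 7) from a range (0, 10) that contains it yields the two pieces (0, 3) and (7, 10), which is the splitting case mentioned above.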

delete(datasets: Iterable[lsst.daf.butler.core.datasets.ref.DatasetRef]) → None

Fully delete the given datasets from the registry.

Parameters:
datasets : Iterable [ DatasetRef ]

Datasets to be deleted. All datasets must be resolved and have the same DatasetType as self.

Raises:
AmbiguousDatasetError

Raised if any of the given DatasetRef instances is unresolved.

disassociate(collection: CollectionRecord, datasets: Iterable[DatasetRef]) → None

Remove one or more datasets from a collection.

Parameters:
collection : CollectionRecord

The record object describing the collection. collection.type must be TAGGED.

datasets : Iterable [ DatasetRef ]

Datasets to be disassociated. All datasets must be resolved and have the same DatasetType as self.

Raises:
AmbiguousDatasetError

Raised if any of the given DatasetRef instances is unresolved.

find(collection: CollectionRecord, dataId: DataCoordinate, timespan: Optional[Timespan] = None) → Optional[DatasetRef]

Search a collection for a dataset with the given data ID.

Parameters:
collection : CollectionRecord

The record object describing the collection to search for the dataset. May have any CollectionType.

dataId : DataCoordinate

Complete (but not necessarily expanded) data ID to search with, with dataId.graph == self.datasetType.dimensions.

timespan : Timespan, optional

A timespan that the validity range of the dataset must overlap. Required if collection.type is CollectionType.CALIBRATION, and ignored otherwise.

Returns:
ref : DatasetRef or None

A resolved DatasetRef (without components populated), or None if no matching dataset was found.

import_(run: RunRecord, datasets: Iterable[DatasetRef], idGenerationMode: DatasetIdGenEnum = <DatasetIdGenEnum.UNIQUE: 0>, reuseIds: bool = False) → Iterator[DatasetRef]

Insert one or more dataset entries into the database.

Parameters:
run : RunRecord

The record object describing the RUN collection this dataset will be associated with.

datasets : Iterable of DatasetRef

Datasets to be inserted. Each DatasetRef may specify an id attribute, which will be used for the inserted dataset. All dataset IDs must have the same type (int or uuid.UUID); if the type of the dataset IDs does not match the type supported by this class, the IDs will be ignored and new IDs will be generated by the backend.

idGenerationMode : DatasetIdGenEnum

With UNIQUE, each new dataset is inserted with a new unique ID. With the non-UNIQUE modes, the ID is computed from some combination of the dataset type, data ID, and run collection name; if that ID is already in the database, no new record is inserted.

reuseIds : bool, optional

If True, force re-use of imported dataset IDs for integer IDs, which are normally generated as auto-incremented values; an exception is raised if imported IDs clash with existing ones. This option has no effect on globally-unique IDs, which are always re-used (or generated when integer IDs are being imported).

Returns:
datasets : Iterable [ DatasetRef ]

References to the inserted or existing datasets.

Notes

The datasetType and run attributes of datasets are expected to be identical across all datasets, but this is not checked here; it should be enforced by higher-level registry code. This method does not use those attributes; only dataId and id are relevant.
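The practical difference between the UNIQUE and deterministic ID generation modes can be illustrated with standard-library UUIDs. This is an assumption-laden sketch: the mode names echo the real DatasetIdGenEnum but are plain strings here, and the namespace and key format are invented for illustration, not what daf_butler actually uses.

```python
import uuid

def make_dataset_id(dataset_type: str, data_id: str, run: str,
                    mode: str = "UNIQUE") -> uuid.UUID:
    """Toy illustration of the two ID-generation strategies."""
    if mode == "UNIQUE":
        # Every call yields a brand-new random ID.
        return uuid.uuid4()
    # Deterministic: the same (dataset type, data ID, run) triple always
    # maps to the same ID, so re-importing the same dataset finds the
    # existing record instead of inserting a duplicate.
    name = f"{dataset_type}/{data_id}/{run}"
    return uuid.uuid5(uuid.NAMESPACE_URL, name)

a = make_dataset_id("calexp", "visit=42", "run1", mode="DATAID_TYPE_RUN")
b = make_dataset_id("calexp", "visit=42", "run1", mode="DATAID_TYPE_RUN")
```

Because a and b are equal, a second import_ call with the same inputs would map onto the existing database row rather than create a new one; in UNIQUE mode every call produces a fresh ID.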

insert(run: RunRecord, dataIds: Iterable[DataCoordinate], idGenerationMode: DatasetIdGenEnum = <DatasetIdGenEnum.UNIQUE: 0>) → Iterator[DatasetRef]

Insert one or more dataset entries into the database.

Parameters:
run : RunRecord

The record object describing the RUN collection this dataset will be associated with.

dataIds : Iterable [ DataCoordinate ]

Expanded data IDs (DataCoordinate instances) for the datasets to be added. The dimensions of all data IDs must be the same as self.datasetType.dimensions.

idGenerationMode : DatasetIdGenEnum

With UNIQUE, each new dataset is inserted with a new unique ID. With the non-UNIQUE modes, the ID is computed from some combination of the dataset type, data ID, and run collection name; if that ID is already in the database, no new record is inserted.

Returns:
datasets : Iterable [ DatasetRef ]

References to the inserted datasets.

select(*collections, dataId: SimpleQuery.Select.Or[DataCoordinate] = <class 'lsst.daf.butler.core.simpleQuery.SimpleQuery.Select'>, id: SimpleQuery.Select.Or[Optional[DatasetId]] = <class 'lsst.daf.butler.core.simpleQuery.SimpleQuery.Select'>, run: SimpleQuery.Select.Or[None] = <class 'lsst.daf.butler.core.simpleQuery.SimpleQuery.Select'>, timespan: SimpleQuery.Select.Or[Optional[Timespan]] = <class 'lsst.daf.butler.core.simpleQuery.SimpleQuery.Select'>, ingestDate: SimpleQuery.Select.Or[Optional[Timespan]] = None) → SimpleQuery

Return a SQLAlchemy object that represents a SELECT query for this DatasetType.

All arguments can either be a value that constrains the query or the SimpleQuery.Select tag object to indicate that the value should be returned in the columns in the SELECT clause. The default is SimpleQuery.Select.

Parameters:
*collections : CollectionRecord

The record object(s) describing the collection(s) to query. May not be of type CollectionType.CHAINED. If multiple collections are passed, the query will search all of them in an unspecified order, and all collections must have the same type.

dataId : DataCoordinate or Select

The data ID to restrict results with, or an instruction to return the data ID via columns with names self.datasetType.dimensions.names.

id : DatasetId, Select, or None

The primary key value for the dataset, an instruction to return it via an id column, or None to ignore it entirely.

run : None or Select

If Select (default), include the dataset's run key value (as a column labeled with the return value of CollectionManager.getRunForeignKeyName). If None, do not include this column (to constrain the run, pass a RunRecord as the collection argument instead).

timespan : None, Select, or Timespan

If Select (default), include the validity range timespan in the result columns. If a Timespan instance, constrain the results to those whose validity ranges overlap that given timespan. Ignored unless collection.type is CollectionType.CALIBRATION.

ingestDate : None, Select, or Timespan

If Select, include the ingest timestamp in the result columns. If a Timespan instance, constrain the results to those whose ingest times are inside the given timespan, and also include the timestamp in the result columns. If None (default), there is no constraint and the timestamp is not returned.

Returns:
query : SimpleQuery

A struct containing the SQLAlchemy object representing a simple SELECT query.
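The "value or Select tag" calling convention described above can be reduced to a toy query builder, purely to show the pattern: passing the sentinel class asks for a result column, while passing a concrete value adds a constraint. The `Select` class and `build_query` function below are illustrative stand-ins, not the real SimpleQuery API.

```python
from typing import Any, Dict, List

class Select:
    """Sentinel tag meaning 'return this field as a result column'."""

def build_query(**kwargs: Any) -> Dict[str, Any]:
    """Sort keyword arguments into SELECT columns and WHERE constraints,
    mimicking the convention used by DatasetRecordStorage.select."""
    columns: List[str] = []
    where: Dict[str, Any] = {}
    for field, value in kwargs.items():
        if value is Select:
            columns.append(field)   # include in the SELECT clause
        elif value is not None:
            where[field] = value    # constrain in the WHERE clause
    return {"columns": columns, "where": where}

# Ask for the data ID and run as columns, constrain on a specific
# dataset ID, and ignore the timespan entirely:
q = build_query(dataId=Select, run=Select, id=12345, timespan=None)
```

Identifying the sentinel by identity (`value is Select`) rather than equality is what lets an ordinary value, including falsy ones, still act as a constraint.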