DatasetRecordStorage¶
- class lsst.daf.butler.registry.interfaces.DatasetRecordStorage(datasetType: DatasetType)¶
Bases: ABC

An interface that manages the records associated with a particular DatasetType.

- Parameters:
  - datasetType : DatasetType
    Dataset type whose records this object manages.
Methods Summary
associate(collection, datasets)
  Associate one or more datasets with a collection.
certify(collection, datasets, timespan)
  Associate one or more datasets with a calibration collection and a validity range within it.
decertify(collection, timespan, *[, dataIds])
  Remove or adjust datasets to clear a validity range within a calibration collection.
delete(datasets)
  Fully delete the given datasets from the registry.
disassociate(collection, datasets)
  Remove one or more datasets from a collection.
find(collection, dataId[, timespan])
  Search a collection for a dataset with the given data ID.
import_(run, datasets[, idGenerationMode, ...])
  Insert one or more dataset entries into the database.
insert(run, dataIds[, idGenerationMode])
  Insert one or more dataset entries into the database.
select(*collections[, dataId, id, run, ...])
  Return a SQLAlchemy object that represents a SELECT query for this DatasetType.

Methods Documentation
- abstract associate(collection: CollectionRecord, datasets: Iterable[DatasetRef]) None ¶
Associate one or more datasets with a collection.
- Parameters:
  - collection : CollectionRecord
    The record object describing the collection. collection.type must be TAGGED.
  - datasets : Iterable [DatasetRef]
    Datasets to be associated. All datasets must be resolved and have the same DatasetType as self.
- Raises:
  - AmbiguousDatasetError
    Raised if any of the given DatasetRef instances is unresolved.
Notes
Associating a dataset into a collection that already contains a different dataset with the same DatasetType and data ID will remove the existing dataset from that collection.
Associating the same dataset into a collection multiple times is a no-op, but is still not permitted on read-only databases.
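A minimal usage sketch (not part of the original documentation): storage, tagged_record, and refs are hypothetical names for a concrete DatasetRecordStorage, a TAGGED CollectionRecord, and resolved DatasetRef objects that would normally come from the registry's internal managers:

    # Tag already-resolved datasets into a TAGGED collection.
    def tag_datasets(storage, tagged_record, refs):
        # Raises AmbiguousDatasetError if any ref is unresolved; re-tagging the
        # same dataset is a no-op, and a different dataset with the same data ID
        # already in the collection is replaced.
        storage.associate(tagged_record, list(refs))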
- abstract certify(collection: CollectionRecord, datasets: Iterable[DatasetRef], timespan: Timespan) None ¶
Associate one or more datasets with a calibration collection and a validity range within it.
- Parameters:
  - collection : CollectionRecord
    The record object describing the collection. collection.type must be CALIBRATION.
  - datasets : Iterable [DatasetRef]
    Datasets to be associated. All datasets must be resolved and have the same DatasetType as self.
  - timespan : Timespan
    The validity range for these datasets within the collection.
- Raises:
  - AmbiguousDatasetError
    Raised if any of the given DatasetRef instances is unresolved.
  - ConflictingDefinitionError
    Raised if the collection already contains a different dataset with the same DatasetType and data ID and an overlapping validity range.
  - CollectionTypeError
    Raised if collection.type is not CollectionType.CALIBRATION or if self.datasetType.isCalibration() is False.
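A hedged sketch of the calling pattern, with storage, calib_record, refs, and validity as hypothetical placeholders supplied by the caller:

    # Certify datasets into a CALIBRATION collection for a given validity range.
    def certify_calibrations(storage, calib_record, refs, validity):
        # calib_record.type must be CollectionType.CALIBRATION and
        # storage.datasetType.isCalibration() must be True, otherwise
        # CollectionTypeError is raised; an overlapping validity range for the
        # same data ID raises ConflictingDefinitionError.
        storage.certify(calib_record, list(refs), validity)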
- abstract decertify(collection: CollectionRecord, timespan: Timespan, *, dataIds: Optional[Iterable[DataCoordinate]] = None) None ¶
Remove or adjust datasets to clear a validity range within a calibration collection.
- Parameters:
  - collection : CollectionRecord
    The record object describing the collection. collection.type must be CALIBRATION.
  - timespan : Timespan
    The validity range to remove datasets from within the collection. Datasets that overlap this range but are not contained by it will have their validity ranges adjusted to not overlap it, which may split a single dataset validity range into two.
  - dataIds : Iterable [DataCoordinate], optional
    Data IDs that should be decertified within the given validity range. If None, all data IDs for self.datasetType will be decertified.
- Raises:
  - CollectionTypeError
    Raised if collection.type is not CollectionType.CALIBRATION.
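A hedged sketch, with storage, calib_record, and timespan as hypothetical placeholders:

    # Clear a validity range within a CALIBRATION collection.
    def clear_validity_range(storage, calib_record, timespan, data_ids=None):
        # With data_ids=None every data ID of storage.datasetType is decertified;
        # datasets that only partially overlap the timespan have their validity
        # ranges trimmed (possibly split in two) rather than removed outright.
        storage.decertify(calib_record, timespan, dataIds=data_ids)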
- abstract delete(datasets: Iterable[DatasetRef]) None ¶
Fully delete the given datasets from the registry.
- Parameters:
  - datasets : Iterable [DatasetRef]
    Datasets to be deleted. All datasets must be resolved and have the same DatasetType as self.
- Raises:
  - AmbiguousDatasetError
    Raised if any of the given DatasetRef instances is unresolved.
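A hedged sketch, with storage and refs as hypothetical placeholders:

    # Remove dataset records entirely, from every collection and the registry itself.
    def purge_datasets(storage, refs):
        # All refs must be resolved; unresolved refs raise AmbiguousDatasetError.
        storage.delete(list(refs))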
- abstract disassociate(collection: CollectionRecord, datasets: Iterable[DatasetRef]) None ¶
Remove one or more datasets from a collection.
- Parameters:
  - collection : CollectionRecord
    The record object describing the collection. collection.type must be TAGGED.
  - datasets : Iterable [DatasetRef]
    Datasets to be disassociated. All datasets must be resolved and have the same DatasetType as self.
- Raises:
  - AmbiguousDatasetError
    Raised if any of the given DatasetRef instances is unresolved.
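A hedged sketch mirroring the associate example above, with hypothetical names:

    # Remove datasets from a TAGGED collection without deleting their records.
    def untag_datasets(storage, tagged_record, refs):
        storage.disassociate(tagged_record, list(refs))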
- abstract find(collection: CollectionRecord, dataId: DataCoordinate, timespan: Optional[Timespan] = None) Optional[DatasetRef] ¶
Search a collection for a dataset with the given data ID.
- Parameters:
  - collection : CollectionRecord
    The record object describing the collection to search for the dataset. May have any CollectionType.
  - dataId : DataCoordinate
    Complete (but not necessarily expanded) data ID to search with, with dataId.graph == self.datasetType.dimensions.
  - timespan : Timespan, optional
    A timespan that the validity range of the dataset must overlap. Required if collection.type is CollectionType.CALIBRATION, and ignored otherwise.
- Returns:
  - ref : DatasetRef
    A resolved DatasetRef (without components populated), or None if no matching dataset was found.
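A hedged sketch, with storage, collection_record, data_id, and validity as hypothetical placeholders:

    # Look up a single dataset by data ID in a collection.
    def lookup_dataset(storage, collection_record, data_id, validity=None):
        # A timespan is required when collection_record.type is CALIBRATION so
        # the validity range can be matched; it is ignored for other types.
        ref = storage.find(collection_record, data_id, timespan=validity)
        if ref is None:
            raise LookupError(f"no {storage.datasetType.name} dataset for {data_id}")
        return ref  # a resolved DatasetRef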
- abstract import_(run: RunRecord, datasets: Iterable[DatasetRef], idGenerationMode: DatasetIdGenEnum = DatasetIdGenEnum.UNIQUE, reuseIds: bool = False) Iterator[DatasetRef] ¶
Insert one or more dataset entries into the database.
- Parameters:
  - run : RunRecord
    The record object describing the RUN collection these datasets will be associated with.
  - datasets : Iterable of DatasetRef
    Datasets to be inserted. Each dataset may specify an id attribute, which will be used for the inserted record. All dataset IDs must have the same type (int or uuid.UUID); if the ID type does not match the type supported by this class, the IDs are ignored and new IDs are generated by the backend.
  - idGenerationMode : DatasetIdGenEnum
    With UNIQUE, each new dataset is inserted with a new unique ID. With a non-UNIQUE mode, the ID is computed from some combination of the dataset type, data ID, and run collection name; if the same ID is already in the database, no new record is inserted.
  - reuseIds : bool, optional
    If True, forces re-use of imported dataset IDs for integer IDs, which are normally generated by auto-increment; an exception is raised if imported IDs clash with existing ones. This option has no effect on globally unique IDs, which are always re-used (or generated if integer IDs are being imported).
- Returns:
  - datasets : Iterable [DatasetRef]
    References to the inserted or existing datasets.
Notes
The
datasetType
andrun
attributes of datasets are supposed to be identical across all datasets but this is not checked and it should be enforced by higher level registry code. This method does not need to use those attributes from datasets, onlydataId
andid
are relevant.
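A hedged sketch of a typical import, with storage, run_record, and refs as hypothetical placeholders (e.g. refs carrying IDs exported from another repository):

    # Import datasets that already carry IDs, reusing integer IDs when possible.
    def import_with_ids(storage, run_record, refs):
        # IDs on the incoming refs are reused when their type matches what the
        # backend supports; otherwise new IDs are generated. reuseIds only
        # affects integer (auto-increment) IDs.
        return list(storage.import_(run_record, refs, reuseIds=True))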
- abstract insert(run: RunRecord, dataIds: Iterable[DataCoordinate], idGenerationMode: DatasetIdGenEnum = DatasetIdGenEnum.UNIQUE) Iterator[DatasetRef] ¶
Insert one or more dataset entries into the database.
- Parameters:
  - run : RunRecord
    The record object describing the RUN collection these datasets will be associated with.
  - dataIds : Iterable [DataCoordinate]
    Expanded data IDs (DataCoordinate instances) for the datasets to be added. The dimensions of all data IDs must be the same as self.datasetType.dimensions.
  - idGenerationMode : DatasetIdGenEnum
    With UNIQUE, each new dataset is inserted with a new unique ID. With a non-UNIQUE mode, the ID is computed from some combination of the dataset type, data ID, and run collection name; if the same ID is already in the database, no new record is inserted.
- Returns:
  - datasets : Iterable [DatasetRef]
    References to the inserted datasets.
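A hedged sketch, with storage, run_record, and data_ids as hypothetical placeholders:

    # Insert brand-new dataset records for expanded data IDs into a RUN collection.
    def register_datasets(storage, run_record, data_ids):
        # Each data ID must be expanded and have the same dimensions as
        # storage.datasetType.dimensions; the returned refs are resolved.
        return list(storage.insert(run_record, data_ids))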
- abstract select(*collections: CollectionRecord, dataId: SimpleQuery.Select.Or[DataCoordinate] = SimpleQuery.Select, id: SimpleQuery.Select.Or[Optional[DatasetId]] = SimpleQuery.Select, run: SimpleQuery.Select.Or[None] = SimpleQuery.Select, timespan: SimpleQuery.Select.Or[Optional[Timespan]] = SimpleQuery.Select, ingestDate: SimpleQuery.Select.Or[Optional[Timespan]] = None) sqlalchemy.sql.Selectable ¶
Return a SQLAlchemy object that represents a SELECT query for this DatasetType.
All arguments can either be a value that constrains the query or the SimpleQuery.Select tag object to indicate that the value should be returned in the columns in the SELECT clause. The default is SimpleQuery.Select.
- Parameters:
  - *collections : CollectionRecord
    The record object(s) describing the collection(s) to query. May not be of type CollectionType.CHAINED. If multiple collections are passed, the query will search all of them in an unspecified order, and all collections must have the same type.
  - dataId : DataCoordinate or Select
    The data ID to restrict results with, or an instruction to return the data ID via columns with names self.datasetType.dimensions.names.
  - id : DatasetId, Select, or None
    The primary key value for the dataset, an instruction to return it via an id column, or None to ignore it entirely.
  - run : None or Select
    If Select (default), include the dataset's run key value (as a column labeled with the return value of CollectionManager.getRunForeignKeyName). If None, do not include this column (to constrain the run, pass a RunRecord as the collection argument instead).
  - timespan : None, Select, or Timespan
    If Select (default), include the validity range timespan in the result columns. If a Timespan instance, constrain the results to those whose validity ranges overlap the given timespan. Ignored for collection types other than CALIBRATION, but None should be passed explicitly if a mix of CALIBRATION and other types are passed in.
  - ingestDate : None, Select, or Timespan
    If Select, include the ingest timestamp in the result columns. If a Timespan instance, constrain the results to those whose ingest times are inside the given timespan and also include the timestamp in the result columns. If None (default), there is no constraint and the timestamp is not returned.
- Returns:
  - query : sqlalchemy.sql.Selectable
    A SQLAlchemy object representing a simple SELECT query.
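A hedged sketch, with storage and run_records as hypothetical placeholders; executing the returned query requires the registry's internal database connection, which is outside the scope of this page:

    # Build a SELECT over datasets in one or more RUN collections; data ID,
    # dataset id, and run key columns are returned by default.
    def build_dataset_query(storage, *run_records):
        # Pass timespan=None explicitly when mixing CALIBRATION with other
        # collection types; here the collections are RUN, so it is simply unused.
        return storage.select(*run_records, timespan=None)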