ChainedDatastore¶
class lsst.daf.butler.datastores.chainedDatastore.ChainedDatastore(config: Union[Config, str], bridgeManager: DatastoreRegistryBridgeManager, butlerRoot: str = None)¶

Bases: lsst.daf.butler.Datastore

Chained Datastores to allow reads and writes from multiple datastores.
A ChainedDatastore is configured with multiple datastore configurations. A put() is always sent to each datastore. A get() operation is sent to each datastore in turn and the first datastore to return a valid dataset is used.

Parameters:
- config : DatastoreConfig or str
  Configuration. This configuration must include a datastores field as a sequence of datastore configurations. The order in this sequence indicates the order to use for read operations.
- bridgeManager : DatastoreRegistryBridgeManager
  Object that manages the interface between Registry and datastores.
- butlerRoot : str, optional
  New datastore root to use to override the configuration value. This root is sent to each child datastore.
Notes
ChainedDatastore never supports None or "move" as an ingest transfer mode. It supports "copy", "symlink", "relsymlink" and "hardlink" if and only if all its child datastores do.

Attributes Summary
containerKey
    Key to specify where child datastores are configured.
defaultConfigFile
    Path to configuration defaults.
isEphemeral
names
    Names associated with this datastore returned as a list.

Methods Summary
emptyTrash(ignore_errors)
    Remove all datasets from the trash.
exists(ref)
    Check if the dataset exists in one of the datastores.
export(refs, *, directory, transfer)
    Export datasets for transfer to another data repository.
fromConfig(config, bridgeManager, …)
    Create datastore from type specified in config file.
get(ref, parameters)
    Load an InMemoryDataset from the store.
getLookupKeys()
    Return all the lookup keys relevant to this datastore.
getURI(ref, predict)
    URI to the Dataset.
getURIs(ref, predict)
    Return URIs associated with dataset.
ingest(*datasets, transfer)
    Ingest one or more files into the datastore.
put(inMemoryDataset, ref)
    Write an InMemoryDataset with a given DatasetRef to each datastore.
remove(ref)
    Indicate to the datastore that a dataset can be removed.
setConfigRoot(root, config, full, overwrite)
    Set any filesystem-dependent config options for child Datastores to be appropriate for a new empty repository with the given root.
transaction()
    Context manager supporting Datastore transactions.
transfer(inputDatastore, ref)
    Retrieve a dataset from an input Datastore, and store the result in this Datastore.
trash(ref, ignore_errors)
    Indicate to the Datastore that a Dataset can be moved to the trash.
validateConfiguration(entities, logFailures)
    Validate some of the configuration for this datastore.
validateKey(lookupKey, entity)
    Validate a specific look up key with supplied entity.

Attributes Documentation
containerKey = 'datastores'¶

Key to specify where child datastores are configured.
defaultConfigFile = 'datastores/chainedDatastore.yaml'¶

Path to configuration defaults. Accessed within the configs resource or relative to a search path. Can be None if no defaults specified.
isEphemeral = False¶
names¶

Names associated with this datastore returned as a list.

Can be different from name for a chaining datastore.
Methods Documentation
emptyTrash(ignore_errors: bool = True) → None¶ Remove all datasets from the trash.
Parameters:
- ignore_errors : bool, optional
  Determine whether errors should be ignored.
Notes
Some Datastores may implement this method as a silent no-op to disable Dataset deletion through standard interfaces.
exists(ref: DatasetRef) → bool¶ Check if the dataset exists in one of the datastores.
Parameters:
- ref : DatasetRef
  Reference to the required dataset.

Returns:
- exists : bool
  True if the dataset exists in one of the child datastores.
export(refs: Iterable[DatasetRef], *, directory: Optional[str] = None, transfer: Optional[str] = None) → Iterable[FileDataset]¶ Export datasets for transfer to another data repository.
Parameters:
- refs : iterable of DatasetRef
  Dataset references to be exported.
- directory : str, optional
  Path to a directory that should contain files corresponding to output datasets. Ignored if transfer is None.
- transfer : str, optional
  Mode that should be used to move datasets out of the repository. Valid options are the same as those of the transfer argument to ingest, and datastores may similarly signal that a transfer mode is not supported by raising NotImplementedError.

Returns:
- datasets : iterable of FileDataset
  Structs containing information about the exported datasets, in the same order as refs.

Raises:
- NotImplementedError
  Raised if the given transfer mode is not supported.
static fromConfig(config: Config, bridgeManager: DatastoreRegistryBridgeManager, butlerRoot: Optional[Union[str, ButlerURI]] = None) → 'Datastore'¶

Create datastore from type specified in config file.
Parameters:
- config : Config
  Configuration instance.
- bridgeManager : DatastoreRegistryBridgeManager
  Object that manages the interface between Registry and datastores.
- butlerRoot : str, optional
  Butler root directory.
get(ref: DatasetRef, parameters: Optional[Mapping[str, Any]] = None) → Any¶ Load an InMemoryDataset from the store.
The dataset is returned from the first datastore that has the dataset.
Parameters:
- ref : DatasetRef
  Reference to the required Dataset.
- parameters : dict
  StorageClass-specific parameters that specify, for example, a slice of the dataset to be loaded.

Returns:
- inMemoryDataset : object
  Requested dataset or slice thereof as an InMemoryDataset.

Raises:
- FileNotFoundError
  Requested dataset cannot be retrieved.
- TypeError
  Return value from formatter has unexpected type.
- ValueError
  Formatter failed to process the dataset.
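The lookup order and parameter handling described above can be sketched as follows. chained_get and the dict stand-ins are hypothetical, and the "slice" parameter is only a toy example of a StorageClass-specific parameter.

```python
def chained_get(children, ref, parameters=None):
    """Return the dataset from the first child store that has it.

    children is a list of dicts standing in for child datastores,
    in configuration (read) order.
    """
    for store in children:
        if ref in store:
            dataset = store[ref]
            # StorageClass-specific parameters would be applied here;
            # as a toy example we support a "slice" parameter on lists.
            if parameters and "slice" in parameters and isinstance(dataset, list):
                return dataset[parameters["slice"]]
            return dataset
    raise FileNotFoundError(f"dataset {ref!r} not found in any child datastore")
```

Because children are tried in order, a dataset present in several children is always served from the earliest one.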
getLookupKeys() → Set[LookupKey]¶ Return all the lookup keys relevant to this datastore.
Returns:
- keys : set of LookupKey
  The keys stored internally for looking up information based on DatasetType name or StorageClass.
getURI(ref: DatasetRef, predict: bool = False) → ButlerURI¶ URI to the Dataset.
The returned URI is from the first datastore in the list that has the dataset with preference given to the first dataset coming from a permanent datastore. If no datastores have the dataset and prediction is allowed, the predicted URI for the first datastore in the list will be returned.
Parameters:
- ref : DatasetRef
  Reference to the required dataset.
- predict : bool, optional
  If the datastore does not know about the dataset, should it return a predicted URI or not?

Returns:
- uri : ButlerURI
  URI pointing to the dataset within the datastore. If the dataset does not exist in the datastore, and if predict is True, the URI will be a prediction and will include a URI fragment "#predicted".
Raises:
- FileNotFoundError
  A URI has been requested for a dataset that does not exist and guessing is not allowed.
- RuntimeError
  Raised if a request is made for a single URI but multiple URIs are associated with this dataset.
Notes
If the datastore does not have entities that relate well to the concept of a URI the returned URI string will be descriptive. The returned URI is not guaranteed to be obtainable.
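The URI-selection rule described above (prefer the first permanent child that has the dataset, fall back to an ephemeral child, and only then predict) can be sketched as a toy function. chained_get_uri and its (is_ephemeral, store) pairs are hypothetical stand-ins, and the predicted-URI format is an assumption made only for illustration.

```python
def chained_get_uri(children, ref, predict=False):
    """children: list of (is_ephemeral, {ref: uri}) stand-in pairs."""
    ephemeral_hit = None
    for is_ephemeral, store in children:
        if ref in store:
            if not is_ephemeral:
                return store[ref]          # first permanent datastore wins
            if ephemeral_hit is None:
                ephemeral_hit = store[ref]  # remember first ephemeral hit
    if ephemeral_hit is not None:
        return ephemeral_hit
    if predict:
        # Hypothetical prediction: derive a URI from the first child and
        # mark it with the documented "#predicted" fragment.
        return f"store0/{ref}#predicted"
    raise FileNotFoundError(ref)
```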
getURIs(ref: DatasetRef, predict: bool = False) → Tuple[Optional[ButlerURI], Dict[str, ButlerURI]]¶ Return URIs associated with dataset.
Parameters:
- ref : DatasetRef
  Reference to the required dataset.
- predict : bool, optional
  If the datastore does not know about the dataset, should it return a predicted URI or not?
Returns:
- primary : ButlerURI or None
  The URI to the primary artifact associated with this dataset. Can be None if the dataset was disassembled within the datastore.
- components : dict of [str, ButlerURI]
  URIs to any components associated with the dataset artifact, indexed by component name.

Notes
The returned URI is from the first datastore in the list that has the dataset with preference given to the first dataset coming from a permanent datastore. If no datastores have the dataset and prediction is allowed, the predicted URI for the first datastore in the list will be returned.
ingest(*datasets, transfer: Optional[str] = None) → None¶ Ingest one or more files into the datastore.
Parameters:
- datasets : FileDataset
  Each positional argument is a struct containing information about a file to be ingested, including its path (either absolute or relative to the datastore root, if applicable), a complete DatasetRef (with dataset_id not None), and optionally a formatter class or its fully-qualified string name. If a formatter is not provided, the one the datastore would use for put on that dataset is assumed.
- transfer : str, optional
  How (and whether) the dataset should be added to the datastore. If None (default), the file must already be in a location appropriate for the datastore (e.g. within its root directory), and will not be modified. Other choices include "move", "copy", "link", "symlink", "relsymlink", and "hardlink". "link" is a special transfer mode that will first try to make a hardlink and, if that fails, a symlink will be used instead. "relsymlink" creates a relative symlink rather than using an absolute path. Most datastores do not support all transfer modes. "auto" is a special option that will let the datastore choose the most natural option for itself.
Raises:
- NotImplementedError
  Raised if the datastore does not support the given transfer mode (including the case where ingest is not supported at all).
- DatasetTypeNotSupportedError
  Raised if one or more files to be ingested have a dataset type that is not supported by the datastore.
- FileNotFoundError
  Raised if one of the given files does not exist.
- FileExistsError
  Raised if transfer is not None but the (internal) location the file would be moved to is already occupied.
Notes
Subclasses should implement _prepIngest and _finishIngest instead of implementing ingest directly. Datastores that hold and delegate to child datastores may want to call those methods as well.

Subclasses are encouraged to document their supported transfer modes in their class documentation.
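The "link" transfer mode described above (try a hardlink, fall back to a symlink, for example when source and destination are on different filesystems) can be sketched directly with the standard library. This mirrors the documented behaviour but is not the actual datastore implementation; link_transfer is a hypothetical helper.

```python
import os


def link_transfer(src, dst):
    """Place src at dst via hardlink, falling back to a symlink."""
    try:
        os.link(src, dst)          # hardlink: fails across filesystems
        return "hardlink"
    except OSError:
        # Fall back to an absolute symlink, as "link" mode is documented
        # to do; "relsymlink" would use a relative path instead.
        os.symlink(os.path.abspath(src), dst)
        return "symlink"
```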
put(inMemoryDataset: Any, ref: DatasetRef) → None¶

Write an InMemoryDataset with a given DatasetRef to each datastore.

The put() to child datastores can fail with DatasetTypeNotSupportedError. The put() for this datastore will be deemed to have succeeded so long as at least one child datastore accepted the inMemoryDataset.

Parameters:
- inMemoryDataset : object
  The dataset to store.
- ref : DatasetRef
  Reference to the associated Dataset.
Raises:
- TypeError
  Supplied object and storage class are inconsistent.
- DatasetTypeNotSupportedError
  All datastores reported DatasetTypeNotSupportedError.
remove(ref: DatasetRef) → None¶ Indicate to the datastore that a dataset can be removed.
The dataset will be removed from each datastore. The dataset is not required to exist in every child datastore.
Parameters:
- ref : DatasetRef
  Reference to the required dataset.
Raises:
- FileNotFoundError
  Attempt to remove a dataset that does not exist. Raised if none of the child datastores removed the dataset.
classmethod setConfigRoot(root: str, config: Config, full: Config, overwrite: bool = True) → None¶

Set any filesystem-dependent config options for child Datastores to be appropriate for a new empty repository with the given root.
Parameters:
- root : str
  Filesystem path to the root of the data repository.
- config : Config
  A Config to update. Only the subset understood by this component will be updated. Will not expand defaults.
- full : Config
  A complete config with all defaults expanded that can be converted to a DatastoreConfig. Read-only and will not be modified by this method. Repository-specific options that should not be obtained from defaults when Butler instances are constructed should be copied from full to config.
- overwrite : bool, optional
  If False, do not modify a value in config if the value already exists. Default is always to overwrite with the provided root.
Notes
If a keyword is explicitly defined in the supplied config it will not be overridden by this method if overwrite is False. This allows explicit values set in external configs to be retained.
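The overwrite rule in these Notes reduces to a simple conditional, sketched here with a plain dict standing in for the real Config object; set_config_root and the "root" key are hypothetical simplifications.

```python
def set_config_root(root, config, overwrite=True):
    """Record the datastore root in config, honouring overwrite.

    config is a plain dict standing in for a Config; "root" is a
    hypothetical key (real configs are more structured).
    """
    key = "root"
    if overwrite or key not in config:
        config[key] = root
    return config
```

With overwrite=False an explicitly configured value survives; otherwise the supplied root always wins.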
transaction() → Iterator[lsst.daf.butler.core.datastore.DatastoreTransaction]¶

Context manager supporting Datastore transactions.

Transactions can be nested, and are to be used in combination with Registry.transaction.
transfer(inputDatastore: Datastore, ref: DatasetRef) → None¶

Retrieve a dataset from an input Datastore, and store the result in this Datastore.

Parameters:
- inputDatastore : Datastore
  The external Datastore from which to retrieve the Dataset.
- ref : DatasetRef
  Reference to the required dataset in the input data store.

Returns:
- results : list
  List containing the return value from the put() to each child datastore.
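The transfer semantics above (read from the input datastore, then put to each child) can be sketched with dicts standing in for datastores; chained_transfer is a hypothetical helper, not the LSST implementation.

```python
def chained_transfer(input_store, children, ref):
    """Copy ref from input_store into every child datastore."""
    if ref not in input_store:
        raise FileNotFoundError(ref)
    dataset = input_store[ref]
    results = []
    for child in children:
        child[ref] = dataset      # stands in for child.put(dataset, ref)
        results.append(None)      # put() returns None
    return results
```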
trash(ref: DatasetRef, ignore_errors: bool = True) → None¶ Indicate to the Datastore that a Dataset can be moved to the trash.
Parameters:
- ref : DatasetRef
  Reference to the required Dataset.
- ignore_errors : bool, optional
  Determine whether errors should be ignored.
Raises:
- FileNotFoundError
  When Dataset does not exist.
Notes
Some Datastores may implement this method as a silent no-op to disable Dataset deletion through standard interfaces.
validateConfiguration(entities: Iterable[Union[DatasetRef, DatasetType, StorageClass]], logFailures: bool = False) → None¶ Validate some of the configuration for this datastore.
Parameters:
- entities : iterable of DatasetRef, DatasetType, or StorageClass
  Entities to test against this configuration. Can be differing types.
- logFailures : bool, optional
  If True, output a log message for every validation error detected.

Raises:
- DatastoreValidationError
  Raised if there is a validation problem with a configuration. All the problems are reported in a single exception.
Notes
This method checks each datastore in turn.
validateKey(lookupKey: LookupKey, entity: Union[DatasetRef, DatasetType, StorageClass]) → None¶ Validate a specific look up key with supplied entity.
Parameters:
- lookupKey : LookupKey
  Key to use to retrieve information from the datastore configuration.
- entity : DatasetRef, DatasetType, or StorageClass
  Entity to compare with configuration retrieved using the specified lookup key.
Raises:
- DatastoreValidationError
  Raised if there is a problem with the combination of entity and lookup key.
Notes
Bypasses the normal selection priorities by allowing a key that would normally not be selected to be validated.