Butler

class lsst.daf.butler.Butler(config: Union[lsst.daf.butler.core.config.Config, str, None] = None, *, butler: Optional[lsst.daf.butler._butler.Butler] = None, collections: Optional[Any] = None, run: Optional[str] = None, searchPaths: Optional[List[str]] = None, writeable: Optional[bool] = None, inferDefaults: bool = True, **kwargs)

Bases: object

Main entry point for the data access system.
Parameters:

- config : ButlerConfig, Config, or str, optional
    Configuration. Anything acceptable to the ButlerConfig constructor. If a directory path is given, the configuration will be read from a butler.yaml file in that location. If None is given, default values will be used.
- butler : Butler, optional
    If provided, construct a new Butler that uses the same registry and datastore as the given one, but with the given collections and run. Incompatible with the config, searchPaths, and writeable arguments.
- collections : str or Iterable [str], optional
    An expression specifying the collections to be searched (in order) when reading datasets. This may be a str collection name or an iterable thereof. See Collection expressions for more information. These collections are not registered automatically and must be manually registered before they are used by any method, but they may be manually registered after the Butler is initialized.
- run : str, optional
    Name of the RUN collection new datasets should be inserted into. If collections is None and run is not None, collections will be set to [run]. If not None, this collection will automatically be registered. If this is not set (and writeable is not set either), a read-only butler will be created.
- searchPaths : list of str, optional
    Directory paths to search when calculating the full Butler configuration. Not used if the supplied config is already a ButlerConfig.
- writeable : bool, optional
    Explicitly sets whether the butler supports write operations. If not provided, a read-write butler is created if any of run, tags, or chains is non-empty.
- inferDefaults : bool, optional
    If True (default), infer default data ID values from the values present in the datasets in collections: if all collections have the same value (or no value) for a governor dimension, that value will be the default for that dimension. Nonexistent collections are ignored. If a default value is provided explicitly for a governor dimension via **kwargs, no default will be inferred for that dimension.
- **kwargs : str
    Default data ID key-value pairs. These may only identify "governor" dimensions like instrument and skymap.
Examples

While there are many ways to control exactly how a Butler interacts with the collections in its Registry, the most common cases are still simple.

For a read-only Butler that searches one collection, do:

    butler = Butler("/path/to/repo", collections=["u/alice/DM-50000"])

For a read-write Butler that writes to and reads from a RUN collection:

    butler = Butler("/path/to/repo", run="u/alice/DM-50000/a")

The Butler passed to a PipelineTask is often much more complex, because we want to write to one RUN collection but read from several others (as well):

    butler = Butler("/path/to/repo",
                    run="u/alice/DM-50000/a",
                    collections=["u/alice/DM-50000/a",
                                 "u/bob/DM-49998",
                                 "HSC/defaults"])

This butler will put new datasets to the run u/alice/DM-50000/a. Datasets will be read first from that run (since it appears first in the chain), and then from u/bob/DM-49998 and finally HSC/defaults.

Finally, one can always create a Butler with no collections:

    butler = Butler("/path/to/repo", writeable=True)

This can be extremely useful when you just want to use butler.registry, e.g. for inserting dimension data or managing collections, or when the collections you want to use with the butler are not consistent. Passing writeable explicitly here is only necessary if you want to be able to make changes to the repo; usually the value for writeable can be guessed from the collection arguments provided, but it defaults to False when there are no collection arguments.

Attributes Summary
- GENERATION: This is a Generation 3 Butler.
- collections: The collections to search by default, in order (CollectionSearch).
- run: Name of the run this butler writes outputs to by default (str or None).

Methods Summary

- datasetExists(datasetRefOrType, …): Return True if the Dataset is actually present in the Datastore.
- export(*, directory, filename, format, transfer): Export datasets from the repository represented by this Butler.
- get(datasetRefOrType, …): Retrieve a stored dataset.
- getDeferred(datasetRefOrType, …): Create a DeferredDatasetHandle which can later retrieve a dataset, after an immediate registry lookup.
- getDirect(ref, *, parameters): Retrieve a stored dataset.
- getDirectDeferred(ref, *, parameters): Create a DeferredDatasetHandle which can later retrieve a dataset, from a resolved DatasetRef.
- getURI(datasetRefOrType, …): Return the URI to the Dataset.
- getURIs(datasetRefOrType, …): Returns the URIs associated with the dataset.
- import_(*, directory, filename, format, transfer, …): Import datasets into this repository that were exported from a different butler repository via export.
- ingest(*datasets, transfer, run, …): Store and register one or more datasets that already exist on disk.
- isWriteable(): Return True if this Butler supports write operations.
- makeRepo(root, config, …): Create an empty data repository by adding a butler.yaml config to a repository root directory.
- pruneCollection(name, purge, unstore, unlink): Remove a collection and possibly prune datasets within it.
- pruneDatasets(refs, *, disassociate, …): Remove one or more datasets from a collection and/or storage.
- put(obj, datasetRefOrType, …): Store and register a dataset.
- removeRuns(names, unstore): Remove one or more RUN collections and the datasets within them.
- retrieveArtifacts(refs, destination, …): Retrieve the artifacts associated with the supplied refs.
- transaction(): Context manager supporting Butler transactions.
- transfer_from(source_butler, source_refs, …): Transfer datasets to this Butler from a run in another Butler.
- validateConfiguration(logFailures, …): Validate butler configuration.

Attributes Documentation
GENERATION = 3

    This is a Generation 3 Butler.

    This attribute may be removed in the future, once the Generation 2 Butler interface has been fully retired; it should only be used in transitional code.

collections

    The collections to search by default, in order (CollectionSearch).

    This is an alias for self.registry.defaults.collections. It cannot be set directly in isolation, but all defaults may be changed together by assigning a new RegistryDefaults instance to self.registry.defaults.

run

    Name of the run this butler writes outputs to by default (str or None).

    This is an alias for self.registry.defaults.run. It cannot be set directly in isolation, but all defaults may be changed together by assigning a new RegistryDefaults instance to self.registry.defaults, as shown in the sketch below.
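For example, an illustrative sketch of replacing all defaults at once (the collection and run names are placeholders, butler is an existing Butler instance, and it is assumed RegistryDefaults is importable from lsst.daf.butler.registry):

    from lsst.daf.butler.registry import RegistryDefaults

    # Replace the default run and collections together; they cannot be
    # assigned individually.
    butler.registry.defaults = RegistryDefaults(
        collections=["u/alice/DM-50000"], run="u/alice/DM-50000/a"
    )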
Methods Documentation
datasetExists(datasetRefOrType: Union[lsst.daf.butler.core.datasets.ref.DatasetRef, lsst.daf.butler.core.datasets.type.DatasetType, str], dataId: Union[lsst.daf.butler.core.dimensions._coordinate.DataCoordinate, Mapping[str, Any], None] = None, *, collections: Optional[Any] = None, **kwargs) → bool

Return True if the Dataset is actually present in the Datastore.

Parameters:

- datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof.
- dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
- collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
- **kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters.

Raises:

- LookupError
    Raised if the dataset is not even present in the Registry.
- ValueError
    Raised if a resolved DatasetRef was passed as an input, but it differs from the one found in the registry.
- TypeError
    Raised if no collections were provided.
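For example, a minimal illustrative sketch (the dataset type name and data ID values are placeholders; butler is an existing Butler whose collections contain the dataset type):

    # Note: datasetExists raises LookupError if the dataset is not known
    # to the Registry at all; it returns False only when the Registry
    # knows the dataset but the Datastore does not have it.
    if butler.datasetExists("calexp", instrument="HSC", detector=50, visit=1228):
        calexp = butler.get("calexp", instrument="HSC", detector=50, visit=1228)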
export(*, directory: Optional[str] = None, filename: Optional[str] = None, format: Optional[str] = None, transfer: Optional[str] = None) → Iterator[lsst.daf.butler.transfers._context.RepoExportContext]

Export datasets from the repository represented by this Butler.

This method is a context manager that returns a helper object (RepoExportContext) that is used to indicate what information from the repository should be exported.

Parameters:

- directory : str, optional
    Directory dataset files should be written to if transfer is not None.
- filename : str, optional
    Name for the file that will include database information associated with the exported datasets. If this is not an absolute path and directory is not None, it will be written to directory instead of the current working directory. Defaults to "export.{format}".
- format : str, optional
    File format for the database information file. If None, the extension of filename will be used.
- transfer : str, optional
    Transfer mode passed to Datastore.export.

Raises:

- TypeError
    Raised if the set of arguments passed is inconsistent.

Examples

Typically the Registry.queryDataIds and Registry.queryDatasets methods are used to provide the iterables over data IDs and/or datasets to be exported (note that all arguments to export are keyword-only):

    with butler.export(filename="exports.yaml") as export:
        # Export all flats, but none of the dimension element rows
        # (i.e. data ID information) associated with them.
        export.saveDatasets(butler.registry.queryDatasets("flat"),
                            elements=())
        # Export all datasets that start with "deepCoadd_" and all of
        # their associated data ID information.
        export.saveDatasets(butler.registry.queryDatasets("deepCoadd_*"))
get(datasetRefOrType: Union[lsst.daf.butler.core.datasets.ref.DatasetRef, lsst.daf.butler.core.datasets.type.DatasetType, str], dataId: Union[lsst.daf.butler.core.dimensions._coordinate.DataCoordinate, Mapping[str, Any], None] = None, *, parameters: Optional[Dict[str, Any]] = None, collections: Optional[Any] = None, **kwargs) → Any

Retrieve a stored dataset.

Parameters:

- datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof.
- dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
- parameters : dict
    Additional StorageClass-defined options to control reading, typically used to efficiently read only a subset of the dataset.
- collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
- **kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters.

Returns:

- obj : object
    The dataset.

Raises:

- ValueError
    Raised if a resolved DatasetRef was passed as an input, but it differs from the one found in the registry.
- LookupError
    Raised if no matching dataset exists in the Registry.
- TypeError
    Raised if no collections were provided.

Notes

When looking up datasets in a CALIBRATION collection, this method requires that the given data ID include temporal dimensions beyond the dimensions of the dataset type itself, in order to find the dataset with the appropriate validity range. For example, a "bias" dataset with native dimensions {instrument, detector} could be fetched with a {instrument, detector, exposure} data ID, because exposure is a temporal dimension.
getDeferred(datasetRefOrType: Union[lsst.daf.butler.core.datasets.ref.DatasetRef, lsst.daf.butler.core.datasets.type.DatasetType, str], dataId: Union[lsst.daf.butler.core.dimensions._coordinate.DataCoordinate, Mapping[str, Any], None] = None, *, parameters: Optional[dict] = None, collections: Optional[Any] = None, **kwargs) → lsst.daf.butler._deferredDatasetHandle.DeferredDatasetHandle

Create a DeferredDatasetHandle which can later retrieve a dataset, after an immediate registry lookup.

Parameters:

- datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof.
- dataId : dict or DataCoordinate, optional
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
- parameters : dict
    Additional StorageClass-defined options to control reading, typically used to efficiently read only a subset of the dataset.
- collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
- **kwargs
    Additional keyword arguments used to augment or construct a DataId. See DataId parameters.

Returns:

- obj : DeferredDatasetHandle
    A handle which can be used to retrieve a dataset at a later time.

Raises:

- LookupError
    Raised if no matching dataset exists in the Registry (and allowUnresolved is False).
- ValueError
    Raised if a resolved DatasetRef was passed as an input, but it differs from the one found in the registry.
- TypeError
    Raised if no collections were provided.
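For example, an illustrative sketch (the dataset type and data ID values are placeholders):

    # The registry lookup (and its error checking) happens immediately;
    # the datastore read is postponed until the handle is used.
    handle = butler.getDeferred("calexp", instrument="HSC", detector=50,
                                visit=1228)
    calexp = handle.get()  # the actual read happens here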
getDirect(ref: lsst.daf.butler.core.datasets.ref.DatasetRef, *, parameters: Optional[Dict[str, Any]] = None) → Any

Retrieve a stored dataset.

Unlike Butler.get, this method allows datasets outside the Butler's collection to be read as long as the DatasetRef that identifies them can be obtained separately.

Parameters:

- ref : DatasetRef
    Resolved reference to an already stored dataset.
- parameters : dict
    Additional StorageClass-defined options to control reading, typically used to efficiently read only a subset of the dataset.

Returns:

- obj : object
    The dataset.
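For example, an illustrative sketch that reads every dataset found by a registry query, bypassing the butler's default collection search (the dataset type and collection name are placeholders):

    for ref in butler.registry.queryDatasets("flat", collections="HSC/calib"):
        flat = butler.getDirect(ref)  # refs from queryDatasets are resolved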
getDirectDeferred(ref: lsst.daf.butler.core.datasets.ref.DatasetRef, *, parameters: Optional[dict] = None) → lsst.daf.butler._deferredDatasetHandle.DeferredDatasetHandle

Create a DeferredDatasetHandle which can later retrieve a dataset, from a resolved DatasetRef.

Parameters:

- ref : DatasetRef
    Resolved reference to an already stored dataset.
- parameters : dict
    Additional StorageClass-defined options to control reading, typically used to efficiently read only a subset of the dataset.

Returns:

- obj : DeferredDatasetHandle
    A handle which can be used to retrieve a dataset at a later time.

Raises:

- AmbiguousDatasetError
    Raised if ref.id is None, i.e. the reference is unresolved.
getURI(datasetRefOrType: Union[lsst.daf.butler.core.datasets.ref.DatasetRef, lsst.daf.butler.core.datasets.type.DatasetType, str], dataId: Union[lsst.daf.butler.core.dimensions._coordinate.DataCoordinate, Mapping[str, Any], None] = None, *, predict: bool = False, collections: Optional[Any] = None, run: Optional[str] = None, **kwargs) → lsst.daf.butler.core._butlerUri._butlerUri.ButlerURI

Return the URI to the Dataset.

Parameters:

- datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof.
- dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
- predict : bool
    If True, allow URIs to be returned of datasets that have not been written.
- collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
- run : str, optional
    Run to use for predictions, overriding self.run.
- **kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters.

Returns:

- uri : ButlerURI
    URI pointing to the Dataset within the datastore. If the Dataset does not exist in the datastore, and if predict is True, the URI will be a prediction and will include a URI fragment "#predicted". If the datastore does not have entities that relate well to the concept of a URI the returned URI string will be descriptive. The returned URI is not guaranteed to be obtainable.

Raises:

- LookupError
    A URI has been requested for a dataset that does not exist and guessing is not allowed.
- ValueError
    Raised if a resolved DatasetRef was passed as an input, but it differs from the one found in the registry.
- TypeError
    Raised if no collections were provided.
- RuntimeError
    Raised if a URI is requested for a dataset that consists of multiple artifacts.
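For example, an illustrative sketch (the data ID values, run, and dataset type are placeholders):

    # URI of an existing dataset.
    uri = butler.getURI("calexp", instrument="HSC", detector=50, visit=1228)

    # Predicted URI of a dataset that has not been written yet; the
    # result carries a "#predicted" fragment.
    future = butler.getURI("calexp", instrument="HSC", detector=50,
                           visit=1229, predict=True,
                           run="u/alice/DM-50000/a")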
getURIs(datasetRefOrType: Union[lsst.daf.butler.core.datasets.ref.DatasetRef, lsst.daf.butler.core.datasets.type.DatasetType, str], dataId: Union[lsst.daf.butler.core.dimensions._coordinate.DataCoordinate, Mapping[str, Any], None] = None, *, predict: bool = False, collections: Optional[Any] = None, run: Optional[str] = None, **kwargs) → Tuple[Optional[lsst.daf.butler.core._butlerUri._butlerUri.ButlerURI], Dict[str, lsst.daf.butler.core._butlerUri._butlerUri.ButlerURI]]

Returns the URIs associated with the dataset.

Parameters:

- datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof.
- dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
- predict : bool
    If True, allow URIs to be returned of datasets that have not been written.
- collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
- run : str, optional
    Run to use for predictions, overriding self.run.
- **kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters.

Returns:

- primary : ButlerURI or None
    The URI to the primary artifact associated with this dataset. May be None if the dataset was disassembled within the datastore into components only.
- components : dict [str, ButlerURI]
    URIs of any components associated with the dataset artifact; can be empty if there are none.
import_(*, directory: Optional[str] = None, filename: Union[str, TextIO, None] = None, format: Optional[str] = None, transfer: Optional[str] = None, skip_dimensions: Optional[Set[T]] = None, idGenerationMode: lsst.daf.butler.registry.interfaces._datasets.DatasetIdGenEnum = <DatasetIdGenEnum.UNIQUE: 0>, reuseIds: bool = False) → None

Import datasets into this repository that were exported from a different butler repository via export.

Parameters:

- directory : str, optional
    Directory containing dataset files to import from. If None, filename and all dataset file paths specified therein must be absolute.
- filename : str or TextIO, optional
    A stream or name of file that contains database information associated with the exported datasets, typically generated by export. If this is a string (name) and is not an absolute path, does not exist in the current working directory, and directory is not None, it is assumed to be in directory. Defaults to "export.{format}".
- format : str, optional
    File format for filename. If None, the extension of filename will be used.
- transfer : str, optional
    Transfer mode passed to ingest.
- skip_dimensions : set, optional
    Names of dimensions that should be skipped and not imported.
- idGenerationMode : DatasetIdGenEnum, optional
    Specifies option for generating dataset IDs when IDs are not provided or their type does not match backend type. By default unique IDs are generated for each inserted dataset.
- reuseIds : bool, optional
    If True, force re-use of imported dataset IDs for integer IDs, which are normally generated as auto-incremented; an exception will be raised if imported IDs clash with existing ones. This option has no effect on the use of globally-unique IDs, which are always re-used (or generated if integer IDs are being imported).

Raises:

- TypeError
    Raised if the set of arguments passed is inconsistent, or if the butler is read-only.
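For example, an illustrative sketch that imports the files written by the export example above (the directory path is a placeholder):

    # Read the database export file and copy the dataset files into this
    # repository's datastore.
    butler.import_(directory="/path/to/export/dir", filename="exports.yaml",
                   transfer="copy")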
ingest(*datasets, transfer: Optional[str] = 'auto', run: Optional[str] = None, idGenerationMode: lsst.daf.butler.registry.interfaces._datasets.DatasetIdGenEnum = <DatasetIdGenEnum.UNIQUE: 0>) → None

Store and register one or more datasets that already exist on disk.

Parameters:

- datasets : FileDataset
    Each positional argument is a struct containing information about a file to be ingested, including its URI (either absolute or relative to the datastore root, if applicable), a DatasetRef, and optionally a formatter class or its fully-qualified string name. If a formatter is not provided, the formatter that would be used for put is assumed. On successful return, all FileDataset.ref attributes will have their DatasetRef.id attribute populated and all FileDataset.formatter attributes will be set to the formatter class used. FileDataset.path attributes may be modified to put paths in whatever the datastore considers a standardized form.
- transfer : str, optional
    If not None, must be one of 'auto', 'move', 'copy', 'direct', 'split', 'hardlink', 'relsymlink' or 'symlink', indicating how to transfer the file.
- run : str, optional
    The name of the run ingested datasets should be added to, overriding self.run.
- idGenerationMode : DatasetIdGenEnum, optional
    Specifies option for generating dataset IDs. By default unique IDs are generated for each inserted dataset.

Raises:

- TypeError
    Raised if the butler is read-only or if no run was provided.
- NotImplementedError
    Raised if the Datastore does not support the given transfer mode.
- DatasetTypeNotSupportedError
    Raised if one or more files to be ingested have a dataset type that is not supported by the Datastore.
- FileNotFoundError
    Raised if one of the given files does not exist.
- FileExistsError
    Raised if transfer is not None but the (internal) location the file would be moved to is already occupied.

Notes

This operation is not fully exception safe: if a database operation fails, the given FileDataset instances may be only partially updated.

It is atomic in terms of database operations (they will either all succeed or all fail) provided the database engine implements transactions correctly. It will attempt to be atomic in terms of filesystem operations as well, but this cannot be implemented rigorously for most datastores.
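For example, an illustrative sketch of ingesting a file already on disk (the dataset type, data ID values, file path, and run name are all placeholders):

    from lsst.daf.butler import DatasetRef, FileDataset

    datasetType = butler.registry.getDatasetType("raw")
    dataId = butler.registry.expandDataId(instrument="HSC", detector=50,
                                          exposure=1228)
    dataset = FileDataset(path="/path/to/raw.fits",
                          refs=DatasetRef(datasetType, dataId))
    # Symlink the file into the datastore rather than copying it.
    butler.ingest(dataset, transfer="symlink", run="HSC/raw/all")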
static makeRepo(root: str, config: Union[lsst.daf.butler.core.config.Config, str, None] = None, dimensionConfig: Union[lsst.daf.butler.core.config.Config, str, None] = None, standalone: bool = False, searchPaths: Optional[List[str]] = None, forceConfigRoot: bool = True, outfile: Optional[str] = None, overwrite: bool = False) → lsst.daf.butler.core.config.Config

Create an empty data repository by adding a butler.yaml config to a repository root directory.

Parameters:

- root : str or ButlerURI
    Path or URI to the root location of the new repository. Will be created if it does not exist.
- config : Config or str, optional
    Configuration to write to the repository, after setting any root-dependent Registry or Datastore config options. Can not be a ButlerConfig or a ConfigSubset. If None, default configuration will be used. Root-dependent config options specified in this config are overwritten if forceConfigRoot is True.
- dimensionConfig : Config or str, optional
    Configuration for dimensions, will be used to initialize the registry database.
- standalone : bool
    If True, write all expanded defaults, not just customized or repository-specific settings. This (mostly) decouples the repository from the default configuration, insulating it from changes to the defaults (which may be good or bad, depending on the nature of the changes). Future additions to the defaults will still be picked up when initializing Butlers to repos created with standalone=True.
- searchPaths : list of str, optional
    Directory paths to search when calculating the full butler configuration.
- forceConfigRoot : bool, optional
    If False, any values present in the supplied config that would normally be reset are not overridden and will appear directly in the output config. This allows non-standard overrides of the root directory for a datastore or registry to be given. If this parameter is True the values for root will be forced into the resulting config if appropriate.
- outfile : str, optional
    If not None, the output configuration will be written to this location rather than into the repository itself. Can be a URI string. Can refer to a directory that will be used to write butler.yaml.
- overwrite : bool, optional
    Create a new configuration file even if one already exists in the specified output location. Default is to raise an exception.

Returns:

- config : Config
    The updated Config instance written to the repo.

Raises:

- ValueError
    Raised if a ButlerConfig or ConfigSubset is passed instead of a regular Config (as these subclasses would make it impossible to support standalone=False).
- FileExistsError
    Raised if the output config file already exists.
- os.error
    Raised if the directory does not exist, exists but is not a directory, or cannot be created.

Notes

Note that when standalone=False (the default), the configuration search path (see ConfigSubset.defaultSearchPaths) that was used to construct the repository should also be used to construct any Butlers to avoid configuration inconsistencies.
pruneCollection(name: str, purge: bool = False, unstore: bool = False, unlink: Optional[List[str]] = None) → None

Remove a collection and possibly prune datasets within it.

Parameters:

- name : str
    Name of the collection to remove. If this is a TAGGED or CHAINED collection, datasets within the collection are not modified unless unstore is True. If this is a RUN collection, purge and unstore must be True, and all datasets in it are fully removed from the data repository.
- purge : bool, optional
    If True, permit RUN collections to be removed, fully removing datasets within them. Requires unstore=True as well, as an added precaution against accidental deletion. Must be False (default) if the collection is not a RUN.
- unstore : bool, optional
    If True, remove all datasets in the collection from all datastores in which they appear.
- unlink : list [str], optional
    Before removing the given collection, unlink it from these parent collections.

Raises:

- TypeError
    Raised if the butler is read-only or arguments are mutually inconsistent.
pruneDatasets(refs: Iterable[lsst.daf.butler.core.datasets.ref.DatasetRef], *, disassociate: bool = True, unstore: bool = False, tags: Iterable[str] = (), purge: bool = False, run: Optional[str] = None) → None

Remove one or more datasets from a collection and/or storage.

Parameters:

- refs : Iterable of DatasetRef
    Datasets to prune. These must be "resolved" references (not just a DatasetType and data ID).
- disassociate : bool, optional
    Disassociate pruned datasets from tags, or from all collections if purge=True.
- unstore : bool, optional
    If True (False is default), remove these datasets from all datastores known to this butler. Note that this will make it impossible to retrieve these datasets even via other collections. Datasets that are already not stored are ignored by this option.
- tags : Iterable [str], optional
    TAGGED collections to disassociate the datasets from. Ignored if disassociate is False or purge is True.
- purge : bool, optional
    If True (False is default), completely remove the dataset from the Registry. To prevent accidental deletions, purge may only be True if all of the following conditions are met:

    - disassociate is True;
    - unstore is True.

    This mode may remove provenance information from datasets other than those provided, and should be used with extreme care.

Raises:

- TypeError
    Raised if the butler is read-only, if no collection was provided, or the conditions for purge=True were not met.
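For example, an illustrative sketch that fully removes query results from the registry and all datastores (the dataset type and collection are placeholders); this is irreversible, so use it with extreme care:

    refs = list(butler.registry.queryDatasets(
        "calexp", collections="u/alice/DM-50000/a"))
    # purge=True requires disassociate=True and unstore=True.
    butler.pruneDatasets(refs, disassociate=True, unstore=True, purge=True)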
put(obj: Any, datasetRefOrType: Union[lsst.daf.butler.core.datasets.ref.DatasetRef, lsst.daf.butler.core.datasets.type.DatasetType, str], dataId: Union[lsst.daf.butler.core.dimensions._coordinate.DataCoordinate, Mapping[str, Any], None] = None, *, run: Optional[str] = None, **kwargs) → lsst.daf.butler.core.datasets.ref.DatasetRef

Store and register a dataset.

Parameters:

- obj : object
    The dataset.
- datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef is provided, dataId should be None. Otherwise the DatasetType or name thereof.
- dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the second argument.
- run : str, optional
    The name of the run the dataset should be added to, overriding self.run.
- **kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters.

Returns:

- ref : DatasetRef
    A reference to the stored dataset, updated with the correct id if given.

Raises:

- TypeError
    Raised if the butler is read-only or if no run has been provided.
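For example, an illustrative sketch (calexp stands for an in-memory object matching the dataset type's StorageClass; the dataset type and data ID values are placeholders, and the butler is assumed to have been constructed with a run):

    ref = butler.put(calexp, "calexp", instrument="HSC", detector=50,
                     visit=1228)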
removeRuns(names: Iterable[str], unstore: bool = True) → None

Remove one or more RUN collections and the datasets within them.

Parameters:

- names : Iterable [str]
    The names of the collections to remove.
- unstore : bool, optional
    If True (default), delete datasets from all datastores in which they are present, and attempt to roll back the registry deletions if datastore deletions fail (which may not always be possible). If False, datastore records for these datasets are still removed, but any artifacts (e.g. files) will not be.

Raises:

- TypeError
    Raised if one or more collections are not of type RUN.
retrieveArtifacts(refs: Iterable[lsst.daf.butler.core.datasets.ref.DatasetRef], destination: Union[str, lsst.daf.butler.core._butlerUri._butlerUri.ButlerURI], transfer: str = 'auto', preserve_path: bool = True, overwrite: bool = False) → List[lsst.daf.butler.core._butlerUri._butlerUri.ButlerURI]

Retrieve the artifacts associated with the supplied refs.

Parameters:

- refs : iterable of DatasetRef
    The datasets for which artifacts are to be retrieved. A single ref can result in multiple artifacts. The refs must be resolved.
- destination : ButlerURI or str
    Location to write the artifacts.
- transfer : str, optional
    Method to use to transfer the artifacts. Must be one of the options supported by ButlerURI.transfer_from(). "move" is not allowed.
- preserve_path : bool, optional
    If True the full path of the artifact within the datastore is preserved. If False the final file component of the path is used.
- overwrite : bool, optional
    If True allow transfers to overwrite existing files at the destination.

Returns:

- targets : list of ButlerURI
    URIs of the file artifacts written to the destination location.

Notes

For non-file datastores the artifacts written to the destination may not match the representation inside the datastore. For example a hierarchical data structure in a NoSQL database may well be stored as a JSON file.
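For example, an illustrative sketch (the dataset type, collection, and destination directory are placeholders):

    refs = butler.registry.queryDatasets("raw", collections="HSC/raw/all")
    # Copy the underlying files into a local directory, preserving their
    # datastore-relative paths.
    paths = butler.retrieveArtifacts(refs, destination="/tmp/artifacts",
                                     transfer="copy")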
transaction() → Iterator[None]

Context manager supporting Butler transactions.

Transactions can be nested.
transfer_from(source_butler: lsst.daf.butler._butler.Butler, source_refs: Iterable[lsst.daf.butler.core.datasets.ref.DatasetRef], transfer: str = 'auto', id_gen_map: Optional[Dict[str, lsst.daf.butler.registry.interfaces._datasets.DatasetIdGenEnum]] = None, skip_missing: bool = True) → List[lsst.daf.butler.core.datasets.ref.DatasetRef]

Transfer datasets to this Butler from a run in another Butler.

Parameters:

- source_butler : Butler
    Butler from which the datasets are to be transferred.
- source_refs : iterable of DatasetRef
    Datasets defined in the source butler that should be transferred to this butler.
- transfer : str, optional
    Transfer mode passed to transfer_from.
- id_gen_map : dict of [str, DatasetIdGenEnum], optional
    A mapping of dataset type to ID generation mode. Only used if the source butler is using integer IDs. Should not be used if this receiving butler uses integer IDs. Without this, dataset import always uses the UNIQUE ID generation mode.
- skip_missing : bool
    If True, datasets with no datastore artifact associated with them are not transferred. If False, a registry entry will be created even if no datastore record is created (and so will look equivalent to the dataset being unstored).

Returns:

- refs : list of DatasetRef
    The refs added to this Butler.

Notes

Requires that any dimension definitions are already present in the receiving Butler. The datastore artifact has to exist for a transfer to be made, but non-existence is not an error.

Datasets that already exist in this run will be skipped.

The datasets are imported as part of a transaction, although dataset types are registered before the transaction is started. This means that it is possible for a dataset type to be registered even though the transfer has failed.
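For example, an illustrative sketch (the source repository path, dataset type, and collection are placeholders):

    source = Butler("/path/to/other/repo")
    refs = source.registry.queryDatasets("calexp",
                                         collections="u/bob/DM-49998")
    # Copy the artifacts and register the datasets in this butler.
    butler.transfer_from(source, refs, transfer="copy")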
validateConfiguration(logFailures: bool = False, datasetTypeNames: Optional[Iterable[str]] = None, ignore: Optional[Iterable[str]] = None) → None

Validate butler configuration.

Checks that each DatasetType can be stored in the Datastore.

Parameters:

- logFailures : bool, optional
    If True, output a log message for every validation error detected.
- datasetTypeNames : iterable of str, optional
    The DatasetType names that should be checked. This allows only a subset to be selected.
- ignore : iterable of str, optional
    Names of DatasetTypes to skip over. This can be used to skip known problems. If a named DatasetType corresponds to a composite, all components of that DatasetType will also be ignored.

Raises:

- ButlerValidationError
    Raised if there is some inconsistency with how this Butler is configured.