Butler
class lsst.daf.butler.Butler(config: Config | str | None = None, *, butler: Butler | None = None, collections: Any | None = None, run: str | None = None, searchPaths: List[str] | None = None, writeable: bool | None = None, inferDefaults: bool = True, **kwargs: str)

Bases: LimitedButler

Main entry point for the data access system.

Parameters:
config : ButlerConfig, Config, or str, optional
    Configuration. Anything acceptable to the ButlerConfig constructor. If a directory path is given the configuration will be read from a butler.yaml file in that location. If None is given default values will be used.
butler : Butler, optional
    If provided, construct a new Butler that uses the same registry and datastore as the given one, but with the given collection and run. Incompatible with the config, searchPaths, and writeable arguments.
collections : str or Iterable[str], optional
    An expression specifying the collections to be searched (in order) when reading datasets. This may be a str collection name or an iterable thereof. See Collection expressions for more information. These collections are not registered automatically and must be manually registered before they are used by any method, but they may be manually registered after the Butler is initialized.
run : str, optional
    Name of the RUN collection new datasets should be inserted into. If collections is None and run is not None, collections will be set to [run]. If not None, this collection will automatically be registered. If this is not set (and writeable is not set either), a read-only butler will be created.
searchPaths : list of str, optional
    Directory paths to search when calculating the full Butler configuration. Not used if the supplied config is already a ButlerConfig.
writeable : bool, optional
    Explicitly sets whether the butler supports write operations. If not provided, a read-write butler is created if any of run, tags, or chains is non-empty.
inferDefaults : bool, optional
    If True (default), infer default data ID values from the values present in the datasets in collections: if all collections have the same value (or no value) for a governor dimension, that value will be the default for that dimension. Nonexistent collections are ignored. If a default value is provided explicitly for a governor dimension via **kwargs, no default will be inferred for that dimension.
**kwargs : str
    Default data ID key-value pairs. These may only identify "governor" dimensions like instrument and skymap.
 
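For example, a governor-dimension default can be supplied directly at construction (a minimal sketch; the repository path, collection name, and instrument value are placeholders):

    from lsst.daf.butler import Butler

    # "instrument" is a governor dimension, so it can be passed as a
    # default data ID value; later data IDs may then omit it.
    butler = Butler("/path/to/repo", collections=["HSC/defaults"],
                    instrument="HSC")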
Examples

While there are many ways to control exactly how a Butler interacts with the collections in its Registry, the most common cases are still simple.

For a read-only Butler that searches one collection, do:

    butler = Butler("/path/to/repo", collections=["u/alice/DM-50000"])

For a read-write Butler that writes to and reads from a RUN collection:

    butler = Butler("/path/to/repo", run="u/alice/DM-50000/a")

The Butler passed to a PipelineTask is often much more complex, because we want to write to one RUN collection but read from several others (as well):

    butler = Butler("/path/to/repo",
                    run="u/alice/DM-50000/a",
                    collections=["u/alice/DM-50000/a", "u/bob/DM-49998", "HSC/defaults"])

This butler will put new datasets to the run u/alice/DM-50000/a. Datasets will be read first from that run (since it appears first in the chain), and then from u/bob/DM-49998 and finally HSC/defaults.

Finally, one can always create a Butler with no collections:

    butler = Butler("/path/to/repo", writeable=True)

This can be extremely useful when you just want to use butler.registry, e.g. for inserting dimension data or managing collections, or when the collections you want to use with the butler are not consistent. Passing writeable explicitly here is only necessary if you want to be able to make changes to the repo; usually the value for writeable can be guessed from the collection arguments provided, but it defaults to False when there are no collection arguments.

Attributes Summary

GENERATION
    This is a Generation 3 Butler.
collections
    The collections to search by default, in order (Sequence[str]).
dimensions
    Structure managing all dimensions recognized by this data repository (DimensionUniverse).
run
    Name of the run this butler writes outputs to by default (str or None).

Methods Summary

datasetExists(datasetRefOrType[, dataId, ...])
    Return True if the Dataset is actually present in the Datastore.
datasetExistsDirect(ref)
    Return True if a dataset is actually present in the Datastore.
export(*[, directory, filename, format, ...])
    Export datasets from the repository represented by this Butler.
get(datasetRefOrType, /[, dataId, ...])
    Retrieve a stored dataset.
getDeferred(datasetRefOrType, /[, dataId, ...])
    Create a DeferredDatasetHandle which can later retrieve a dataset, after an immediate registry lookup.
getDirect(ref, *[, parameters, storageClass])
    Retrieve a stored dataset.
getDirectDeferred(ref, *[, parameters, ...])
    Create a DeferredDatasetHandle which can later retrieve a dataset, from a resolved DatasetRef.
getURI(datasetRefOrType, /[, dataId, ...])
    Return the URI to the Dataset.
getURIs(datasetRefOrType, /[, dataId, ...])
    Return the URIs associated with the dataset.
get_known_repos()
    Retrieve the list of known repository labels.
get_repo_uri(label)
    Look up the label in a butler repository index.
import_(*[, directory, filename, format, ...])
    Import datasets into this repository that were exported from a different butler repository via export.
ingest(*datasets[, transfer, run, ...])
    Store and register one or more datasets that already exist on disk.
makeRepo(root[, config, dimensionConfig, ...])
    Create an empty data repository by adding a butler.yaml config to a repository root directory.
markInputUnused(ref)
    Indicate that a predicted input was not actually used when processing a Quantum.
pruneDatasets(refs, *[, disassociate, ...])
    Remove one or more datasets from a collection and/or storage.
put(obj, datasetRefOrType, /[, dataId, run])
    Store and register a dataset.
putDirect(obj, ref, /)
    Deprecated since version v26.0.
removeRuns(names[, unstore])
    Remove one or more RUN collections and the datasets within them.
retrieveArtifacts(refs, destination[, ...])
    Retrieve the artifacts associated with the supplied refs.
transaction()
    Context manager supporting Butler transactions.
transfer_from(source_butler, source_refs[, ...])
    Transfer datasets to this Butler from a run in another Butler.
validateConfiguration([logFailures, ...])
    Validate butler configuration.

Attributes Documentation

GENERATION: ClassVar[int] = 3
This is a Generation 3 Butler.

This attribute may be removed in the future, once the Generation 2 Butler interface has been fully retired; it should only be used in transitional code.
collections

The collections to search by default, in order (Sequence[str]).

This is an alias for self.registry.defaults.collections. It cannot be set directly in isolation, but all defaults may be changed together by assigning a new RegistryDefaults instance to self.registry.defaults.
dimensions

Structure managing all dimensions recognized by this data repository (DimensionUniverse).
run

Name of the run this butler writes outputs to by default (str or None).

This is an alias for self.registry.defaults.run. It cannot be set directly in isolation, but all defaults may be changed together by assigning a new RegistryDefaults instance to self.registry.defaults.
Methods Documentation

datasetExists(datasetRefOrType: DatasetRef | DatasetType | str, dataId: DataCoordinate | Mapping[str, Any] | None = None, *, collections: Any | None = None, **kwargs: Any) → bool

Return True if the Dataset is actually present in the Datastore.

Parameters:
datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof.
dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
**kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters.
 
Raises:
LookupError
    Raised if the dataset is not even present in the Registry.
ValueError
    Raised if a resolved DatasetRef was passed as an input, but it differs from the one found in the registry.
TypeError
    Raised if no collections were provided.
 
 
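A usage sketch, assuming a butler constructed as in the Examples above; the dataset type name, data ID values, and collection are placeholders:

    # True only if the dataset is present in the datastore, not merely
    # known to the registry.
    exists = butler.datasetExists(
        "calexp",
        dataId={"instrument": "HSC", "visit": 903334, "detector": 16},
        collections=["u/alice/DM-50000"],
    )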
datasetExistsDirect(ref: DatasetRef) → bool

Return True if a dataset is actually present in the Datastore.

Parameters:
ref : DatasetRef
    Resolved reference to a dataset.

Returns:
exists : bool
    Whether the dataset exists in the Datastore.
 
export(*, directory: str | None = None, filename: str | None = None, format: str | None = None, transfer: str | None = None) → Iterator[RepoExportContext]

Export datasets from the repository represented by this Butler.

This method is a context manager that returns a helper object (RepoExportContext) that is used to indicate what information from the repository should be exported.

Parameters:
directory : str, optional
    Directory dataset files should be written to if transfer is not None.
filename : str, optional
    Name for the file that will include database information associated with the exported datasets. If this is not an absolute path and directory is not None, it will be written to directory instead of the current working directory. Defaults to "export.{format}".
format : str, optional
    File format for the database information file. If None, the extension of filename will be used.
transfer : str, optional
    Transfer mode passed to Datastore.export.

Raises:
TypeError
    Raised if the set of arguments passed is inconsistent.
 
Examples

Typically the Registry.queryDataIds and Registry.queryDatasets methods are used to provide the iterables over data IDs and/or datasets to be exported (note that export takes keyword-only arguments):

    with butler.export(filename="exports.yaml") as export:
        # Export all flats, but none of the dimension element rows
        # (i.e. data ID information) associated with them.
        export.saveDatasets(butler.registry.queryDatasets("flat"),
                            elements=())
        # Export all datasets that start with "deepCoadd_" and all of
        # their associated data ID information.
        export.saveDatasets(butler.registry.queryDatasets("deepCoadd_*"))
get(datasetRefOrType: DatasetRef | DatasetType | str, /, dataId: DataCoordinate | Mapping[str, Any] | None = None, *, parameters: Dict[str, Any] | None = None, collections: Any | None = None, storageClass: StorageClass | str | None = None, **kwargs: Any) → Any

Retrieve a stored dataset.

Parameters:
datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof. If a resolved DatasetRef, the associated dataset is returned directly without additional querying.
dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
parameters : dict
    Additional StorageClass-defined options to control reading, typically used to efficiently read only a subset of the dataset.
collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
storageClass : StorageClass or str, optional
    The storage class to be used to override the Python type returned by this method. By default the returned type matches the dataset type definition for this dataset. Specifying a read StorageClass can force a different type to be returned. This type must be compatible with the original type.
**kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters.
 
Returns:
obj : object
    The dataset.

Raises:
LookupError
    Raised if no matching dataset exists in the Registry.
TypeError
    Raised if no collections were provided.
 
Notes

When looking up datasets in a CALIBRATION collection, this method requires that the given data ID include temporal dimensions beyond the dimensions of the dataset type itself, in order to find the dataset with the appropriate validity range. For example, a "bias" dataset with native dimensions {instrument, detector} could be fetched with a {instrument, detector, exposure} data ID, because exposure is a temporal dimension.
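A sketch of the calibration lookup described above (all names and data ID values are placeholders):

    # "exposure" is a temporal dimension, so it selects the bias whose
    # validity range contains that exposure's observation time.
    bias = butler.get(
        "bias",
        dataId={"instrument": "HSC", "detector": 16, "exposure": 903334},
        collections=["HSC/calib"],
    )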
getDeferred(datasetRefOrType: DatasetRef | DatasetType | str, /, dataId: DataCoordinate | Mapping[str, Any] | None = None, *, parameters: dict | None = None, collections: Any | None = None, storageClass: StorageClass | str | None = None, **kwargs: Any) → DeferredDatasetHandle

Create a DeferredDatasetHandle which can later retrieve a dataset, after an immediate registry lookup.

Parameters:
datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof.
dataId : dict or DataCoordinate, optional
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
parameters : dict
    Additional StorageClass-defined options to control reading, typically used to efficiently read only a subset of the dataset.
collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
storageClass : StorageClass or str, optional
    The storage class to be used to override the Python type returned by this method. By default the returned type matches the dataset type definition for this dataset. Specifying a read StorageClass can force a different type to be returned. This type must be compatible with the original type.
**kwargs
    Additional keyword arguments used to augment or construct a DataId. See DataId parameters.
 
Returns:
obj : DeferredDatasetHandle
    A handle which can be used to retrieve a dataset at a later time.

Raises:
LookupError
    Raised if no matching dataset exists in the Registry (and allowUnresolved is False).
ValueError
    Raised if a resolved DatasetRef was passed as an input, but it differs from the one found in the registry.
TypeError
    Raised if no collections were provided.
 
 
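A sketch of deferred retrieval (dataset type and data ID values are placeholders):

    handle = butler.getDeferred(
        "calexp",
        dataId={"instrument": "HSC", "visit": 903334, "detector": 16},
    )
    # The registry lookup has already happened; the I/O occurs here.
    calexp = handle.get()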
getDirect(ref: DatasetRef, *, parameters: Dict[str, Any] | None = None, storageClass: StorageClass | str | None = None) → Any

Retrieve a stored dataset.

Parameters:
ref : DatasetRef
    Resolved reference to an already stored dataset.
parameters : dict
    Additional StorageClass-defined options to control reading, typically used to efficiently read only a subset of the dataset.
storageClass : StorageClass or str, optional
    The storage class to be used to override the Python type returned by this method. By default the returned type matches the dataset type definition for this dataset. Specifying a read StorageClass can force a different type to be returned. This type must be compatible with the original type.
 
Returns:
obj : object
    The dataset.

Deprecated since version v26.0: Butler.get() now behaves like Butler.getDirect() when given a DatasetRef. Please use Butler.get(). Will be removed after v27.0.
 
getDirectDeferred(ref: DatasetRef, *, parameters: dict | None = None, storageClass: StorageClass | str | None = None) → DeferredDatasetHandle

Create a DeferredDatasetHandle which can later retrieve a dataset, from a resolved DatasetRef.

Parameters:
ref : DatasetRef
    Resolved reference to an already stored dataset.
parameters : dict
    Additional StorageClass-defined options to control reading, typically used to efficiently read only a subset of the dataset.
storageClass : StorageClass or str, optional
    The storage class to be used to override the Python type returned by this method. By default the returned type matches the dataset type definition for this dataset. Specifying a read StorageClass can force a different type to be returned. This type must be compatible with the original type.
 
Returns:
obj : DeferredDatasetHandle
    A handle which can be used to retrieve a dataset at a later time.

Raises:
AmbiguousDatasetError
    Raised if ref.id is None, i.e. the reference is unresolved.

Deprecated since version v26.0: Butler.getDeferred() now behaves like getDirectDeferred() when given a DatasetRef. Please use Butler.getDeferred(). Will be removed after v27.0.
getURI(datasetRefOrType: DatasetRef | DatasetType | str, /, dataId: DataCoordinate | Mapping[str, Any] | None = None, *, predict: bool = False, collections: Any | None = None, run: str | None = None, **kwargs: Any) → ResourcePath

Return the URI to the Dataset.

Parameters:
datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof.
dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
predict : bool
    If True, allow URIs to be returned of datasets that have not been written.
collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
run : str, optional
    Run to use for predictions, overriding self.run.
**kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters.
 
Returns:
uri : lsst.resources.ResourcePath
    URI pointing to the Dataset within the datastore. If the Dataset does not exist in the datastore, and if predict is True, the URI will be a prediction and will include a URI fragment "#predicted". If the datastore does not have entities that relate well to the concept of a URI the returned URI string will be descriptive. The returned URI is not guaranteed to be obtainable.

Raises:
LookupError
    A URI has been requested for a dataset that does not exist and guessing is not allowed.
ValueError
    Raised if a resolved DatasetRef was passed as an input, but it differs from the one found in the registry.
TypeError
    Raised if no collections were provided.
RuntimeError
    Raised if a URI is requested for a dataset that consists of multiple artifacts.
 
 
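A sketch of looking up a dataset's location without reading it (names and values are placeholders):

    uri = butler.getURI(
        "calexp",
        dataId={"instrument": "HSC", "visit": 903334, "detector": 16},
    )
    # uri is an lsst.resources.ResourcePath; with predict=True an
    # unwritten dataset would yield a URI with a "#predicted" fragment.
    print(uri.geturl())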
getURIs(datasetRefOrType: DatasetRef | DatasetType | str, /, dataId: DataCoordinate | Mapping[str, Any] | None = None, *, predict: bool = False, collections: Any | None = None, run: str | None = None, **kwargs: Any) → DatasetRefURIs

Return the URIs associated with the dataset.

Parameters:
datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef the dataId should be None. Otherwise the DatasetType or name thereof.
dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the first argument.
predict : bool
    If True, allow URIs to be returned of datasets that have not been written.
collections : Any, optional
    Collections to be searched, overriding self.collections. Can be any of the types supported by the collections argument to butler construction.
run : str, optional
    Run to use for predictions, overriding self.run.
**kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters.
 
Returns:
uris : DatasetRefURIs
    The URI to the primary artifact associated with this dataset (if the dataset was disassembled within the datastore this may be None), and the URIs to any components associated with the dataset artifact (can be empty if there are no components).
 
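A sketch, assuming the returned DatasetRefURIs can be unpacked into the primary URI and a mapping of component URIs (names and values are placeholders):

    primary, components = butler.getURIs(
        "calexp",
        dataId={"instrument": "HSC", "visit": 903334, "detector": 16},
    )
    for component, component_uri in components.items():
        print(component, component_uri)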
classmethod get_known_repos() → Set[str]

Retrieve the list of known repository labels.

Notes

See ButlerRepoIndex for details on how the information is discovered.
classmethod get_repo_uri(label: str) → ResourcePath

Look up the label in a butler repository index.

Parameters:
label : str
    Label of the Butler repository to look up.

Returns:
uri : lsst.resources.ResourcePath
    URI to the Butler repository associated with the given label.

Raises:
KeyError
    Raised if the label is not found in the index, or if an index can not be found at all.

Notes

See ButlerRepoIndex for details on how the information is discovered.
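A sketch of index lookup; the label "main" is hypothetical and depends on the repository index configured for your site:

    from lsst.daf.butler import Butler

    # List every label known to the configured repository index, then
    # resolve one label to its URI (raises KeyError if unknown).
    for label in Butler.get_known_repos():
        print(label, Butler.get_repo_uri(label))
    uri = Butler.get_repo_uri("main")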
import_(*, directory: str | ParseResult | ResourcePath | Path | None = None, filename: str | ParseResult | ResourcePath | Path | TextIO | None = None, format: str | None = None, transfer: str | None = None, skip_dimensions: Set | None = None, idGenerationMode: DatasetIdGenEnum = DatasetIdGenEnum.UNIQUE, reuseIds: bool = False) → None

Import datasets into this repository that were exported from a different butler repository via export.

Parameters:
directory : ResourcePathExpression, optional
    Directory containing dataset files to import from. If None, filename and all dataset file paths specified therein must be absolute.
filename : ResourcePathExpression or TextIO
    A stream or name of file that contains database information associated with the exported datasets, typically generated by export. If this is a string (name) or ResourcePath and is not an absolute path, it will first be looked for relative to directory and if not found there it will be looked for in the current working directory. Defaults to "export.{format}".
format : str, optional
    File format for filename. If None, the extension of filename will be used.
transfer : str, optional
    Transfer mode passed to ingest.
skip_dimensions : set, optional
    Names of dimensions that should be skipped and not imported.
idGenerationMode : DatasetIdGenEnum, optional
    Specifies option for generating dataset IDs when IDs are not provided or their type does not match backend type. By default unique IDs are generated for each inserted dataset.
reuseIds : bool, optional
    If True then forces re-use of imported dataset IDs for integer IDs which are normally generated as auto-incremented; an exception will be raised if imported IDs clash with existing ones. This option has no effect on the use of globally-unique IDs, which are always re-used (or generated if integer IDs are being imported).
 
Raises:
TypeError
    Raised if the set of arguments passed is inconsistent, or if the butler is read-only.
 
 
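A sketch of re-importing a previously exported tree (the directory and file name are placeholders matching an earlier export call):

    butler.import_(
        directory="/path/to/exported/files",
        filename="exports.yaml",
        transfer="copy",
    )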
ingest(*datasets: FileDataset, transfer: str | None = 'auto', run: str | None = None, idGenerationMode: DatasetIdGenEnum = DatasetIdGenEnum.UNIQUE, record_validation_info: bool = True) → None

Store and register one or more datasets that already exist on disk.

Parameters:
datasets : FileDataset
    Each positional argument is a struct containing information about a file to be ingested, including its URI (either absolute or relative to the datastore root, if applicable), a DatasetRef, and optionally a formatter class or its fully-qualified string name. If a formatter is not provided, the formatter that would be used for put is assumed. On successful return, all FileDataset.ref attributes will have their DatasetRef.id attribute populated and all FileDataset.formatter attributes will be set to the formatter class used. FileDataset.path attributes may be modified to put paths in whatever the datastore considers a standardized form.
transfer : str, optional
    If not None, must be one of 'auto', 'move', 'copy', 'direct', 'split', 'hardlink', 'relsymlink' or 'symlink', indicating how to transfer the file.
run : str, optional
    The name of the run ingested datasets should be added to, overriding self.run.
idGenerationMode : DatasetIdGenEnum, optional
    Specifies option for generating dataset IDs. By default unique IDs are generated for each inserted dataset.
record_validation_info : bool, optional
    If True, the default, the datastore can record validation information associated with the file. If False the datastore will not attempt to track any information such as checksums or file sizes. This can be useful if such information is tracked in an external system or if the file is to be compressed in place. It is up to the datastore whether this parameter is relevant.
 
Raises:
TypeError
    Raised if the butler is read-only or if no run was provided.
NotImplementedError
    Raised if the Datastore does not support the given transfer mode.
DatasetTypeNotSupportedError
    Raised if one or more files to be ingested have a dataset type that is not supported by the Datastore.
FileNotFoundError
    Raised if one of the given files does not exist.
FileExistsError
    Raised if transfer is not None but the (internal) location the file would be moved to is already occupied.
 
Notes

This operation is not fully exception safe: if a database operation fails, the given FileDataset instances may be only partially updated.

It is atomic in terms of database operations (they will either all succeed or all fail) providing the database engine implements transactions correctly. It will attempt to be atomic in terms of filesystem operations as well, but this cannot be implemented rigorously for most datastores.
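A sketch, assuming ref is a DatasetRef whose dataset type and data ID describe the file being ingested; the path and run are placeholders:

    from lsst.daf.butler import FileDataset

    # Each FileDataset pairs an on-disk artifact with its ref(s);
    # "symlink" transfer leaves the original file in place.
    dataset = FileDataset(path="/data/raw/HSC-0903334-016.fits", refs=[ref])
    butler.ingest(dataset, transfer="symlink", run="HSC/raw/all")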
static makeRepo(root: str | ParseResult | ResourcePath | Path, config: Config | str | None = None, dimensionConfig: Config | str | None = None, standalone: bool = False, searchPaths: List[str] | None = None, forceConfigRoot: bool = True, outfile: str | ParseResult | ResourcePath | Path | None = None, overwrite: bool = False) → Config

Create an empty data repository by adding a butler.yaml config to a repository root directory.

Parameters:
root : lsst.resources.ResourcePathExpression
    Path or URI to the root location of the new repository. Will be created if it does not exist.
config : Config or str, optional
    Configuration to write to the repository, after setting any root-dependent Registry or Datastore config options. Can not be a ButlerConfig or a ConfigSubset. If None, default configuration will be used. Root-dependent config options specified in this config are overwritten if forceConfigRoot is True.
dimensionConfig : Config or str, optional
    Configuration for dimensions, will be used to initialize registry database.
standalone : bool
    If True, write all expanded defaults, not just customized or repository-specific settings. This (mostly) decouples the repository from the default configuration, insulating it from changes to the defaults (which may be good or bad, depending on the nature of the changes). Future additions to the defaults will still be picked up when initializing Butlers to repos created with standalone=True.
searchPaths : list of str, optional
    Directory paths to search when calculating the full butler configuration.
forceConfigRoot : bool, optional
    If False, any values present in the supplied config that would normally be reset are not overridden and will appear directly in the output config. This allows non-standard overrides of the root directory for a datastore or registry to be given. If this parameter is True the values for root will be forced into the resulting config if appropriate.
outfile : lsst.resources.ResourcePathExpression, optional
    If not None, the output configuration will be written to this location rather than into the repository itself. Can be a URI string. Can refer to a directory that will be used to write butler.yaml.
overwrite : bool, optional
    Create a new configuration file even if one already exists in the specified output location. Default is to raise an exception.
 
Returns:
config : Config
    The updated Config instance written to the repo.

Raises:
ValueError
    Raised if a ButlerConfig or ConfigSubset is passed instead of a regular Config (as these subclasses would make it impossible to support standalone=False).
FileExistsError
    Raised if the output config file already exists.
os.error
    Raised if the directory does not exist, exists but is not a directory, or cannot be created.
 
Notes

Note that when standalone=False (the default), the configuration search path (see ConfigSubset.defaultSearchPaths) that was used to construct the repository should also be used to construct any Butlers to avoid configuration inconsistencies.
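A sketch of creating and then opening a new repository (the path is a placeholder):

    from lsst.daf.butler import Butler

    # Writes a butler.yaml (plus registry and datastore defaults)
    # under the given root, then opens a writeable butler on it.
    Butler.makeRepo("/path/to/new/repo")
    butler = Butler("/path/to/new/repo", writeable=True)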
markInputUnused(ref: DatasetRef) → None

Indicate that a predicted input was not actually used when processing a Quantum.

Parameters:
ref : DatasetRef
    Reference to the unused dataset.

Notes

By default, a dataset is considered "actually used" if it is accessed via getDirect or a handle to it is obtained via getDirectDeferred (even if the handle is not used). This method must be called after one of those in order to remove the dataset from the actual input list.

This method does nothing for butlers that do not store provenance information (which is the default implementation provided by the base class).
pruneDatasets(refs: Iterable[DatasetRef], *, disassociate: bool = True, unstore: bool = False, tags: Iterable[str] = (), purge: bool = False) → None

Remove one or more datasets from a collection and/or storage.

Parameters:
refs : Iterable of DatasetRef
    Datasets to prune. These must be "resolved" references (not just a DatasetType and data ID).
disassociate : bool, optional
    Disassociate pruned datasets from tags, or from all collections if purge=True.
unstore : bool, optional
    If True (False is default) remove these datasets from all datastores known to this butler. Note that this will make it impossible to retrieve these datasets even via other collections. Datasets that are already not stored are ignored by this option.
tags : Iterable[str], optional
    TAGGED collections to disassociate the datasets from. Ignored if disassociate is False or purge is True.
purge : bool, optional
    If True (False is default), completely remove the dataset from the Registry. To prevent accidental deletions, purge may only be True if all of the following conditions are met: disassociate is True, unstore is True, and tags is empty. This mode may remove provenance information from datasets other than those provided, and should be used with extreme care.
 
Raises:
TypeError
    Raised if the butler is read-only, if no collection was provided, or the conditions for purge=True were not met.
 
 
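A sketch that deletes stored artifacts while keeping registry entries (dataset type and collection are placeholders):

    refs = list(butler.registry.queryDatasets(
        "calexp", collections=["u/alice/DM-50000"]))
    # unstore=True removes the artifacts from every datastore; with
    # disassociate=False the registry entries are left in place.
    butler.pruneDatasets(refs, disassociate=False, unstore=True)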
put(obj: Any, datasetRefOrType: DatasetRef | DatasetType | str, /, dataId: DataCoordinate | Mapping[str, Any] | None = None, *, run: str | None = None, **kwargs: Any) → DatasetRef

Store and register a dataset.

Parameters:
obj : object
    The dataset.
datasetRefOrType : DatasetRef, DatasetType, or str
    When DatasetRef is provided, dataId should be None. Otherwise the DatasetType or name thereof. If a fully resolved DatasetRef is given the run and ID are used directly.
dataId : dict or DataCoordinate
    A dict of Dimension link name, value pairs that label the DatasetRef within a Collection. When None, a DatasetRef should be provided as the second argument.
run : str, optional
    The name of the run the dataset should be added to, overriding self.run. Not used if a resolved DatasetRef is provided.
**kwargs
    Additional keyword arguments used to augment or construct a DataCoordinate. See DataCoordinate.standardize parameters. Not used if a resolved DatasetRef is provided.
 
Returns:
ref : DatasetRef
    A reference to the stored dataset, updated with the correct id if given.

Raises:
TypeError
    Raised if the butler is read-only or if no run has been provided.
 
 
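A sketch, assuming exposure is an in-memory object compatible with the dataset type's storage class (names and values are placeholders):

    ref = butler.put(
        exposure,
        "calexp",
        dataId={"instrument": "HSC", "visit": 903334, "detector": 16},
        run="u/alice/DM-50000/a",
    )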
putDirect(obj: Any, ref: DatasetRef, /) → DatasetRef

Deprecated since version v26.0: Butler.put() now behaves like Butler.putDirect() when given a DatasetRef. Please use Butler.put(). Be aware that you may need to adjust your usage if you were relying on the run parameter to determine the run. Will be removed after v27.0.
removeRuns(names: Iterable[str], unstore: bool = True) → None

Remove one or more RUN collections and the datasets within them.

Parameters:
names : Iterable[str]
    The names of the collections to remove.
unstore : bool, optional
    If True (default), delete datasets from all datastores in which they are present, and attempt to roll back the registry deletions if datastore deletions fail (which may not always be possible). If False, datastore records for these datasets are still removed, but any artifacts (e.g. files) will not be.

Raises:
TypeError
    Raised if one or more collections are not of type RUN.
 
 
retrieveArtifacts(refs: Iterable[DatasetRef], destination: str | ParseResult | ResourcePath | Path, transfer: str = 'auto', preserve_path: bool = True, overwrite: bool = False) → List[ResourcePath]

Retrieve the artifacts associated with the supplied refs.

Parameters:
refs : iterable of DatasetRef
    The datasets for which artifacts are to be retrieved. A single ref can result in multiple artifacts. The refs must be resolved.
destination : lsst.resources.ResourcePath or str
    Location to write the artifacts.
transfer : str, optional
    Method to use to transfer the artifacts. Must be one of the options supported by transfer_from(). "move" is not allowed.
preserve_path : bool, optional
    If True the full path of the artifact within the datastore is preserved. If False the final file component of the path is used.
overwrite : bool, optional
    If True allow transfers to overwrite existing files at the destination.
 
Returns:
targets : list of lsst.resources.ResourcePath
    URIs of file artifacts in destination location. Order is not preserved.
Notes

For non-file datastores the artifacts written to the destination may not match the representation inside the datastore. For example a hierarchical data structure in a NoSQL database may well be stored as a JSON file.
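A sketch of copying the artifacts for some resolved refs into a local directory (the destination is a placeholder):

    # preserve_path=False flattens the datastore's internal layout so
    # only the final file names appear under the destination.
    paths = butler.retrieveArtifacts(
        refs, destination="/tmp/artifacts", preserve_path=False)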
transaction() → Iterator[None]

Context manager supporting Butler transactions.

Transactions can be nested.
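A sketch of grouping two writes so they succeed or fail together (objects and data IDs are placeholders):

    with butler.transaction():
        butler.put(obj1, "calexp", dataId=data_id1)
        butler.put(obj2, "calexp", dataId=data_id2)
    # If the second put raises, registry changes made by the first
    # are rolled back as well.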
transfer_from(source_butler: LimitedButler, source_refs: Iterable[DatasetRef], transfer: str = 'auto', skip_missing: bool = True, register_dataset_types: bool = False, transfer_dimensions: bool = False) → Collection[DatasetRef]

Transfer datasets to this Butler from a run in another Butler.

Parameters:
source_butler : LimitedButler
    Butler from which the datasets are to be transferred. If data IDs in source_refs are not expanded then this has to be a full Butler whose registry will be used to expand data IDs.
source_refs : iterable of DatasetRef
    Datasets defined in the source butler that should be transferred to this butler.
transfer : str, optional
    Transfer mode passed to the datastore when transferring the file artifacts.
skip_missing : bool
    If True, datasets with no datastore artifact associated with them are not transferred. If False a registry entry will be created even if no datastore record is created (and so will look equivalent to the dataset being unstored).
register_dataset_types : bool
    If True any missing dataset types are registered. Otherwise an exception is raised.
transfer_dimensions : bool, optional
    If True, dimension record data associated with the new datasets will be transferred.
 
Returns:
refs : list of DatasetRef
    The refs added to this Butler.
Notes

The datastore artifact has to exist for a transfer to be made, but non-existence is not an error.

Datasets that already exist in this run will be skipped.

The datasets are imported as part of a transaction, although dataset types are registered before the transaction is started. This means that it is possible for a dataset type to be registered even though transfer has failed.
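A sketch of copying datasets between repositories (paths, dataset type, and collection are placeholders):

    from lsst.daf.butler import Butler

    source = Butler("/path/to/other/repo")
    refs = source.registry.queryDatasets("raw", collections=["HSC/raw/all"])
    transferred = butler.transfer_from(
        source, refs, transfer="copy", register_dataset_types=True)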
validateConfiguration(logFailures: bool = False, datasetTypeNames: Iterable[str] | None = None, ignore: Iterable[str] | None = None) → None

Validate butler configuration.

Checks that each DatasetType can be stored in the Datastore.

Parameters:
logFailures : bool, optional
    If True, output a log message for every validation error detected.
datasetTypeNames : iterable of str, optional
    The DatasetType names that should be checked. This allows only a subset to be selected.
ignore : iterable of str, optional
    Names of DatasetTypes to skip over. This can be used to skip known problems. If a named DatasetType corresponds to a composite, all components of that DatasetType will also be ignored.
 
Raises:
ButlerValidationError
    Raised if there is some inconsistency with how this Butler is configured.
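A usage sketch (the ignored dataset type name is a placeholder):

    # Log every mismatch between dataset type definitions and the
    # datastore configuration instead of stopping at the first.
    butler.validateConfiguration(logFailures=True, ignore=["raw"])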