FitsExposureFormatter

class lsst.obs.base.formatters.fitsExposure.FitsExposureFormatter(fileDescriptor: FileDescriptor, dataId: Optional[DataCoordinate] = None, writeParameters: Optional[Dict[str, Any]] = None, writeRecipes: Optional[Dict[str, Any]] = None)

Bases: lsst.daf.butler.Formatter

Interface for reading and writing Exposures to and from FITS files.

This Formatter supports write recipes.

Each FitsExposureFormatter recipe for FITS compression should define image, mask and variance entries, each of which may contain compression and scaling entries. Defaults will be provided for any missing elements under compression and scaling.

The allowed entries under compression are:

  • algorithm (str): compression algorithm to use
  • rows (int): number of rows per tile (0 = entire dimension)
  • columns (int): number of columns per tile (0 = entire dimension)
  • quantizeLevel (float): cfitsio quantization level

The allowed entries under scaling are:

  • algorithm (str): scaling algorithm to use
  • bitpix (int): bits per pixel (0,8,16,32,64,-32,-64)
  • fuzz (bool): whether to fuzz the values when quantizing floating-point values
  • seed (int): seed for random number generator when fuzzing
  • maskPlanes (list of str): mask planes to ignore when doing statistics
  • quantizeLevel (float): divisor of the standard deviation for STDEV_* scaling
  • quantizePad (float): number of stdev to allow on the low side (for STDEV_POSITIVE/NEGATIVE)
  • bscale (float): manually specified BSCALE (for MANUAL scaling)
  • bzero (float): manually specified BZERO (for MANUAL scaling)

A very simple example YAML recipe:

lsst.obs.base.formatters.fitsExposure.FitsExposureFormatter:
  default:
    image: &default
      compression:
        algorithm: GZIP_SHUFFLE
    mask: *default
    variance: *default
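
The same structure can be inspected from Python. This is a minimal sketch, assuming PyYAML is available; note that the YAML anchor makes image, mask and variance share a single definition.

import yaml  # PyYAML, assumed to be installed

recipe_yaml = """
lsst.obs.base.formatters.fitsExposure.FitsExposureFormatter:
  default:
    image: &default
      compression:
        algorithm: GZIP_SHUFFLE
    mask: *default
    variance: *default
"""
config = yaml.safe_load(recipe_yaml)
recipes = config["lsst.obs.base.formatters.fitsExposure.FitsExposureFormatter"]
# recipes["default"]["image"] == {"compression": {"algorithm": "GZIP_SHUFFLE"}}
# The *default aliases expand so mask and variance refer to the same mapping.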

Attributes Summary

dataId DataId associated with this formatter (DataCoordinate)
extension
fileDescriptor FileDescriptor associated with this formatter (FileDescriptor, read-only)
metadata The metadata read from this file.
supportedExtensions
supportedWriteParameters
unsupportedParameters
writeParameters Parameters to use when writing out datasets.
writeRecipes Detailed write Recipes indexed by recipe name.

Methods Summary

fromBytes(serializedDataset, component) Reads serialized data into a Dataset or its component.
getImageCompressionSettings(recipeName) Retrieve the relevant compression settings for this recipe.
makeUpdatedLocation(location) Return a new Location instance updated with this formatter’s extension.
name() Returns the fully qualified name of the formatter.
predictPath() Return the path that would be returned by write, without actually writing.
read([component]) Read data from a file.
readComponent(component[, parameters]) Read a component held by the Exposure.
readFull([parameters]) Read the full Exposure object.
readMetadata() Read all header metadata directly into a PropertyList.
segregateParameters([parameters]) Segregate the supplied parameters into those understood by the formatter and those not understood by the formatter.
stripMetadata() Remove metadata entries that are parsed into components.
toBytes(inMemoryDataset) Serialize the Dataset to bytes based on formatter.
validateExtension(location) Check that the provided location refers to a file extension that is understood by this formatter.
validateWriteRecipes(recipes) Validate supplied recipes for this formatter.
write(inMemoryDataset) Write a Python object to a file.

Attributes Documentation

dataId

DataId associated with this formatter (DataCoordinate)

extension = '.fits'
fileDescriptor

FileDescriptor associated with this formatter (FileDescriptor, read-only)

metadata

The metadata read from this file. It will be stripped as components are extracted from it (lsst.daf.base.PropertyList).

supportedExtensions = frozenset({'.fits.fz', '.fits', '.fits.gz'})
supportedWriteParameters = frozenset({'recipe'})
unsupportedParameters = frozenset()
writeParameters

Parameters to use when writing out datasets.

writeRecipes

Detailed write Recipes indexed by recipe name.

Methods Documentation

fromBytes(serializedDataset: bytes, component: Optional[str] = None) → object

Reads serialized data into a Dataset or its component.

Parameters:
serializedDataset : bytes

Bytes object to unserialize.

component : str, optional

Component to read from the Dataset. Only used if the StorageClass for reading differed from the StorageClass used to write the file.

Returns:
inMemoryDataset : object

The requested data as a Python object. The type of object is controlled by the specific formatter.

getImageCompressionSettings(recipeName)

Retrieve the relevant compression settings for this recipe.

Parameters:
recipeName : str

Label associated with the collection of compression parameters to select.

Returns:
settings : dict

The selected settings.
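
A hedged usage sketch: formatter stands for an already-constructed FitsExposureFormatter (in normal use the butler datastore creates it), and "lossless" is an assumed recipe name present in its writeRecipes.

settings = formatter.getImageCompressionSettings("lossless")
# The returned dict is assumed to follow the recipe layout shown above, e.g.:
algorithm = settings["image"]["compression"]["algorithm"]  # e.g. "GZIP_SHUFFLE"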

makeUpdatedLocation(location: lsst.daf.butler.core.location.Location) → lsst.daf.butler.core.location.Location

Return a new Location instance updated with this formatter’s extension.

Parameters:
location : Location

The location to update.

Returns:
updated : Location

A new Location with a new file extension applied.

Raises:
NotImplementedError

Raised if there is no extension attribute associated with this formatter.

Notes

This method is available to all Formatters but might not be implemented by all formatters. It requires that a formatter set an extension attribute containing the file extension used when writing files. If extension is None the supplied file will not be updated. Not all formatters write files so this is not defined in the base class.

classmethod name() → str

Returns the fully qualified name of the formatter.

Returns:
name : str

Fully-qualified name of formatter class.
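
The fully qualified name is also the top-level key used in the recipe YAML shown above. A minimal sketch:

from lsst.obs.base.formatters.fitsExposure import FitsExposureFormatter

FitsExposureFormatter.name()
# 'lsst.obs.base.formatters.fitsExposure.FitsExposureFormatter'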

predictPath() → str

Return the path that would be returned by write, without actually writing.

Uses the FileDescriptor associated with the instance.

Returns:
path : str

Path within datastore that would be associated with the location stored in this Formatter.

read(component=None)

Read data from a file.

Parameters:
component : str, optional

Component to read from the file. Only used if the StorageClass for reading differed from the StorageClass used to write the file.

Returns:
inMemoryDataset : object

The requested data as a Python object. The type of object is controlled by the specific formatter.

Raises:
ValueError

Raised if a component is requested but this file does not seem to be a concrete composite.

KeyError

Raised when parameters passed with fileDescriptor are not supported.
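
A hedged sketch of the calling convention; formatter is assumed to be an instance the butler datastore created for an existing FITS file.

exposure = formatter.read()  # full Exposure when the read and write storage classes match
# When the butler requests a single component, the datastore passes its name here,
# e.g. formatter.read(component="wcs"); unsupported parameters raise KeyError.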

readComponent(component, parameters=None)

Read a component held by the Exposure.

Parameters:
component : str, optional

Component to read from the file.

parameters : dict, optional

If specified, a dictionary of slicing parameters that overrides those in fileDescriptor.

Returns:
obj : component-dependent

In-memory component object.

Raises:
KeyError

Raised if the requested component cannot be handled.
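
A brief sketch; the component names below ("wcs", "psf") are assumed Exposure components, and formatter is again an assumed instance.

wcs = formatter.readComponent("wcs")
psf = formatter.readComponent("psf")
# An unrecognized name, e.g. formatter.readComponent("nonexistent"), raises KeyError.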

readFull(parameters=None)

Read the full Exposure object.

Parameters:
parameters : dict, optional

If specified, a dictionary of slicing parameters that overrides those in fileDescriptor.

Returns:
exposure : Exposure

Complete in-memory exposure.
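
A hedged sketch of reading a cutout by overriding the slicing parameters; the "bbox" parameter name is an assumption based on the Exposure storage class.

from lsst.geom import Box2I, Extent2I, Point2I

bbox = Box2I(Point2I(0, 0), Extent2I(100, 100))         # 100x100 pixel region
cutout = formatter.readFull(parameters={"bbox": bbox})  # overrides fileDescriptor.parameters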

readMetadata()

Read all header metadata directly into a PropertyList.

Returns:
metadata : PropertyList

Header metadata.
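
A short sketch; EXPTIME is just a common FITS keyword used for illustration.

md = formatter.readMetadata()       # lsst.daf.base.PropertyList
if md.exists("EXPTIME"):
    print(md.getScalar("EXPTIME"))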

segregateParameters(parameters: Optional[Dict[str, Any]] = None) → Tuple[Dict[str, Any], Dict[str, Any]]

Segregate the supplied parameters into those understood by the formatter and those not understood by the formatter.

Any unsupported parameters are assumed to be usable by associated assemblers.

Parameters:
parameters : dict, optional

Parameters with values that have been supplied by the caller and which might be relevant for the formatter. If None, parameters will be read from the registered FileDescriptor.

Returns:
supported : dict

Those parameters supported by this formatter.

unsupported : dict

Those parameters not supported by this formatter.
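
A hedged sketch; "bbox" stands for a parameter the formatter understands and "hypotheticalAssemblerOption" for one it does not (both names are illustrative only).

from lsst.geom import Box2I, Extent2I, Point2I

params = {"bbox": Box2I(Point2I(0, 0), Extent2I(100, 100)), "hypotheticalAssemblerOption": 42}
supported, unsupported = formatter.segregateParameters(params)
# supported   -> parameters this formatter applies itself
# unsupported -> parameters left for associated assemblers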

stripMetadata()

Remove metadata entries that are parsed into components.

This is only called when just the metadata is requested; stripping entries there forces code that wants other components to ask for those components directly rather than trying to extract them from the metadata manually, which is fragile. This behavior is an intentional change from Gen2.

Parameters:
metadata : PropertyList

Header metadata, to be modified in-place.

toBytes(inMemoryDataset: Any) → bytes

Serialize the Dataset to bytes based on formatter.

Parameters:
inMemoryDataset : object

The Python object to serialize.

Returns:
serializedDataset : bytes

Bytes representing the serialized dataset.
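
A hedged round-trip sketch pairing toBytes with fromBytes; whether this particular formatter supports the byte-level interface is inherited from lsst.daf.butler.Formatter, so treat this purely as the calling convention.

data = formatter.toBytes(exposure)    # "exposure" is an existing lsst.afw.image Exposure
restored = formatter.fromBytes(data)  # may raise NotImplementedError if unsupported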

classmethod validateExtension(location: lsst.daf.butler.core.location.Location) → None

Check that the provided location refers to a file extension that is understood by this formatter.

Parameters:
location : Location

Location from which to extract a file extension.

Raises:
NotImplementedError

Raised if file extensions are a concept not understood by this formatter.

ValueError

Raised if the formatter does not understand this extension.

Notes

This method is available to all Formatters but might not be implemented by all formatters. It requires that a formatter set an extension attribute containing the file extension used when writing files. If extension is None only the set of supported extensions will be examined.
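
A hedged sketch; the Location constructor arguments used here (datastore root plus relative path) are an assumption, since locations are normally produced by the datastore.

from lsst.daf.butler import Location
from lsst.obs.base.formatters.fitsExposure import FitsExposureFormatter

FitsExposureFormatter.validateExtension(Location("/repo/datastore", "raw/exp_0001.fits"))
# A non-FITS extension would raise ValueError:
# FitsExposureFormatter.validateExtension(Location("/repo/datastore", "raw/exp_0001.hdf5"))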

classmethod validateWriteRecipes(recipes)

Validate supplied recipes for this formatter.

The recipes are supplemented with default values where appropriate.

TODO: replace this custom validation code with Cerberus (DM-11846)

Parameters:
recipes : dict

Recipes to validate. Can be an empty dict or None.

Returns:
validated : dict

Validated recipes. Returns what was given if there are no recipes listed.

Raises:
RuntimeError

Raised if validation fails.
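
A minimal sketch of validating a recipe dictionary indexed by recipe name; "lossless" is an illustrative name, and the returned dict gains default compression and scaling entries for anything left unspecified.

from lsst.obs.base.formatters.fitsExposure import FitsExposureFormatter

recipes = {
    "lossless": {
        "image": {"compression": {"algorithm": "GZIP_SHUFFLE"}},
        "mask": {"compression": {"algorithm": "GZIP_SHUFFLE"}},
        "variance": {"compression": {"algorithm": "GZIP_SHUFFLE"}},
    }
}
validated = FitsExposureFormatter.validateWriteRecipes(recipes)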

write(inMemoryDataset)

Write a Python object to a file.

Parameters:
inMemoryDataset : object

The Python object to store.

Returns:
path : str

The URI where the primary file is stored.
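
An end-to-end sketch, heavily hedged: formatters are normally constructed by the butler datastore, the FileDescriptor/Location/StorageClassFactory calls below reflect an assumed daf_butler usage pattern, and "exposure" stands for an existing lsst.afw.image Exposure.

from lsst.daf.butler import FileDescriptor, Location, StorageClassFactory
from lsst.obs.base.formatters.fitsExposure import FitsExposureFormatter

storageClass = StorageClassFactory().getStorageClass("ExposureF")  # assumes default storage classes are registered
location = Location("/repo/datastore", "calexp/calexp_0001")       # hypothetical datastore root and relative path
formatter = FitsExposureFormatter(FileDescriptor(location, storageClass))

print(formatter.predictPath())    # the path write() would use, ending in ".fits"
path = formatter.write(exposure)  # writes the FITS file and returns where the primary file is stored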