CfhtIsrTask

class lsst.obs.cfht.cfhtIsrTask.CfhtIsrTask(**kwargs)

Bases: lsst.ip.isr.isrTask.IsrTask

Attributes Summary

canMultiprocess

Methods Summary

adaptArgsAndRun(inputData, inputDataIds, …) Run task algorithm on in-memory data.
applyOverrides(config) A hook to allow a task to change the values of its config after the camera-specific overrides are loaded but before any command-line overrides are applied.
convertIntToFloat(exposure) Convert exposure image from uint16 to float.
darkCorrection(exposure, darkExposure[, invert]) Apply dark correction in place.
debugView(exposure, stepname) Utility function to examine ISR exposure at different stages.
doLinearize(detector) Check if linearization is needed for the detector cameraGeom.
emptyMetadata() Empty (clear) the metadata for this Task and all sub-Tasks.
ensureExposure(inputExp, camera, detectorNum) Ensure that the data returned by Butler is a fully constructed exposure.
flatContext(exp, flat[, dark]) Context manager that applies and removes flats and darks, if the task is configured to apply them.
flatCorrection(exposure, flatExposure[, invert]) !Apply flat correction in place.
getAllSchemaCatalogs() Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.
getDatasetTypes(config, configClass) Return dataset type descriptors defined in task configuration.
getFullMetadata() Get metadata for all tasks.
getFullName() Get the task name as a hierarchical name including parent task names.
getInitInputDatasetTypes(config) Return dataset type descriptors that can be used to retrieve the initInputs constructor argument.
getInitOutputDatasetTypes(config) Return dataset type descriptors that can be used to write the objects returned by getOutputDatasets.
getInitOutputDatasets() Return persistable outputs that are available immediately after the task has been constructed.
getInputDatasetTypes(config) Return input dataset type descriptors for this task.
getIsrExposure(dataRef, datasetType[, immediate]) Retrieve a calibration dataset for removing instrument signature.
getName() Get the name of the task.
getOutputDatasetTypes(config) Return output dataset type descriptors for this task.
getPerDatasetTypeDimensions(config) Return any Dimensions that are permitted to have different values for different DatasetTypes within the same quantum.
getPrerequisiteDatasetTypes(config) Return the local names of input dataset types that should be assumed to exist instead of constraining what data to process with this task.
getResourceConfig() Return resource configuration for this task.
getSchemaCatalogs() Get the schemas generated by this task.
getTaskDict() Get a dictionary of all tasks as a shallow copy.
makeDatasetType(dsConfig)
makeField(doc) Make a lsst.pex.config.ConfigurableField for this task.
makeSubtask(name, **keyArgs) Create a subtask as a new instance as the name attribute of this task.
maskAmplifier(ccdExposure, amp, defects) Identify bad amplifiers, saturated and suspect pixels.
maskAndInterpolateDefects(exposure, …) Mask and interpolate defects using mask plane “BAD”, in place.
maskAndInterpolateNan(exposure) Mask and interpolate NaNs using mask plane “UNMASKEDNAN”, in place.
maskDefect(exposure, defectBaseList) Mask defects using mask plane “BAD”, in place.
maskNan(exposure) Mask NaNs using mask plane “UNMASKEDNAN”, in place.
measureBackground(exposure[, IsrQaConfig]) Measure the image background in subgrids, for quality control purposes.
overscanCorrection(ccdExposure, amp) Apply overscan correction in place.
parseAndRun([args, config, log, doReturnResults]) Parse an argument list and run the command.
readIsrData(dataRef, rawExposure) Retrieve necessary frames for instrument signature removal.
roughZeroPoint(exposure) Set an approximate magnitude zero point for the exposure.
run(ccdExposure[, bias, linearizer, dark, …]) Perform instrument signature removal on an exposure.
runDataRef(sensorRef) Perform instrument signature removal on a ButlerDataRef of a Sensor.
runQuantum(quantum, butler) Execute PipelineTask algorithm on single quantum of data.
saturationDetection(exposure, amp) Detect saturated pixels and mask them using mask plane config.saturatedMaskName, in place.
saturationInterpolation(exposure) Interpolate over saturated pixels, in place.
saveStruct(struct, outputDataRefs, butler) Save data in butler.
setValidPolygonIntersect(ccdExposure, fpPolygon) Set the valid polygon as the intersection of fpPolygon and the ccd corners.
suspectDetection(exposure, amp) Detect suspect pixels and mask them using mask plane config.suspectMaskName, in place.
timer(name[, logLevel]) Context manager to log performance data for an arbitrary block of code.
updateVariance(ampExposure, amp[, overscanImage]) Set the variance plane using the amplifier gain and read noise
writeConfig(butler[, clobber, doBackup]) Write the configuration used for processing the data, or check that an existing one is equal to the new one if present.
writeMetadata(dataRef) Write the metadata produced from processing the data.
writePackageVersions(butler[, clobber, …]) Compare and write package versions.
writeSchemas(butler[, clobber, doBackup]) Write the schemas returned by lsst.pipe.base.Task.getAllSchemaCatalogs.

Attributes Documentation

canMultiprocess = True

Methods Documentation

adaptArgsAndRun(inputData, inputDataIds, outputDataIds, butler)

Run task algorithm on in-memory data.

This method is called by runQuantum to operate on in-memory input data and produce corresponding output in-memory data. It receives arguments that are dictionaries of input data and of input/output DataIds. Many simple tasks do not need to know DataIds, so the default implementation of this method calls the run method, passing the input data objects as keyword arguments. Most simple tasks only implement the run method; more complex tasks that need to know about output DataIds override this method instead.

All three arguments to this method are dictionaries with keys equal to the names of the configuration fields for the dataset types. If a dataset type is configured with its scalar field set to True, then only one dataset is expected on input or output for that dataset type, and the dictionary value is a single data object or DataId. Otherwise, if scalar is False (the default), the value is a list (even if it contains only one item).

The method returns a Struct instance with attributes matching the configuration fields for the output dataset types. Values stored in the returned struct are single objects if scalar is True, or lists of objects otherwise. If a task produces more than one object for some dataset type, then the data objects returned in the struct must match the corresponding DataIds in outputDataIds in count and order.

Parameters:
inputData : dict

Dictionary whose keys are the names of the configuration fields describing input dataset types and values are Python-domain data objects (or lists of objects) retrieved from data butler.

inputDataIds : dict

Dictionary whose keys are the names of the configuration fields describing input dataset types and values are DataIds (or lists of DataIds) that the task consumes for the corresponding dataset type. DataIds are guaranteed to match the data objects in inputData.

outputDataIds : dict

Dictionary whose keys are the names of the configuration fields describing output dataset types and values are DataIds (or lists of DataIds) that task is to produce for corresponding dataset type.

Returns:
struct : Struct

The standard convention is that this method should return a Struct instance containing all output data. Struct attribute names should correspond to the names of the configuration fields describing the task's output dataset types. If something different is returned, the saveStruct method has to be reimplemented accordingly.
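
The sketch below illustrates one way a subclass might override this method; the task, the configuration field names ("calexp", "catalog"), and the run signature are hypothetical, not part of this class:

from lsst.pipe.base import PipelineTask, Struct

class ExampleTask(PipelineTask):
    """Hypothetical task with scalar input "calexp" and scalar output "catalog"."""

    def adaptArgsAndRun(self, inputData, inputDataIds, outputDataIds, butler):
        # With scalar fields, each dictionary value is a single object or DataId.
        exposure = inputData["calexp"]
        outputId = outputDataIds["catalog"]
        self.log.info("Producing catalog for %s", outputId)
        results = self.run(exposure=exposure)
        # Attribute names must match the output dataset configuration fields.
        return Struct(catalog=results.catalog)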

classmethod applyOverrides(config)

A hook to allow a task to change the values of its config after the camera-specific overrides are loaded but before any command-line overrides are applied.

Parameters:
config : instance of task’s ConfigClass

Task configuration.

Notes

This is necessary in some cases because the camera-specific overrides may retarget subtasks, wiping out changes made in ConfigClass.setDefaults. See LSST Trac ticket #2282 for more discussion.

Warning

This is called by CmdLineTask.parseAndRun; other ways of constructing a config will not apply these overrides.
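
A minimal sketch of this hook in a subclass; doWrite is an existing IsrTask config field, but the subclass itself is hypothetical:

class ExampleCfhtIsrTask(CfhtIsrTask):
    @classmethod
    def applyOverrides(cls, config):
        # Runs after the camera-specific overrides but before command-line
        # overrides, so a retargeted or altered default can be restored here.
        config.doWrite = True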

convertIntToFloat(exposure)

Convert exposure image from uint16 to float.

If the exposure does not need to be converted, the input is immediately returned. For exposures that are converted to use floating point pixels, the variance is set to unity and the mask to zero.

Parameters:
exposure : lsst.afw.image.Exposure

The raw exposure to be converted.

Returns:
newexposure : lsst.afw.image.Exposure

The input exposure, converted to floating point pixels.

Raises:
RuntimeError

Raised if the exposure type cannot be converted to float.

darkCorrection(exposure, darkExposure, invert=False)

Apply dark correction in place.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process.

darkExposure : lsst.afw.image.Exposure

Dark exposure of the same size as exposure.

invert : Bool, optional

If True, re-add the dark to an already corrected image.

Raises:
RuntimeError

Raised if either exposure or darkExposure does not have its dark time defined.

See also

lsst.ip.isr.isrFunctions.darkCorrection
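
For orientation, a hedged sketch of the scaling this correction applies; the actual implementation lives in lsst.ip.isr.isrFunctions.darkCorrection and the exact call shown here is illustrative only:

# Dark frames are scaled by the ratio of dark times before subtraction.
expScale = exposure.getInfo().getVisitInfo().getDarkTime()
darkScale = darkExposure.getInfo().getVisitInfo().getDarkTime()
exposure.maskedImage.scaledMinus(expScale / darkScale, darkExposure.maskedImage)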

debugView(exposure, stepname)

Utility function to examine ISR exposure at different stages.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to view.

stepname : str

State of processing to view.

doLinearize(detector)

Check if linearization is needed for the detector cameraGeom.

Checks config.doLinearize and the linearity type of the first amplifier.

Parameters:
detector : lsst.afw.cameraGeom.Detector

Detector to get linearity type from.

Returns:
doLinearize : Bool

If True, linearization should be performed.

emptyMetadata()

Empty (clear) the metadata for this Task and all sub-Tasks.

ensureExposure(inputExp, camera, detectorNum)

Ensure that the data returned by Butler is a fully constructed exposure.

ISR requires exposure-level image data for historical reasons, so if we did not receive that from the Butler, construct it from what we have, modifying the input in place.

Parameters:
inputExp : lsst.afw.image.Exposure, lsst.afw.image.DecoratedImageU, or lsst.afw.image.ImageF

The input data structure obtained from Butler.

camera : lsst.afw.cameraGeom.Camera

The camera associated with the image. Used to find the appropriate detector.

detectorNum : int

The detector this exposure should match.

Returns:
inputExp : lsst.afw.image.Exposure

The re-constructed exposure, with appropriate detector parameters.

Raises:
TypeError

Raised if the input data cannot be used to construct an exposure.

flatContext(exp, flat, dark=None)

Context manager that applies and removes flats and darks, if the task is configured to apply them.

Parameters:
exp : lsst.afw.image.Exposure

Exposure to process.

flat : lsst.afw.image.Exposure

Flat exposure the same size as exp.

dark : lsst.afw.image.Exposure, optional

Dark exposure the same size as exp.

Yields:
exp : lsst.afw.image.Exposure

The flat and dark corrected exposure.
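
A usage sketch, assuming task is a constructed CfhtIsrTask and exp, flat, and dark are in-memory exposures of matching size; measureSomething stands in for any processing step:

with task.flatContext(exp, flat, dark=dark) as corrected:
    # corrected has the dark subtracted and the flat applied (if configured).
    measureSomething(corrected)
# On exit the corrections are undone, so exp is returned to its prior state.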

flatCorrection(exposure, flatExposure, invert=False)

Apply flat correction in place.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process.

flatExposure : lsst.afw.image.Exposure

Flat exposure of the same size as exposure.

invert : Bool, optional

If True, unflatten an already flattened image.

See also

lsst.ip.isr.isrFunctions.flatCorrection

getAllSchemaCatalogs()

Get schema catalogs for all tasks in the hierarchy, combining the results into a single dict.

Returns:
schemacatalogs : dict

Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for all tasks in the hierarchy, from the top-level task down through all subtasks.

Notes

This method may be called on any task in the hierarchy; it will return the same answer, regardless.

The default implementation should always suffice. If your subtask uses schemas, then override Task.getSchemaCatalogs, not this method.

classmethod getDatasetTypes(config, configClass)

Return dataset type descriptors defined in task configuration.

This method can be used by other methods that need to extract dataset types from task configuration (e.g. getInputDatasetTypes or sub-class methods).

Parameters:
config : Config

Configuration for this task. Typically datasets are defined in a task configuration.

configClass : type

Class of the configuration object which defines dataset type.

Returns:
Dictionary where the key is the name (arbitrary) of the output dataset and the value is the DatasetTypeDescriptor instance. The default implementation uses the configuration field name as the dictionary key. Returns an empty dict if the configuration has no fields with the specified configClass.

getFullMetadata()

Get metadata for all tasks.

Returns:
metadata : lsst.daf.base.PropertySet

The PropertySet keys are the full task name. Values are metadata for the top-level task and all subtasks, sub-subtasks, etc.

Notes

The returned metadata includes timing information (if @timer.timeMethod is used) and any metadata set by the task. The name of each item consists of the full task name with . replaced by :, followed by . and the name of the item, e.g.:

topLevelTaskName:subtaskName:subsubtaskName.itemName

Using : in the full task name disambiguates the rare situation that a task has a subtask and a metadata item with the same name.

getFullName()

Get the task name as a hierarchical name including parent task names.

Returns:
fullName : str

The full name consists of the name of the parent task and each subtask separated by periods. For example:

  • The full name of top-level task “top” is simply “top”.
  • The full name of subtask “sub” of top-level task “top” is “top.sub”.
  • The full name of subtask “sub2” of subtask “sub” of top-level task “top” is “top.sub.sub2”.
classmethod getInitInputDatasetTypes(config)

Return dataset type descriptors that can be used to retrieve the initInputs constructor argument.

Datasets used in initialization may not be associated with any Dimension (i.e. their data IDs must be empty dictionaries).

Default implementation finds all fields of type InitInputDatasetConfig in configuration (non-recursively) and uses them for constructing DatasetTypeDescriptor instances. The names of these fields are used as keys in the returned dictionary. Subclasses can override this behavior.

Parameters:
config : Config

Configuration for this task. Typically datasets are defined in a task configuration.

Returns:
Dictionary where the key is the name (arbitrary) of the input dataset and the value is the DatasetTypeDescriptor instance. The default implementation uses the configuration field name as the dictionary key. When the task requires no initialization inputs, an empty dict should be returned.

classmethod getInitOutputDatasetTypes(config)

Return dataset type descriptors that can be used to write the objects returned by getOutputDatasets.

Datasets used in initialization may not be associated with any Dimension (i.e. their data IDs must be empty dictionaries).

Default implementation finds all fields of type InitOutputDatasetConfig in configuration (non-recursively) and uses them for constructing DatasetTypeDescriptor instances. The names of these fields are used as keys in returned dictionary. Subclasses can override this behavior.

Parameters:
config : Config

Configuration for this task. Typically datasets are defined in a task configuration.

Returns:
Dictionary where the key is the name (arbitrary) of the output dataset and the value is the DatasetTypeDescriptor instance. The default implementation uses the configuration field name as the dictionary key. When the task produces no initialization outputs, an empty dict should be returned.

getInitOutputDatasets()

Return persistable outputs that are available immediately after the task has been constructed.

Subclasses that operate on catalogs should override this method to return the schema(s) of the catalog(s) they produce.

It is not necessary to return the PipelineTask’s configuration or other provenance information in order for it to be persisted; that is the responsibility of the execution system.

Returns:
datasets : dict

Dictionary with keys that match those of the dict returned by getInitOutputDatasetTypes, and values that can be written by calling Butler.put with those DatasetTypes and no data IDs. An empty dict should be returned by tasks that produce no initialization outputs.
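
A sketch of an override for a catalog-producing subclass; the task, its schema attribute, and the "outputSchema" field name are hypothetical:

import lsst.afw.table as afwTable
from lsst.pipe.base import PipelineTask

class ExampleCatalogTask(PipelineTask):
    def getInitOutputDatasets(self):
        # Keys must match those of getInitOutputDatasetTypes; an empty
        # catalog carries the schema that will be persisted via Butler.put.
        return {"outputSchema": afwTable.SourceCatalog(self.schema)}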

classmethod getInputDatasetTypes(config)

Return input dataset type descriptors for this task.

Default implementation finds all fields of type InputDatasetConfig in configuration (non-recursively) and uses them for constructing DatasetTypeDescriptor instances. The names of these fields are used as keys in returned dictionary. Subclasses can override this behavior.

Parameters:
config : Config

Configuration for this task. Typically datasets are defined in a task configuration.

Returns:
Dictionary where the key is the name (arbitrary) of the input dataset and the value is the DatasetTypeDescriptor instance. The default implementation uses the configuration field name as the dictionary key.

getIsrExposure(dataRef, datasetType, immediate=True)

Retrieve a calibration dataset for removing instrument signature.

Parameters:
dataRef : daf.persistence.butlerSubset.ButlerDataRef

DataRef of the detector data to find calibration datasets for.

datasetType : str

Type of dataset to retrieve (e.g. ‘bias’, ‘flat’, etc).

immediate : Bool

If True, disable butler proxies to enable error handling within this routine.

Returns:
exposure : lsst.afw.image.Exposure

Requested calibration frame.

Raises:
RuntimeError

Raised if no matching calibration frame can be found.

getName()

Get the name of the task.

Returns:
taskName : str

Name of the task.

See also

getFullName

classmethod getOutputDatasetTypes(config)

Return output dataset type descriptors for this task.

Default implementation finds all fields of type OutputDatasetConfig in configuration (non-recursively) and uses them for constructing DatasetTypeDescriptor instances. The names of these fields are used as keys in the returned dictionary. Subclasses can override this behavior.

Parameters:
config : Config

Configuration for this task. Typically datasets are defined in a task configuration.

Returns:
Dictionary where the key is the name (arbitrary) of the output dataset and the value is the DatasetTypeDescriptor instance. The default implementation uses the configuration field name as the dictionary key.

classmethod getPerDatasetTypeDimensions(config)

Return any Dimensions that are permitted to have different values for different DatasetTypes within the same quantum.

Parameters:
config : Config

Configuration for this task.

Returns:
dimensions : Set of Dimension or str

The dimensions or names thereof that should be considered per-DatasetType.

Notes

Any Dimension declared to be per-DatasetType by a PipelineTask must also be declared to be per-DatasetType by other PipelineTasks in the same Pipeline.

The classic example of a per-DatasetType dimension is the CalibrationLabel dimension that maps to a validity range for master calibrations. When running Instrument Signature Removal, one does not care that different dataset types like flat, bias, and dark have different validity ranges, as long as those validity ranges all overlap the relevant observation.

classmethod getPrerequisiteDatasetTypes(config)

Return the local names of input dataset types that should be assumed to exist instead of constraining what data to process with this task.

Usually, when running a PipelineTask, the presence of input datasets constrains the processing to be done (as defined by the QuantumGraph generated during “preflight”). “Prerequisites” are special input datasets that do not constrain that graph, but instead cause a hard failure when missing. Calibration products and reference catalogs are examples of dataset types that should usually be marked as prerequisites.

Parameters:
config : Config

Configuration for this task. Typically datasets are defined in a task configuration.

Returns:
prerequisite : Set of str

The keys in the dictionary returned by getInputDatasetTypes that represent dataset types that should be considered prerequisites. Names returned here that are not keys in that dictionary are ignored; that way, if a config option removes an input dataset type only getInputDatasetTypes needs to be updated.

getResourceConfig()

Return resource configuration for this task.

Returns:
An object of type config.ResourceConfig, or None if resource configuration is not defined for this task.

getSchemaCatalogs()

Get the schemas generated by this task.

Returns:
schemaCatalogs : dict

Keys are butler dataset type, values are an empty catalog (an instance of the appropriate lsst.afw.table Catalog type) for this task.

See also

Task.getAllSchemaCatalogs

Notes

Warning

Subclasses that use schemas must override this method. The default implementation returns an empty dict.

This method may be called at any time after the Task is constructed, which means that all task schemas should be computed at construction time, not when data is actually processed. This reflects the philosophy that the schema should not depend on the data.

Returning catalogs rather than just schemas allows us to save e.g. slots for SourceCatalog as well.
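
A sketch of a schema-using subclass override; the dataset type name "example_src" and the task itself are hypothetical:

import lsst.afw.table as afwTable
from lsst.pipe.base import Task

class ExampleMeasureTask(Task):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Define the schema at construction time, not when data are processed.
        self.schema = afwTable.SourceTable.makeMinimalSchema()

    def getSchemaCatalogs(self):
        # Returning a catalog (rather than a bare schema) also preserves slots.
        return {"example_src": afwTable.SourceCatalog(self.schema)}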

getTaskDict()

Get a dictionary of all tasks as a shallow copy.

Returns:
taskDict : dict

Dictionary containing full task name: task object for the top-level task and all subtasks, sub-subtasks, etc.

makeDatasetType(dsConfig)
classmethod makeField(doc)

Make a lsst.pex.config.ConfigurableField for this task.

Parameters:
doc : str

Help text for the field.

Returns:
configurableField : lsst.pex.config.ConfigurableField

A ConfigurableField for this task.

Examples

Provides a convenient way to specify this task is a subtask of another task.

Here is an example of use:

class OtherTaskConfig(lsst.pex.config.Config):
    aSubtask = ATaskClass.makeField("a brief description of what this task does")

makeSubtask(name, **keyArgs)

Create a subtask as a new instance as the name attribute of this task.

Parameters:
name : str

Brief name of the subtask.

keyArgs

Extra keyword arguments used to construct the task. The following arguments are automatically provided and cannot be overridden:

  • “config”.
  • “parentTask”.

Notes

The subtask must be defined by Task.config.name, an instance of pex_config ConfigurableField or RegistryField.
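
Continuing the makeField example above, a parent task would then build the subtask in its constructor (the surrounding class is hypothetical):

class OtherTask(lsst.pipe.base.Task):
    ConfigClass = OtherTaskConfig

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Creates self.aSubtask from the ConfigurableField of the same name;
        # "config" and "parentTask" are supplied automatically.
        self.makeSubtask("aSubtask")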

maskAmplifier(ccdExposure, amp, defects)

Identify bad amplifiers, saturated and suspect pixels.

Parameters:
ccdExposure : lsst.afw.image.Exposure

Input exposure to be masked.

amp : lsst.afw.table.AmpInfoCatalog

Catalog of parameters defining the amplifier on this exposure to mask.

defects : lsst.meas.algorithms.Defects

List of defects. Used to determine if the entire amplifier is bad.

Returns:
badAmp : Bool

If this is true, the entire amplifier area is covered by defects and unusable.

maskAndInterpolateDefects(exposure, defectBaseList)

Mask and interpolate defects using mask plane “BAD”, in place.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process.

defectBaseList : List of Defects

List of defects to mask and interpolate.

maskAndInterpolateNan(exposure)

Mask and interpolate NaNs using mask plane “UNMASKEDNAN”, in place.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process.

See also

lsst.ip.isr.isrTask.maskNan

maskDefect(exposure, defectBaseList)

Mask defects using mask plane “BAD”, in place.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process.

defectBaseList : lsst.meas.algorithms.Defects or list of lsst.afw.image.DefectBase

List of defects to mask and interpolate.

Notes

Call this after CCD assembly, since defects may cross amplifier boundaries.

maskNan(exposure)

Mask NaNs using mask plane “UNMASKEDNAN”, in place.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process.

Notes

We mask over all NaNs, including those that are masked with other bits (because those may or may not be interpolated over later, and we want to remove all NaNs). Despite this behaviour, the “UNMASKEDNAN” mask plane is used to preserve the historical name.

measureBackground(exposure, IsrQaConfig=None)

Measure the image background in subgrids, for quality control purposes.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process.

IsrQaConfig : lsst.ip.isr.isrQa.IsrQaConfig

Configuration object containing parameters on which background statistics and subgrids to use.

overscanCorrection(ccdExposure, amp)

Apply overscan correction in place.

This method does initial pixel rejection of the overscan region. The overscan can also be optionally segmented to allow for discontinuous overscan responses to be fit separately. The actual overscan subtraction is performed by the lsst.ip.isr.isrFunctions.overscanCorrection function, which is called here after the amplifier is preprocessed.

Parameters:
ccdExposure : lsst.afw.image.Exposure

Exposure to have overscan correction performed.

amp : lsst.afw.table.AmpInfoCatalog

The amplifier to consider while correcting the overscan.

Returns:
overscanResults : lsst.pipe.base.Struct

Result struct with components:

  • imageFit : scalar or lsst.afw.image.Image

    Value or fit subtracted from the amplifier image data.

  • overscanFit : scalar or lsst.afw.image.Image

    Value or fit subtracted from the overscan image data.

  • overscanImage : lsst.afw.image.Image

    Image of the overscan region with the overscan correction applied. This quantity is used to estimate the amplifier read noise empirically.

Raises:
RuntimeError

Raised if the amp does not contain raw pixel information.

See also

lsst.ip.isr.isrFunctions.overscanCorrection

classmethod parseAndRun(args=None, config=None, log=None, doReturnResults=False)

Parse an argument list and run the command.

Parameters:
args : list, optional

List of command-line arguments; if None use sys.argv.

config : lsst.pex.config.Config-type, optional

Config for task. If None use Task.ConfigClass.

log : lsst.log.Log-type, optional

Log. If None use the default log.

doReturnResults : bool, optional

If True, return the results of this task. Default is False. This is only intended for unit tests and similar use. It can easily exhaust memory (if the task returns enough data and you call it enough times) and it will fail when using multiprocessing if the returned data cannot be pickled.

Returns:
struct : lsst.pipe.base.Struct

Fields are:

  • argumentParser: the argument parser.
  • parsedCmd: the parsed command returned by the argument parser’s lsst.pipe.base.ArgumentParser.parse_args method.
  • taskRunner: the task runner used to run the task (an instance of Task.RunnerClass).
  • resultList: results returned by the task runner’s run method, one entry per invocation.
    This will typically be a list of None unless doReturnResults is True; see Task.RunnerClass (TaskRunner by default) for more information.

Notes

Calling this method with no arguments specified is the standard way to run a command-line task from the command-line. For an example see pipe_tasks bin/makeSkyMap.py or almost any other file in that directory.

If one or more of the dataIds fails then this routine will exit (with a status giving the number of failed dataIds) rather than returning this struct; this behaviour can be overridden by specifying the --noExit command-line option.
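
A minimal command-line driver in the style described above (the script name and location are up to the package):

#!/usr/bin/env python
from lsst.obs.cfht.cfhtIsrTask import CfhtIsrTask

CfhtIsrTask.parseAndRun()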

readIsrData(dataRef, rawExposure)

Retrieve necessary frames for instrument signature removal.

Pre-fetching all required ISR data products limits the IO required by the ISR. Any conflict between the calibration data available and that needed for ISR is also detected prior to doing processing, allowing it to fail quickly.

Parameters:
dataRef : daf.persistence.butlerSubset.ButlerDataRef

Butler reference of the detector data to be processed

rawExposure : afw.image.Exposure

The raw exposure that will later be corrected with the retrieved calibration data; should not be modified in this method.

Returns:
result : lsst.pipe.base.Struct

Result struct with components (which may be None):

  • bias : bias calibration frame (afw.image.Exposure)
  • linearizer : functor for linearization (ip.isr.linearize.LinearizeBase)
  • crosstalkSources : list of possible crosstalk sources (list)
  • dark : dark calibration frame (afw.image.Exposure)
  • flat : flat calibration frame (afw.image.Exposure)
  • bfKernel : Brighter-Fatter kernel (numpy.ndarray)
  • defects : list of defects (lsst.meas.algorithms.Defects)
  • fringes : lsst.pipe.base.Struct with components:
      • fringes : fringe calibration frame (afw.image.Exposure)
      • seed : random seed derived from the ccdExposureId for the random number generator (uint32)
  • opticsTransmission : lsst.afw.image.TransmissionCurve

    A TransmissionCurve that represents the throughput of the optics, to be evaluated in focal-plane coordinates.

  • filterTransmission : lsst.afw.image.TransmissionCurve

    A TransmissionCurve that represents the throughput of the filter itself, to be evaluated in focal-plane coordinates.

  • sensorTransmission : lsst.afw.image.TransmissionCurve

    A TransmissionCurve that represents the throughput of the sensor itself, to be evaluated in post-assembly trimmed detector coordinates.

  • atmosphereTransmission : lsst.afw.image.TransmissionCurve

    A TransmissionCurve that represents the throughput of the atmosphere, assumed to be spatially constant.

  • strayLightData : object

    An opaque object containing calibration information for stray-light correction. If None, no correction will be performed.

Raises:
NotImplementedError

Raised if a per-amplifier brighter-fatter kernel is requested by the configuration.
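
A sketch of how the result is typically consumed, mirroring what runDataRef does internally; sensorRef and ccdExposure are assumed to be already in hand:

isrData = task.readIsrData(sensorRef, ccdExposure)
# The struct unpacks directly into run() keyword arguments.
result = task.run(ccdExposure, **isrData.getDict())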

roughZeroPoint(exposure)

Set an approximate magnitude zero point for the exposure.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process.

run(ccdExposure, bias=None, linearizer=None, dark=None, flat=None, defects=None, fringes=None, bfKernel=None, camera=None, **kwds)

Perform instrument signature removal on an exposure.

Steps include:

  • Detect saturation, apply overscan correction, bias, dark and flat
  • Perform CCD assembly
  • Interpolate over defects, saturated pixels and all NaNs
  • Persist the ISR-corrected exposure as “postISRCCD” if config.doWrite is True

Parameters:
ccdExposure : lsst.afw.image.Exposure

Detector data.

bias : lsst.afw.image.Exposure

Exposure of bias frame.

linearizer : lsst.ip.isr.LinearizeBase callable

Linearizing functor; a subclass of lsst.ip.isr.LinearizeBase.

dark : lsst.afw.image.Exposure

Exposure of dark frame.

flat : lsst.afw.image.Exposure

Exposure of flatfield.

defects : list

list of defects

fringes : lsst.afw.image.Exposure or list of lsst.afw.image.Exposure

exposure of fringe frame or list of fringe exposures

bfKernel : None

kernel used for brighter-fatter correction; currently unsupported

camera : lsst.afw.cameraGeom.Camera

Camera geometry, used by addDistortionModel.

**kwds : dict

additional kwargs forwarded to IsrTask.run.

Returns:
struct : lsst.pipe.base.Struct

Result struct with field:

  • exposure : the exposure after application of ISR
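
A call sketch, assuming the raw exposure and calibration frames have already been retrieved (keyword names follow the signature above):

result = task.run(raw, bias=bias, dark=dark, flat=flat, defects=defects)
postIsrExposure = result.exposure  # the ISR-corrected exposure
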
runDataRef(sensorRef)

Perform instrument signature removal on a ButlerDataRef of a Sensor.

This method contains the CmdLineTask interface to the ISR processing. All IO is handled here, freeing the run() method to manage only pixel-level calculations. The steps performed are:

  • Read in necessary detrending/isr/calibration data.
  • Process raw exposure in run().
  • Persist the ISR-corrected exposure as “postISRCCD” if config.doWrite is True.

Parameters:
sensorRef : daf.persistence.butlerSubset.ButlerDataRef

DataRef of the detector data to be processed

Returns:
result : lsst.pipe.base.Struct

Result struct with component:

  • exposure : afw.image.Exposure

    The fully ISR corrected exposure.

Raises:
RuntimeError

Raised if a configuration option is set to True, but the required calibration data does not exist.
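
A usage sketch with a Gen2 butler, assuming task is a constructed CfhtIsrTask; the repository path and dataId keys are illustrative:

from lsst.daf.persistence import Butler

butler = Butler("/path/to/repo")  # hypothetical repository
sensorRef = butler.dataRef("raw", dataId={"visit": 123456, "ccd": 12})
result = task.runDataRef(sensorRef)
exposure = result.exposure  # fully ISR-corrected exposure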

runQuantum(quantum, butler)

Execute PipelineTask algorithm on single quantum of data.

A typical implementation of this method uses the inputs from quantum to retrieve Python-domain objects from the data butler and calls the adaptArgsAndRun method on that data. On return from adaptArgsAndRun, this method extracts data from the returned Struct instance and saves that data to the butler.

The Struct returned from adaptArgsAndRun is expected to contain data attributes with the names equal to the names of the configuration fields defining output dataset types. The values of the data attributes must be data objects corresponding to the DataIds of output dataset types. All data objects will be saved in butler using DataRefs from Quantum’s output dictionary.

This method does not return anything to the caller; on errors a corresponding exception is raised.

Parameters:
quantum : Quantum

Object describing input and output corresponding to this invocation of PipelineTask instance.

butler : object

Data butler instance.

Raises:
ScalarError

Raised if a dataset type is configured as scalar but receives multiple DataIds in quantum. Any exceptions raised by the data butler or by the adaptArgsAndRun method are also propagated.

saturationDetection(exposure, amp)

Detect saturated pixels and mask them using mask plane config.saturatedMaskName, in place.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process. Only the amplifier DataSec is processed.

amp : lsst.afw.table.AmpInfoCatalog

Amplifier detector data.

See also

lsst.ip.isr.isrFunctions.makeThresholdMask

saturationInterpolation(exposure)

Interpolate over saturated pixels, in place.

This method should be called after saturationDetection, to ensure that the saturated pixels have been identified in the SAT mask. It should also be called after assembleCcd, since saturated regions may cross amplifier boundaries.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process.

See also

lsst.ip.isr.isrTask.saturationDetection, lsst.ip.isr.isrFunctions.interpolateFromMask

saveStruct(struct, outputDataRefs, butler)

Save data in butler.

The convention is that the struct returned from the run() method has data field(s) with the same names as the config fields defining output DatasetTypes. Subclasses may override this method to implement a different convention for Struct content, or when post-processing of the data is needed.

Parameters:
struct : Struct

Data produced by the task, packed into a Struct instance.

outputDataRefs : dict

Dictionary whose keys are the names of the configuration fields describing output dataset types and values are lists of DataRefs. DataRefs must match corresponding data objects in struct in number and order.

butler : object

Data butler instance.

setValidPolygonIntersect(ccdExposure, fpPolygon)

Set the valid polygon as the intersection of fpPolygon and the ccd corners.

Parameters:
ccdExposure : lsst.afw.image.Exposure

Exposure to process.

fpPolygon : lsst.afw.geom.Polygon

Polygon in focal plane coordinates.

suspectDetection(exposure, amp)

Detect suspect pixels and mask them using mask plane config.suspectMaskName, in place.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to process. Only the amplifier DataSec is processed.

amp : lsst.afw.table.AmpInfoCatalog

Amplifier detector data.

See also

lsst.ip.isr.isrFunctions.makeThresholdMask

Notes

Suspect pixels are pixels whose value is greater than amp.getSuspectLevel(). This is intended to indicate pixels that may be affected by unknown systematics; for example if non-linearity corrections above a certain level are unstable then that would be a useful value for suspectLevel. A value of nan indicates that no such level exists and no pixels are to be masked as suspicious.

timer(name, logLevel=10000)

Context manager to log performance data for an arbitrary block of code.

Parameters:
name : str

Name of code being timed; data will be logged using item name: Start and End.

logLevel

A lsst.log level constant.

See also

timer.logInfo

Examples

Creating a timer context:

with self.timer("someCodeToTime"):
    pass  # code to time
updateVariance(ampExposure, amp, overscanImage=None)

Set the variance plane using the amplifier gain and read noise

The read noise is calculated from the overscanImage if the doEmpiricalReadNoise option is set in the configuration; otherwise the value from the amplifier data is used.

Parameters:
ampExposure : lsst.afw.image.Exposure

Exposure to process.

amp : lsst.afw.table.AmpInfoRecord or FakeAmp

Amplifier detector data.

overscanImage : lsst.afw.image.MaskedImage, optional.

Image of overscan, required only for empirical read noise.

See also

lsst.ip.isr.isrFunctions.updateVariance
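
A hedged sketch of the noise model typically used to populate the variance plane; units and the empirical read-noise path are handled by lsst.ip.isr.isrFunctions.updateVariance, so this is illustrative only:

def estimateVariance(imageArray, gain, readNoise):
    # Poisson contribution from the signal plus the squared read noise.
    # Assumes imageArray is in ADU, gain in e-/ADU, and readNoise in ADU.
    return imageArray / gain + readNoise**2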

writeConfig(butler, clobber=False, doBackup=True)

Write the configuration used for processing the data, or check that an existing one is equal to the new one if present.

Parameters:
butler : lsst.daf.persistence.Butler

Data butler used to write the config. The config is written to dataset type CmdLineTask._getConfigName.

clobber : bool, optional

A boolean flag that controls what happens if a config already has been saved:

  • True: overwrite or rename the existing config, depending on doBackup.
  • False: raise TaskError if this config does not match the existing config.

doBackup : bool, optional

Set to True to backup the config files if clobbering.

writeMetadata(dataRef)

Write the metadata produced from processing the data.

Parameters:
dataRef

Butler data reference used to write the metadata. The metadata is written to dataset type CmdLineTask._getMetadataName.

writePackageVersions(butler, clobber=False, doBackup=True, dataset='packages')

Compare and write package versions.

Parameters:
butler : lsst.daf.persistence.Butler

Data butler used to read/write the package versions.

clobber : bool, optional

A boolean flag that controls what happens if versions already have been saved:

  • True: overwrite or rename the existing version info, depending on doBackup.
  • False: raise TaskError if this version info does not match the existing.

doBackup : bool, optional

If True and clobbering, old package version files are backed up.

dataset : str, optional

Name of dataset to read/write.

Raises:
TaskError

Raised if there is a version mismatch with current and persisted lists of package versions.

Notes

Note that this operation is subject to a race condition.

writeSchemas(butler, clobber=False, doBackup=True)

Write the schemas returned by lsst.pipe.base.Task.getAllSchemaCatalogs.

Parameters:
butler : lsst.daf.persistence.Butler

Data butler used to write the schema. Each schema is written to the dataset type specified as the key in the dict returned by getAllSchemaCatalogs.

clobber : bool, optional

A boolean flag that controls what happens if a schema already has been saved:

  • True: overwrite or rename the existing schema, depending on doBackup.
  • False: raise TaskError if this schema does not match the existing schema.

doBackup : bool, optional

Set to True to backup the schema files if clobbering.

Notes

If clobber is False and an existing schema does not match a current schema, then some schemas may have been saved successfully and others may not, and there is no easy way to tell which is which.