VerifyPtcAnalysisTask

class lsst.analysis.tools.tasks.VerifyPtcAnalysisTask(*, config: PipelineTaskConfig | None = None, log: logging.Logger | LsstLogAdapter | None = None, initInputs: dict[str, Any] | None = None, **kwargs: Any)

Bases: AnalysisPipelineTask

Attributes Summary

canMultiprocess

warnings_all

Methods Summary

collectInputNames()

Get the names of the inputs.

emptyMetadata()

Empty (clear) the metadata for this Task and all sub-Tasks.

getFullMetadata()

Get metadata for all tasks.

getFullName()

Get the task name as a hierarchical name including parent task names.

getName()

Get the name of the task.

getTaskDict()

Get a dictionary of all tasks as a shallow copy.

loadData(handle[, names])

Load the minimal set of keyed data from the input dataset.

makeField(doc)

Make a lsst.pex.config.ConfigurableField for this task.

makeSubtask(name, **keyArgs)

Create a subtask as a new instance assigned to the name attribute of this task.

parsePlotInfo(inputs, dataId[, connectionName])

Parse the inputs and dataId to get the information needed to add to the figure.

run(*[, data])

Produce the outputs associated with this PipelineTask.

runQuantum(butlerQC, inputRefs, outputRefs)

Override default runQuantum to load the minimal columns necessary to complete the action.

timer(name[, logLevel])

Context manager to log performance data for an arbitrary block of code.

Attributes Documentation

canMultiprocess: ClassVar[bool] = True
warnings_all = ('divide by zero encountered in divide', 'invalid value encountered in arcsin', 'invalid value encountered in cos', 'invalid value encountered in divide', 'invalid value encountered in log10', 'invalid value encountered in scalar divide', 'invalid value encountered in sin', 'invalid value encountered in sqrt', 'invalid value encountered in true_divide', 'Mean of empty slice')
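The warnings_all tuple appears to list NumPy warning messages that the task treats as expected. A minimal stdlib-only sketch of silencing warnings by their message text (for illustration only; the LSST task may apply these filters differently):

```python
import warnings

# A subset of the messages from warnings_all above.
suppressed = (
    "divide by zero encountered in divide",
    "Mean of empty slice",
)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # "ignore" filters are prepended, so they take precedence over "always".
    for msg in suppressed:
        warnings.filterwarnings("ignore", message=msg)
    warnings.warn("divide by zero encountered in divide", RuntimeWarning)
    warnings.warn("this one is not suppressed", UserWarning)

# Only the unmatched warning is recorded.
print(len(caught))  # 1
```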

Methods Documentation

collectInputNames() → Iterable[str]

Get the names of the inputs.

If using the default loadData method, this will gather the names of the keys to be loaded from an input dataset.

Returns:
inputs : Iterable of str

The names of the keys in the KeyedData object to extract.

emptyMetadata() → None

Empty (clear) the metadata for this Task and all sub-Tasks.

getFullMetadata() → TaskMetadata

Get metadata for all tasks.

Returns:
metadata : TaskMetadata

The keys are the full task name. Values are metadata for the top-level task and all subtasks, sub-subtasks, etc.

Notes

The returned metadata includes timing information (if @timer.timeMethod is used) and any metadata set by the task. The name of each item consists of the full task name with . replaced by :, followed by . and the name of the item, e.g.:

topLevelTaskName:subtaskName:subsubtaskName.itemName

Using : in the full task name disambiguates the rare situation in which a task has a subtask and a metadata item with the same name.
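The naming convention above can be illustrated with a small helper (hypothetical, not part of the LSST API):

```python
def metadata_key(full_task_name: str, item_name: str) -> str:
    """Build a metadata key: '.' in the task hierarchy becomes ':',
    so the final '.' unambiguously separates the item name."""
    return full_task_name.replace(".", ":") + "." + item_name

key = metadata_key("topLevelTaskName.subtaskName.subsubtaskName", "itemName")
print(key)  # topLevelTaskName:subtaskName:subsubtaskName.itemName
```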

getFullName() → str

Get the task name as a hierarchical name including parent task names.

Returns:
fullName : str

The full name consists of the name of the parent task and each subtask separated by periods. For example:

  • The full name of top-level task “top” is simply “top”.

  • The full name of subtask “sub” of top-level task “top” is “top.sub”.

  • The full name of subtask “sub2” of subtask “sub” of top-level task “top” is “top.sub.sub2”.

getName() → str

Get the name of the task.

Returns:
taskName : str

Name of the task.

See also

getFullName

Get the full name of the task.

getTaskDict() → dict[str, weakref.ReferenceType[lsst.pipe.base.task.Task]]

Get a dictionary of all tasks as a shallow copy.

Returns:
taskDict : dict

Dictionary containing full task name: task object for the top-level task and all subtasks, sub-subtasks, etc.

loadData(handle: DeferredDatasetHandle, names: Iterable[str] | None = None) → KeyedData

Load the minimal set of keyed data from the input dataset.

Parameters:
handle : DeferredDatasetHandle

Handle to load the dataset with only the specified columns.

names : Iterable of str, optional

The names of keys to extract from the dataset. If names is None then the collectInputNames method is called to generate the names. For most purposes these are the names of columns to load from a catalog or data frame.

Returns:
result : KeyedData

The dataset with only the specified keys loaded.
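The minimal-column loading pattern can be sketched with a stand-in for DeferredDatasetHandle (the FakeHandle class and its column names are assumptions for illustration, not the LSST API):

```python
class FakeHandle:
    """Stand-in for a DeferredDatasetHandle backed by a column dict."""

    def __init__(self, columns):
        self._columns = columns

    def get(self, parameters=None):
        # Load only the requested columns, mirroring how loadData
        # restricts I/O to the keys the configured tools actually need.
        names = (parameters or {}).get("columns")
        if names is None:
            return dict(self._columns)
        return {name: self._columns[name] for name in names}


handle = FakeHandle({"rawMeans": [1.0, 2.0], "rawVars": [0.5, 1.1], "unused": [9, 9]})
data = handle.get(parameters={"columns": ["rawMeans", "rawVars"]})
print(sorted(data))  # ['rawMeans', 'rawVars']
```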

classmethod makeField(doc: str) → ConfigurableField

Make a lsst.pex.config.ConfigurableField for this task.

Parameters:
doc : str

Help text for the field.

Returns:
configurableField : lsst.pex.config.ConfigurableField

A ConfigurableField for this task.

Examples

Provides a convenient way to specify this task is a subtask of another task.

Here is an example of use:

class OtherTaskConfig(lsst.pex.config.Config):
    aSubtask = ATaskClass.makeField("brief description of task")

makeSubtask(name: str, **keyArgs: Any) → None

Create a subtask as a new instance assigned to the name attribute of this task.

Parameters:
name : str

Brief name of the subtask.

**keyArgs

Extra keyword arguments used to construct the task. The following arguments are automatically provided and cannot be overridden:

  • config.

  • parentTask.

Notes

The subtask must be defined by Task.config.name, an instance of ConfigurableField or RegistryField.

parsePlotInfo(inputs: Mapping[str, Any] | None, dataId: DataCoordinate | None, connectionName: str = 'data') → Mapping[str, str]

Parse the inputs and dataId to get the information needed to add to the figure.

Parameters:
inputs : dict

The inputs to the task.

dataId : lsst.daf.butler.DataCoordinate

The dataId that the task is being run on.

connectionName : str, optional

Name of the input connection to use for determining the table name.

Returns:
plotInfo : dict

The information needed to add to the figure.

run(*, data: MutableMapping[str, ndarray[Any, dtype[ScalarType]] | Scalar | HealSparseMap | Tensor] | None = None, **kwargs) → Struct

Produce the outputs associated with this PipelineTask.

Parameters:
data : KeyedData

The input data from which all AnalysisTools will run and produce outputs. Note that the Python type annotation allows None only because an argument must be given a default to be specified as keyword-only; the argument must not actually be None.

**kwargs

Additional arguments that are passed through to the AnalysisTools specified in the configuration.

Returns:
results : Struct

The accumulated results of all the plots and metrics produced by this PipelineTask.

Raises:
ValueError

Raised if the supplied data argument is None.
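The keyword-only contract described above can be sketched as follows (a simplified, hypothetical signature, not the real method):

```python
def run(*, data=None, **kwargs):
    # "data" defaults to None only so it can be declared keyword-only
    # with a default; passing None is an error at runtime.
    if data is None:
        raise ValueError("'data' must not be None")
    return {"n_keys": len(data)}


result = run(data={"rawMeans": [1.0]})
print(result["n_keys"])  # 1
```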

runQuantum(butlerQC, inputRefs, outputRefs)

Override default runQuantum to load the minimal columns necessary to complete the action.

Parameters:
butlerQC : QuantumContext

A butler which is specialized to operate in the context of a lsst.daf.butler.Quantum.

inputRefs : InputQuantizedConnection

Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined input/prerequisite connections.

outputRefs : OutputQuantizedConnection

Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined output connections.

timer(name: str, logLevel: int = 10) → Iterator[None]

Context manager to log performance data for an arbitrary block of code.

Parameters:
name : str

Name of code being timed; data will be logged using item name: Start and End.

logLevel : int

A logging level constant.

See also

lsst.utils.timer.logInfo

Implementation function.

Examples

Creating a timer context:

with self.timer("someCodeToTime"):
    pass  # code to time