AssociatedSourcesTractAnalysisTask¶
- class lsst.analysis.tools.tasks.AssociatedSourcesTractAnalysisTask(*, config: PipelineTaskConfig | None = None, log: logging.Logger | LsstLogAdapter | None = None, initInputs: dict[str, Any] | None = None, **kwargs: Any)¶
Bases:
AnalysisPipelineTask
Attributes Summary
- warnings_all
Methods Summary
- callback(inputs, dataId): Callback function to be used with reconstructor.
- collectInputNames(): Get the names of the inputs.
- emptyMetadata(): Empty (clear) the metadata for this Task and all sub-Tasks.
- getBoxWcs(skymap, tract): Get box that defines tract boundaries.
- getFullMetadata(): Get metadata for all tasks.
- getFullName(): Get the task name as a hierarchical name including parent task names.
- getName(): Get the name of the task.
- getTaskDict(): Get a dictionary of all tasks as a shallow copy.
- loadData(handle[, names]): Load the minimal set of keyed data from the input dataset.
- makeField(doc): Make a lsst.pex.config.ConfigurableField for this task.
- makeSubtask(name, **keyArgs): Create a subtask as a new instance as the name attribute of this task.
- parsePlotInfo(inputs, dataId[, connectionName]): Parse the inputs and dataId to get the information needed to add to the figure.
- prepareAssociatedSources(skymap, tract, ...): Concatenate source catalogs and join on associated object index.
- run(*[, data]): Produce the outputs associated with this PipelineTask.
- runQuantum(butlerQC, inputRefs, outputRefs): Override default runQuantum to load the minimal columns necessary to complete the action.
- timer(name[, logLevel]): Context manager to log performance data for an arbitrary block of code.
Attributes Documentation
- warnings_all = ('divide by zero encountered in divide', 'invalid value encountered in arcsin', 'invalid value encountered in cos', 'invalid value encountered in divide', 'invalid value encountered in log10', 'invalid value encountered in scalar divide', 'invalid value encountered in sin', 'invalid value encountered in sqrt', 'invalid value encountered in true_divide', 'Mean of empty slice')¶
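The entries in warnings_all appear to be numpy warning messages that the task tolerates while computing metrics (an assumption; the page does not state their purpose). A minimal sketch of filtering exactly such messages with the standard warnings module:

```python
import warnings

# A few entries copied from warnings_all above.
SUPPRESSED = (
    "divide by zero encountered in divide",
    "invalid value encountered in sqrt",
)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    for msg in SUPPRESSED:
        # `message` is a regex matched against the start of the warning text.
        warnings.filterwarnings("ignore", message=msg)
    warnings.warn("divide by zero encountered in divide", RuntimeWarning)
    warnings.warn("an unrelated warning", RuntimeWarning)

# Only the unrelated warning is recorded in `caught`.
```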
Methods Documentation
- classmethod callback(inputs, dataId)¶
Callback function to be used with reconstructor.
- collectInputNames() Iterable[str] ¶
Get the names of the inputs.
If using the default loadData method, this will gather the names of the keys to be loaded from an input dataset.
- Returns:
  - inputs : Iterable of str
    The names of the keys in the KeyedData object to extract.
- static getBoxWcs(skymap, tract)¶
Get box that defines tract boundaries.
- getFullMetadata() TaskMetadata ¶
Get metadata for all tasks.
- Returns:
  - metadata : TaskMetadata
    The keys are the full task name. Values are metadata for the top-level task and all subtasks, sub-subtasks, etc.
Notes
The returned metadata includes timing information (if @timer.timeMethod is used) and any metadata set by the task. The name of each item consists of the full task name with "." replaced by ":", followed by "." and the name of the item, e.g.: topLevelTaskName:subtaskName:subsubtaskName.itemName. Using ":" in the full task name disambiguates the rare situation that a task has a subtask and a metadata item with the same name.
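The metadata key naming rule described in the notes above can be sketched as a small illustrative helper (not part of the LSST API):

```python
def metadata_item_name(full_task_name: str, item_name: str) -> str:
    """Compose a metadata key as described in the getFullMetadata notes:
    '.' in the task name becomes ':', then '.' joins on the item name."""
    return full_task_name.replace(".", ":") + "." + item_name

# A task "top.sub.sub2" with a metadata item "runTime":
print(metadata_item_name("top.sub.sub2", "runTime"))  # top:sub:sub2.runTime
```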
- getFullName() str ¶
Get the task name as a hierarchical name including parent task names.
- Returns:
  - fullName : str
    The full name consists of the name of the parent task and each subtask separated by periods. For example:
    - The full name of top-level task “top” is simply “top”.
    - The full name of subtask “sub” of top-level task “top” is “top.sub”.
    - The full name of subtask “sub2” of subtask “sub” of top-level task “top” is “top.sub.sub2”.
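The hierarchical naming rule above can be sketched with a toy class (a stand-in, not the LSST Task implementation):

```python
class Node:
    """Toy stand-in for Task's hierarchical naming."""

    def __init__(self, name, parent=None):
        self._name = name
        self._parent = parent

    def getName(self):
        return self._name

    def getFullName(self):
        # Parent names and this node's name are joined with periods.
        if self._parent is None:
            return self._name
        return self._parent.getFullName() + "." + self._name

top = Node("top")
sub = Node("sub", parent=top)
sub2 = Node("sub2", parent=sub)
print(sub2.getFullName())  # top.sub.sub2
```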
- getName() str ¶
Get the name of the task.
- Returns:
  - taskName : str
    Name of the task.
See also
getFullName
Get the full name of the task.
- getTaskDict() dict[str, weakref.ReferenceType[lsst.pipe.base.task.Task]] ¶
Get a dictionary of all tasks as a shallow copy.
- Returns:
  - taskDict : dict
    Dictionary containing full task name: task object for the top-level task and all subtasks, sub-subtasks, etc.
- loadData(handle: DeferredDatasetHandle, names: Iterable[str] | None = None) KeyedData ¶
Load the minimal set of keyed data from the input dataset.
- Parameters:
  - handle : DeferredDatasetHandle
    Handle to load the dataset with only the specified columns.
  - names : Iterable of str
    The names of keys to extract from the dataset. If names is None then the collectInputNames method is called to generate the names. For most purposes these are the names of columns to load from a catalog or data frame.
- Returns:
  - result : KeyedData
    The dataset with only the specified keys loaded.
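The column-restricted load that loadData performs can be sketched with a toy handle (the real DeferredDatasetHandle and its storage backend are LSST classes, so this stand-in is purely illustrative):

```python
class FakeHandle:
    """Toy stand-in for a handle that can load a subset of columns."""

    def __init__(self, table):
        self._table = table  # dict of column name -> list of values

    def get(self, columns=None):
        if columns is None:
            return dict(self._table)
        return {name: self._table[name] for name in columns}

def load_data(handle, names=None):
    """Load only the requested keys, mirroring the loadData contract."""
    return handle.get(columns=names)

handle = FakeHandle({"ra": [1.0, 2.0], "dec": [3.0, 4.0], "flux": [5.0, 6.0]})
subset = load_data(handle, names=["ra", "dec"])
print(sorted(subset))  # ['dec', 'ra']
```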
- classmethod makeField(doc: str) ConfigurableField ¶
Make a lsst.pex.config.ConfigurableField for this task.
- Parameters:
  - doc : str
    Help text for the field.
- Returns:
  - configurableField : lsst.pex.config.ConfigurableField
    A ConfigurableField for this task.
Examples
Provides a convenient way to specify this task is a subtask of another task. Here is an example of use:

    class OtherTaskConfig(lsst.pex.config.Config):
        aSubtask = ATaskClass.makeField("brief description of task")
- makeSubtask(name: str, **keyArgs: Any) None ¶
Create a subtask as a new instance as the name attribute of this task.
- Parameters:
  - name : str
    Brief name of the subtask.
  - **keyArgs
    Extra keyword arguments used to construct the task. The following arguments are automatically provided and cannot be overridden: config, parentTask.
Notes
The subtask must be defined by Task.config.name, an instance of ConfigurableField or RegistryField.
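The contract above, constructing a configured subtask and binding it as an attribute named name, can be sketched with toy classes (hypothetical names; not the LSST implementation):

```python
class ToyTask:
    """Minimal stand-in for a task that records its construction arguments."""

    def __init__(self, name="toy", parentTask=None, config=None):
        self.name = name
        self.parentTask = parentTask
        self.config = config

class ParentTask(ToyTask):
    # Stand-in for configurable subtask fields defined on the config.
    subtaskClasses = {"astrometry": ToyTask}

    def makeSubtask(self, name, **keyArgs):
        cls = self.subtaskClasses[name]
        # config and parentTask are supplied automatically and
        # cannot be overridden, per the documentation above.
        subtask = cls(name=name, parentTask=self, config=None, **keyArgs)
        setattr(self, name, subtask)

parent = ParentTask()
parent.makeSubtask("astrometry")
print(parent.astrometry.name)  # astrometry
```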
- parsePlotInfo(inputs: Mapping[str, Any] | None, dataId: DataCoordinate | None, connectionName: str = 'data') Mapping[str, str] ¶
Parse the inputs and dataId to get the information needed to add to the figure.
- Parameters:
  - inputs : dict
    The inputs to the task.
  - dataId : lsst.daf.butler.DataCoordinate
    The dataId that the task is being run on.
  - connectionName : str, optional
    Name of the input connection to use for determining table name.
- Returns:
  - plotInfo : dict
- classmethod prepareAssociatedSources(skymap, tract, sourceCatalogs, associatedSources)¶
Concatenate source catalogs and join on associated object index.
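The concatenate-and-join step can be illustrated with pandas; the column names here are hypothetical, since the actual schema comes from the input source and association catalogs:

```python
import pandas as pd

# Hypothetical per-visit source catalogs and an association table that
# maps each source to an associated object index.
visit1 = pd.DataFrame({"sourceId": [1, 2], "flux": [10.0, 20.0]})
visit2 = pd.DataFrame({"sourceId": [3, 4], "flux": [30.0, 40.0]})
assoc = pd.DataFrame({"sourceId": [1, 2, 3, 4], "objectIndex": [0, 0, 1, 1]})

# Concatenate the source catalogs, then join on the association table so
# every source row carries its associated object index.
sources = pd.concat([visit1, visit2], ignore_index=True)
joined = sources.merge(assoc, on="sourceId")
print(joined.groupby("objectIndex")["flux"].mean().tolist())  # [15.0, 35.0]
```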
- run(*, data: MutableMapping[str, ndarray[Any, dtype[ScalarType]] | Scalar | HealSparseMap] | None = None, **kwargs) Struct ¶
Produce the outputs associated with this PipelineTask.
- Parameters:
  - data : KeyedData
    The input data from which all AnalysisTools will run and produce outputs. A side note: the Python typing specifies that this can be None, but that is only due to a limitation in Python where, in order to specify that all arguments be passed only as keywords, the argument must be given a default. This argument must not actually be None.
  - **kwargs
    Additional arguments that are passed through to the AnalysisTools specified in the configuration.
- Returns:
  - results : Struct
    The accumulated results of all the plots and metrics produced by this PipelineTask.
- Raises:
  - ValueError
    Raised if the supplied data argument is None.
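The keyword-only calling convention and the None check described above can be sketched in plain Python (illustrative only; the real method dispatches the configured AnalysisTools):

```python
from typing import Any, Mapping, Optional

def run(*, data: Optional[Mapping[str, Any]] = None, **kwargs):
    """Sketch of the calling convention: `data` defaults to None only so
    it can be keyword-only; actually passing None is an error."""
    if data is None:
        raise ValueError("data must not be None")
    # Stand-in for running the configured AnalysisTools over the data.
    return {"n_keys": len(data)}

print(run(data={"flux": [1, 2, 3]}))  # {'n_keys': 1}
try:
    run()
except ValueError as exc:
    print(exc)  # data must not be None
```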
- runQuantum(butlerQC, inputRefs, outputRefs)¶
Override default runQuantum to load the minimal columns necessary to complete the action.
- Parameters:
  - butlerQC : QuantumContext
    A butler which is specialized to operate in the context of a lsst.daf.butler.Quantum.
  - inputRefs : InputQuantizedConnection
    Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined input/prerequisite connections.
  - outputRefs : OutputQuantizedConnection
    Data structure whose attribute names are the names that identify connections defined in the corresponding PipelineTaskConnections class. The values of these attributes are the lsst.daf.butler.DatasetRef objects associated with the defined output connections.