Design goals
lsst.faro is designed to efficiently quantify, at LSST scale, the scientific performance of data products generated by the Science Pipelines for data units of varying granularity, ranging from single-detector to full-survey summary statistics, and to persist the results as scalar metric values alongside the input data products.
Intended uses:
- Generating artifacts to verify DMSR, OSS, and LSR science performance metrics
- Computing additional non-normative metrics for science validation
- Performance monitoring and regression analysis, and generation of characterization metric reports for the Science Pipelines
- Providing “first-look” data quality analysis capability to inform observatory operations
Note that lsst.faro is NOT itself a visualization tool, but rather generates scalar metric values that can be used as input to visualization tools.
Design concepts
lsst.faro builds upon Science Pipelines infrastructure:
- Task framework
- Generation 3 middleware including PipelineTask and data Butler.
- lsst.verify framework
Users are encouraged to consult the documentation for these Science Pipelines components as context for implementation details described in this section.
Overall strategy
faro persists computed scalar metric values as lsst.verify.Measurement objects in the Butler repository alongside the associated input data products; each measurement has an associated searchable data ID. This approach follows the recommendations of the Survey of Provenance for metrics-level provenance.
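For example, a persisted metric value can later be retrieved by its data ID. The minimal sketch below assumes a hypothetical repository path, collection, metric dataset type, and data ID; only the general Gen3 Butler retrieval pattern is the point here:

```python
from lsst.daf.butler import Butler

# Repository path, collection, dataset type name, and data ID below are all
# illustrative assumptions, not actual faro conventions.
butler = Butler("/repo/main", collections="u/example/faro_metrics")
measurement = butler.get(
    "metricvalue_example_PA1",                     # hypothetical dataset type
    skymap="hsc_rings_v1", tract=9813, band="i",   # illustrative data ID
)
# lsst.verify.Measurement carries the metric name and an astropy quantity.
print(measurement.metric_name, measurement.quantity)
```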
Survey-scale summary statistics are computed by aggregating intermediate measurements made on smaller units of data (e.g., a set of individual visits, a set of individual tracts), using the persisted lsst.verify.Measurement objects as input. The summary statistics are also stored as lsst.verify.Measurement objects in the data butler.
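A hedged sketch of this aggregation idea follows; the metric names are illustrative, and this is not the actual faro summary implementation:

```python
import astropy.units as u
import numpy as np
from lsst.verify import Measurement


def summarize(measurements, metric_name="example.PA1_summary"):
    """Aggregate an ensemble of per-tract Measurement objects into one value.

    `measurements` is a list of lsst.verify.Measurement objects previously
    retrieved from the Butler; the summary here is the median.
    """
    values = u.Quantity([m.quantity for m in measurements])
    return Measurement(metric_name, np.median(values))
```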
Each unit of data (e.g., the source catalog for an individual visit) corresponds to a Quantum of processing, i.e., a discrete unit of work in the Science Pipelines. These quanta can be executed in parallel when the calculations are independent of one another, making it possible to efficiently compute performance metrics for the petascale datasets that LSST will produce.
We use the concept of analysis contexts to refer to the various types of input data units corresponding to the granularity of metric computation (e.g., per-detector, per-visit, per-patch, per-tract). faro supports metric calculation for multiple analysis contexts.
Modular design:
- Configurable to run a subset of metrics on a subset of data products; pipelines are built in configuration; the same metric can be run multiple times with different configurations as needed; the configuration is persisted with the same dataId as the metric measurement in the data butler.
- For a given analysis context, once the base classes to manage data IO are defined, users can add metrics by creating a dedicated Task to perform the particular operations on in-memory python objects (see the sketch following this list). By design, the details of managing parallel and sequential metric calculation stages and data IO are abstracted away, and developers can focus on the algorithmic implementation of the metric.
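As an illustration, a minimal metric measurement Task following this pattern might look like the sketch below; the class and metric names are illustrative, not the actual faro implementation:

```python
import astropy.units as u
import lsst.pex.config as pexConfig
from lsst.pipe.base import Struct, Task
from lsst.verify import Measurement


class NumSourcesMetricTask(Task):
    """Example metric Task: count the sources in an in-memory catalog."""

    ConfigClass = pexConfig.Config
    _DefaultName = "numSourcesMetric"

    def run(self, catalog):
        # Purely in-memory computation; the framework handles all data IO.
        meas = Measurement("example.nSources", len(catalog) * u.count)
        return Struct(measurement=meas)
```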
Three stages of metric calculation
In general, metrics are computed in three stages. Every metric calculation includes the measurement stage; depending on the complexity of the particular metric calculation, the preparation and summary stages may not be required. The three stages are as follows.
- Preparation: assembles an intermediate data product that may be needed as input to the measurement stage.
- Measurement: computes the value of the metric for the specified analysis context and stores it as an lsst.verify.Measurement object.
- Summary: generates a summary statistic based on a collection of input measurements produced by the measurement stage, and stores it as a single lsst.verify.Measurement object.
Example implementation: consider the photometric repeatability requirement PA1 that characterizes the dispersion across an ensemble of flux measurements made on individual visits (i.e., source detections) for a given astronomical object.
- During the preparation stage, for each tract and band, create a matched source catalog that associates the source detections from individual visits into groups, where each group corresponds to an astronomical object. This matched source catalog is persisted as a SimpleCatalog that can be readily transformed into a GroupView object.
- During the measurement stage, for each tract and band, load the matched source catalog into memory and compute the photometric repeatability metric for that set of grouped flux measurements (see the sketch following this list). Persist the output metric value.
- During the summary stage, for each band, load the measured metric values from the ensemble of individual tracts and compute a summary statistic (e.g., mean, median). Persist the output metric value that now characterizes the overall performance for the dataset for that band.
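The core computation of the measurement stage could be sketched as follows; this is a simplified stand-in for the actual PA1 algorithm, assuming the matched sources have already been grouped by object (e.g., via a GroupView):

```python
import numpy as np


def photometric_repeatability_mmag(grouped_mags):
    """Median RMS (in mmag) of repeated magnitude measurements per object.

    grouped_mags: iterable of arrays, one array of per-visit magnitudes
    for each astronomical object; objects with fewer than two detections
    are skipped because repeatability is undefined for them.
    """
    rms = [np.std(mags, ddof=1) for mags in grouped_mags if len(mags) >= 2]
    return float(np.median(rms) * 1000.0)  # convert mag to mmag
```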
At each stage, each discrete unit of data processing corresponds to a single quantum of execution in the Science Pipelines that can be executed in parallel. The outputs at each stage are each assigned a dataId that can be used to retrieve the output using the Butler. The metric calculation stages can be tied together to run serially as part of a single pipeline.
Main components
The structure of the faro code includes two main components:
- Collection of Tasks that compute specific metric values of interest.
Each metric has an associated lsst.pipe.base.Task class that computes a scalar value based on data previously written to a Butler repository (i.e., faro runs as an afterburner to the Science Pipelines). The lsst.pipe.base.Task for metric measurement works with in-memory python objects and does NOT perform IO with a data butler.
- Set of base classes for the various analysis contexts that use Gen3 middleware to build a quantum graph and interact with the data butler. A list of currently implemented analysis contexts is given below.
The lsst.verify package contains base classes MetricConnections, MetricConfig, and MetricTask that are used for generating scalar metric values (lsst.verify.Measurement) given input data. This structure follows the general pattern adopted in the Science Pipelines of using PipelineTaskConnections to define the desired IO, PipelineTaskConfig to provide configuration, and PipelineTask to run an algorithm on input data and store output data in a data butler.
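A schematic example of this pattern is sketched below; the dataset types, dimensions, and class names are illustrative rather than the actual faro classes:

```python
import astropy.units as u
from lsst.pipe.base import (PipelineTask, PipelineTaskConfig,
                            PipelineTaskConnections, Struct, connectionTypes)
from lsst.verify import Measurement


class ExampleMetricConnections(PipelineTaskConnections,
                               dimensions=("instrument", "visit")):
    # Connections declare the desired IO for the quantum.
    catalog = connectionTypes.Input(
        doc="Per-visit source table.",
        name="sourceTable_visit",
        storageClass="DataFrame",
        dimensions=("instrument", "visit"),
    )
    measurement = connectionTypes.Output(
        doc="Scalar metric value.",
        name="metricvalue_example_nSources",  # illustrative dataset type
        storageClass="MetricValue",
        dimensions=("instrument", "visit"),
    )


class ExampleMetricConfig(PipelineTaskConfig,
                          pipelineConnections=ExampleMetricConnections):
    pass


class ExampleMetricTask(PipelineTask):
    ConfigClass = ExampleMetricConfig
    _DefaultName = "exampleMetric"

    def run(self, catalog):
        # Compute the scalar metric from the in-memory catalog.
        meas = Measurement("example.nSources", len(catalog) * u.count)
        return Struct(measurement=meas)
```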
The primary base classes in the lsst.faro package, CatalogMeasurementBaseConnections, CatalogMeasurementBaseConfig, and CatalogMeasurementBaseTask, inherit from MetricConnections, MetricConfig, and MetricTask, respectively, and add general functionality for computing science performance metrics based on source/object catalog inputs. See CatalogMeasurementBase.py.
Each analysis context in the lsst.faro package uses a subclass of each of CatalogMeasurementBaseConnections, CatalogMeasurementBaseConfig, and CatalogMeasurementBaseTask to manage the particular inputs and outputs for the relevant type of data unit for that analysis context. For example, see VisitTableMeasurement.py for the case of metric calculation on per-visit source catalogs. All the interactions with the data butler occur in the runQuantum method of the measurement task base class for each analysis context; the in-memory python objects are then passed to the run method.
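Schematically, this separation follows the standard PipelineTask pattern (simplified here as a method sketch): all Butler interaction is confined to runQuantum, while run sees only in-memory objects.

```python
def runQuantum(self, butlerQC, inputRefs, outputRefs):
    inputs = butlerQC.get(inputRefs)   # read inputs from the data butler
    outputs = self.run(**inputs)       # metric computation on in-memory objects
    butlerQC.put(outputs, outputRefs)  # persist the lsst.verify.Measurement
```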
For a given analysis context, selecting a specific metric to run is accomplished in configuration by retargeting the generic subtask of, e.g., VisitTableMeasurementTask, with the particular instance of lsst.pipe.base.Task for that metric. In this way, a large set of metrics can be readily computed from a set of common data inputs.
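For example, a configuration might retarget the measurement subtask along these lines; this is a hedged sketch in which the subtask field name ("measure") and the metric Task are assumptions for illustration:

```python
from lsst.faro.measurement import VisitTableMeasurementTask

# Hypothetical metric Task; any Task following the pattern above would work.
from mymetrics import NumSourcesMetricTask

config = VisitTableMeasurementTask.ConfigClass()
# Replace the generic measurement subtask with the Task for a specific metric.
config.measure.retarget(NumSourcesMetricTask)
```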
Currently implemented analysis contexts
Currently implemented analysis contexts are listed below; the associated measurement task base class for each analysis context is indicated. Note that the faro team is currently converting all metrics to use parquet file inputs, and will deprecate the use of FITS files. The base classes for the various analysis contexts are located in the python/lsst/faro/measurement directory.
- Metrics computed using per-detector source catalogs (i.e., single-visit detections)
  - FITS file input (src): DetectorMeasurementTask
  - parquet file input (sourceTable_visit): DetectorTableMeasurementTask
- Metrics computed using per-visit source catalogs (i.e., single-visit detections)
  - FITS file input (src): VisitMeasurementTask
  - parquet file input (sourceTable_visit): VisitTableMeasurementTask
- Metrics computed using per-patch object catalogs (i.e., coadd detections)
  - Per-band FITS file input (deepCoadd_forced_src): PatchMeasurementTask
  - Per-band parquet file input (objectTable_tract): PatchTableMeasurementTask
  - Multi-band parquet file input (objectTable_tract): PatchMultiBandTableMeasurementTask
- Metrics computed using per-tract object catalogs (i.e., coadd detections)
  - Per-band FITS file input (deepCoadd_forced_src): TractMeasurementTask
  - Multi-band FITS file input (deepCoadd_forced_src): TractMultiBandMeasurementTask
  - Per-band parquet file input (objectTable_tract): TractTableMeasurementTask
  - Multi-band parquet file input (objectTable_tract): TractMultiBandTableMeasurementTask
- Metrics computed using per-patch matched source catalogs (i.e., sets of single-visit detections of the same objects)
  - Per-band FITS file input: PatchMatchedMeasurementTask
  - Multi-band FITS file input: PatchMatchedMultiBandMeasurementTask
- Metrics computed using per-tract matched source catalogs (i.e., sets of single-visit detections of the same objects)
  - Per-band FITS file input: TractMatchedMeasurementTask
Organization of the faro package
Directory structure
- python: contains the Python source code for the package.
  - python/lsst/faro/base: contains base classes used throughout the package.
  - python/lsst/faro/preparation: contains classes that generate intermediate data products.
  - python/lsst/faro/measurement: contains classes to generate metric values. Each measurement produces one scalar lsst.verify.Measurement per unit of data (e.g., per tract, per patch).
  - python/lsst/faro/summary: contains classes that take a collection of lsst.verify.Measurement objects as input and produce a single scalar lsst.verify.Measurement that is an aggregation (e.g., mean, median, rms) of the per-tract, per-patch, etc. metrics.
  - python/lsst/faro/utils: contains utility classes and functions that may be used in multiple instances throughout the package.
- pipelines: contains yaml files to configure which metrics are run as part of a pipeline and the detailed execution parameters for metric calculations. Pipelines can be built hierarchically. The organization of the pipeline directories mirrors the organization of the python directories.
- config: contains general configuration for the lsst.faro package (e.g., mappings between bands/filters to facilitate calculation of color terms).
- bin and bin.sh: contain scripts for exporting metrics to SQuaSH.
- doc: contains package documentation. For example, the code used to create this high-level package documentation resides in the doc/lsst.faro directory.
- tests: contains unit tests as well as input data for the unit tests.
Naming conventions
lsst.faro uses camelCase variable names.
References and prior art
lsst.faro builds on concepts, designs, and recommendations in the following documents:
- The lsst.verify framework for computing data quality metrics, described in DMTN-098 and DMTN-057.
- Recommendations for metrics-level provenance, as described in DMTN-185.
- Relevant recommendations of the QA Strategy Working Group, as described in DMTN-085.
- The Gen2-based lsst.validate_drp package for computing science performance metrics, described in DMTN-008.
  - The algorithms implemented in validate_drp were initially ported as-is to run in lsst.faro. validate_drp is now deprecated; all future development of metrics will be carried out in lsst.faro. Many of the algorithms have been updated since the initial transition to lsst.faro.