Getting started with the AP pipeline (Gen 3)¶
This page explains how to set up a Gen 3 data repository that can then be processed with the AP Pipeline (see Running the AP pipeline (Gen 3)). This is appropriate if you are trying to learn the new workflow, and compatibility or integration with other tools is not a problem. The Gen 3 processing is still being finalized, and all details in these tutorials are subject to change.
Installation¶
lsst.ap.pipe is available from the LSST Science Pipelines. It is installed as part of the lsst_distrib metapackage, which also includes the infrastructure for running the pipeline from the command line.
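As a rough sketch, activating an existing Science Pipelines installation and setting up lsst_distrib with EUPS typically looks like the following; the installation path is a placeholder, and the exact load script name depends on your shell and install method:

```shell
# Load the LSST environment (path is a placeholder for your installation)
source /path/to/lsst_stack/loadLSST.bash
# Set up the lsst_distrib metapackage with EUPS
setup lsst_distrib
# Confirm that ap_pipe is importable
python -c "import lsst.ap.pipe"
```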
Ingesting data files¶
Vera Rubin Observatory-style image processing typically operates on Butler repositories and does not directly interface with data files. lsst.ap.pipe is no exception. The process of turning a set of raw data files and corresponding calibration products into a format the Butler understands is called ingestion. Ingestion for the Generation 3 Butler is still being developed, and is outside the scope of the AP Pipeline.
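For orientation only, a typical Gen 3 ingestion sequence is sketched below using the command-line tools from lsst.daf.butler and lsst.obs.base. Because Gen 3 ingestion is still evolving, the exact subcommands and options may change; the repository path, raw-file path, and choice of DECam as the instrument are illustrative assumptions:

```shell
# Create an empty Gen 3 Butler repository (path is a placeholder)
butler create my_repo
# Register the instrument whose data will be ingested (DECam as an example)
butler register-instrument my_repo lsst.obs.decam.DarkEnergyCamera
# Ingest the raw science images from a local directory
butler ingest-raws my_repo /path/to/raw/files
# Add the curated calibrations shipped with the instrument's obs package
butler write-curated-calibrations my_repo lsst.obs.decam.DarkEnergyCamera
```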
Required data products¶
For the AP Pipeline to successfully process data, the following must be present in a Butler repository:
- Raw science images to be processed.
- Reference catalogs covering at least the area of the raw images. We recommend using Pan-STARRS for photometry and Gaia for astrometry.
- Calibration products (biases, flats, and possibly others, depending on the instrument).
- Template images for difference imaging. These are of type deepCoadd by default, but the AP Pipeline can be configured to use other types.
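One way to confirm that these inputs are present is to query the repository for the corresponding dataset types. The sketch below is illustrative: the repository path, collection names, and the exact dataset type names (particularly for reference catalogs) vary by instrument and data release:

```shell
# List raw science images in the repository (collection glob is an example)
butler query-datasets my_repo raw --collections "*"
# List coadd templates available for difference imaging
butler query-datasets my_repo deepCoadd --collections "*"
# List reference catalog shards (dataset type name is an assumption)
butler query-datasets my_repo gaia --collections refcats
```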
A sample dataset from the DECam HiTS survey that works with ap_pipe, packaged in the dataset framework format, is available as ap_verify_hits2015.
However, raw images from this dataset must be ingested.
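Ingesting those raws follows the same pattern as above; the paths below are placeholders, and the location of the raw files within the ap_verify_hits2015 package is an assumption:

```shell
# Ingest the sample dataset's raw images into an existing repository
# (repository path and raw directory are placeholders)
butler ingest-raws my_repo /path/to/ap_verify_hits2015/raw
```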
Please continue to the Pipeline Tutorial for more details about running the AP Pipeline and interpreting the results.