ap_verify command-line reference
This page describes the command-line arguments and environment variables used by ap_verify.py. See Running ap_verify from the command line for an overview.
Signature and syntax
The basic call signature of ap_verify.py is:
ap_verify.py --dataset DATASET --output WORKSPACE
These two arguments are mandatory; all others are optional (though use of either --gen2 or --gen3 is highly recommended).
Status code
ap_verify.py returns 0 on success, and a non-zero value if there were any processing problems.
In --gen2 mode, the status code is the number of data IDs that could not be processed, as for command-line tasks. In both --gen2 and --gen3 modes, an uncaught exception may cause ap_verify.py to return an interpreter-dependent nonzero value instead of the above.
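A driver script can branch on this status in the usual way. In this sketch, `false` (which exits with status 1) stands in for a failing ap_verify.py run:

```shell
# Branch on the exit status of the pipeline run.
# 'false' stands in for: ap_verify.py --dataset HiTS2015 --output workspaces/hits2015/
if ! false; then
    msg="ap_verify reported one or more problems"
    echo "$msg"
fi
```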
Named arguments
Required arguments are --dataset and --output.
--id <dataId>

Butler data ID.

Specify the data ID to process. If using --gen2, this should use data ID syntax, such as --id "visit=12345 ccd=1..6 filter=g". If using --gen3, this should use dimension expression syntax, such as --id "visit=12345 and detector in (1..6) and band='g'". Consider using --data-query instead of --id for forward compatibility and consistency with Gen 3 pipelines.

Multiple copies of this argument are allowed. For compatibility with the syntax used by command-line tasks, --id with no argument processes all data IDs.

If this argument is omitted, all data IDs in the dataset will be processed.
-d, --data-query <dataId>

Butler data ID.

This option is identical to --id, and will become the primary data ID argument as Gen 2 is retired. It is recommended over --id for --gen3 runs.
--dataset <dataset_name>

Input dataset designation.

The input dataset is required for all ap_verify runs except when using --help.

The argument is a unique name for the dataset, which can be associated with a repository in the configuration file. See Datasets as input arguments for more information on dataset names. Allowed names can be queried using the --help argument.
--dataset-metrics-config <filename>

Input dataset-level metrics config. (Gen 2 only)

A config file containing a MetricsControllerConfig, which specifies which metrics are measured and sets any options. If this argument is omitted, config/default_dataset_metrics.py will be used.

Use --image-metrics-config to configure image-level metrics instead. For the Gen 3 equivalent of this option, see --pipeline. See also Configuring metrics for ap_verify.
--db, --db_url

Target Alert Production Database.

A URI string identifying the database in which to store source associations. The string must be in the format expected by lsst.dax.apdb.ApdbConfig.db_url, i.e. an SQLAlchemy connection string. The indicated database is created if it does not exist and creation is appropriate for the database type.

If this argument is omitted, ap_verify creates an SQLite database inside the directory indicated by --output.
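An SQLAlchemy connection string for a file-based SQLite database can be built as in this sketch; the workspace path and database file name here are illustrative assumptions, not ap_verify defaults:

```shell
# Build an SQLAlchemy URL pointing at an SQLite file inside the workspace.
# Hypothetical paths: 'workspaces/hits2015' and 'association.db' are examples.
workspace="workspaces/hits2015"
db_url="sqlite:///${workspace}/association.db"
echo "$db_url"
```

The resulting string could then be passed as --db "$db_url".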
--gen2
--gen3

Choose Gen 2 or Gen 3 processing.

These optional flags run either the Gen 2 pipeline (ApPipeTask) or the Gen 3 pipeline (apPipe.yaml). If neither flag is provided, the Gen 2 pipeline will be run.

Note: The current default is provided for backward compatibility with old scripts that assumed Gen 2 processing. The default will change to --gen3 once Gen 3 processing is officially supported by the Science Pipelines, at which point Gen 2 support will be deprecated. Until the default stabilizes, users should be explicit about which pipeline they wish to run.
-h, --help

Print help.

The help is equivalent to this documentation page, describing the command-line arguments.
-j <processes>, --processes <processes>

Number of processes to use.

When processes is larger than 1, the pipeline may use the Python multiprocessing module to parallelize processing of multiple datasets across multiple processors. In Gen 3 mode, data ingestion may also be parallelized.
--image-metrics-config <filename>

Input image-level metrics config. (Gen 2 only)

A config file containing a MetricsControllerConfig, which specifies which metrics are measured and sets any options. If this argument is omitted, config/default_image_metrics.py will be used.

Use --dataset-metrics-config to configure dataset-level metrics instead. For the Gen 3 equivalent of this option, see --pipeline. See also Configuring metrics for ap_verify.
--metrics-file <filename>

Output metrics file. (Gen 2 only)

The template for a file to contain metrics measured by ap_verify, in a format readable by the lsst.verify framework. The string {dataId} will be replaced with the data ID associated with the job, and its use is strongly recommended. If this argument is omitted, the output will go to files named ap_verify.{dataId}.verify.json in the user's working directory.
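The {dataId} substitution can be sketched as a plain string replacement; the rendered data ID string ("visit12345") is hypothetical and only illustrates the mechanism:

```shell
# Illustrative only: how the {dataId} token in a --metrics-file template
# expands to a concrete file name for one job.
template='ap_verify.{dataId}.verify.json'
dataId='visit12345'   # hypothetical rendering of a data ID
metrics_file=$(echo "$template" | sed "s/{dataId}/$dataId/")
echo "$metrics_file"
```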
--output <workspace_dir>

Output and intermediate product path.

The output argument is required for all ap_verify runs except when using --help.

The workspace will be created if it does not exist, and will contain both the input and output repositories required for processing the data. The path may be absolute or relative to the current working directory.
-p <filename>, --pipeline <filename>

Custom ap_verify pipeline. (Gen 3 only)

A pipeline definition file containing a custom verification pipeline. If this argument is omitted, pipelines/ApVerify.yaml will be used.

The most common use for a custom pipeline is adding or removing metrics to be run along with the AP pipeline.

Note: At present, ap_verify assumes that the provided pipeline is a superset of the AP pipeline; it will likely crash if any AP tasks are missing.

For the Gen 2 equivalent of this option, see --dataset-metrics-config and --image-metrics-config.