lsst.ctrl.mpexec

Command Line Scripts

The pipetask command is being ported from an argparse framework to a Click framework. During development the command implemented using Click is called pipetask2. At some point the current pipetask command will be removed and pipetask2 will be renamed to pipetask.

pipetask2

pipetask2 [OPTIONS] COMMAND [ARGS]...

Options

--log-level <log_level>

The Python log level to use.

--long-log

Make log messages appear in long format.

build

Build and optionally save pipeline definition.

This does not require input data to be specified.

pipetask2 build [OPTIONS]

Options

--log-level <log_level>

The Python log level to use.

--show <ITEM|ITEM=VALUE>

Dump various info to standard output. Possible items are: config, config=[Task::]<PATTERN> or config=[Task::]<PATTERN>:NOIGNORECASE to dump configuration fields possibly matching a given pattern and/or task label; history=<FIELD> to dump configuration history for a field, where the field name is specified as [Task::][SubTask.]Field; dump-config, dump-config=Task to dump the complete configuration for a task given its label, or for all tasks; pipeline to show pipeline composition; graph to show information about quanta; workflow to show information about quanta and their dependencies; tasks to show task composition.

-p, --pipeline <pipeline>

Location of a pipeline definition file in YAML format.

-t, --task <TASK[:LABEL]>

Task name to add to the pipeline; must be a fully qualified task name. The task name can be followed by a colon and a label name; if no label is given, the task base name (class name) is used as the label.
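As an illustration of the default-label rule, here is a minimal sketch (assuming the task name is a dotted Python path; the task names below are hypothetical, and this is not the actual pipetask implementation):

```python
def default_label(task_spec):
    """Derive the pipeline label for a -t/--task value.

    An explicit ':LABEL' suffix wins; otherwise the task base name
    (the class name, i.e. the last dotted component) is used.
    Illustrative sketch only, not the pipetask implementation.
    """
    name, sep, label = task_spec.partition(":")
    if sep:
        return label
    return name.rsplit(".", 1)[-1]

# Hypothetical task names, for illustration:
print(default_label("lsst.pipe.tasks.example.ExampleTask"))         # ExampleTask
print(default_label("lsst.pipe.tasks.example.ExampleTask:stage1"))  # stage1
```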

--delete <LABEL>

Delete task with given label from pipeline.

-c, --config <LABEL:NAME=VALUE>

Config override, as a key-value pair.

-C, --config-file <LABEL:FILE>

Configuration override file(s); applies to the task with the given label.

--order-pipeline

Order tasks in the pipeline based on their data dependencies; ordering is performed as the last step before saving or executing the pipeline.

-s, --save-pipeline <save_pipeline>

Location for storing resulting pipeline definition in YAML format.

--pipeline-dot <pipeline_dot>

Location for storing a GraphViz DOT representation of a pipeline.

-i, --instrument <instrument>

Add an instrument which will be used to load config overrides when defining a pipeline. This must be the fully qualified class name.

Notes:

--task, --delete, --config, --config-file, and --instrument action options can appear multiple times; all values are used, in order, left to right.

FILE reads command-line options from the specified file. Data may be distributed among multiple lines (e.g. one option per line). Data after # is treated as a comment and ignored. Blank lines and lines starting with # are ignored.
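The FILE comment and blank-line rules above can be sketched in a few lines of Python; this illustrates the stated rules, not the actual option-file parser:

```python
def read_options_file(text):
    """Split a pipetask options file into argument tokens.

    Text after '#' on a line is a comment; blank lines and lines
    starting with '#' are skipped; options may be spread over
    multiple lines. Illustrative sketch of the documented rules only.
    """
    args = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments, surrounding space
        if line:
            args.extend(line.split())
    return args

content = """
# common options
-p pipeline.yaml
--order-pipeline  # trailing comment
"""
print(read_options_file(content))
# ['-p', 'pipeline.yaml', '--order-pipeline']
```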

qgraph

Build and optionally save quantum graph.

pipetask2 qgraph [OPTIONS]

Options

--log-level <log_level>

The Python log level to use.

--show <ITEM|ITEM=VALUE>

Dump various info to standard output. Possible items are: config, config=[Task::]<PATTERN> or config=[Task::]<PATTERN>:NOIGNORECASE to dump configuration fields possibly matching a given pattern and/or task label; history=<FIELD> to dump configuration history for a field, where the field name is specified as [Task::][SubTask.]Field; dump-config, dump-config=Task to dump the complete configuration for a task given its label, or for all tasks; pipeline to show pipeline composition; graph to show information about quanta; workflow to show information about quanta and their dependencies; tasks to show task composition.

-p, --pipeline <pipeline>

Location of a pipeline definition file in YAML format.

-t, --task <TASK[:LABEL]>

Task name to add to the pipeline; must be a fully qualified task name. The task name can be followed by a colon and a label name; if no label is given, the task base name (class name) is used as the label.

--delete <LABEL>

Delete task with given label from pipeline.

-c, --config <LABEL:NAME=VALUE>

Config override, as a key-value pair.

-C, --config-file <LABEL:FILE>

Configuration override file(s); applies to the task with the given label.

--order-pipeline

Order tasks in the pipeline based on their data dependencies; ordering is performed as the last step before saving or executing the pipeline.

-s, --save-pipeline <save_pipeline>

Location for storing resulting pipeline definition in YAML format.

--pipeline-dot <pipeline_dot>

Location for storing a GraphViz DOT representation of a pipeline.

-i, --instrument <instrument>

Add an instrument which will be used to load config overrides when defining a pipeline. This must be the fully qualified class name.

-g, --qgraph <qgraph>

Location of a serialized quantum graph definition (pickle file). If this option is given, none of the input data options or pipeline-building options can be used.

--skip-existing

If all Quantum outputs already exist in the output RUN collection then that Quantum will be excluded from the QuantumGraph. Requires the run command's --extend-run flag to be set.

-q, --save-qgraph <save_qgraph>

Location for storing a serialized quantum graph definition (pickle file).

--save-single-quanta <save_single_quanta>

Format string of locations for storing individual quantum graph definitions (pickle files). The curly braces {} in the string will be replaced by the quantum number.
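The {} placeholder behaves like ordinary Python str.format substitution; a brief sketch with a hypothetical file-name template:

```python
# Hypothetical template passed to --save-single-quanta; the {} is
# replaced by the quantum number for each individual graph file.
template = "quantum-{}.pickle"
paths = [template.format(n) for n in range(3)]
print(paths)  # ['quantum-0.pickle', 'quantum-1.pickle', 'quantum-2.pickle']
```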

--qgraph-dot <qgraph_dot>

Location for storing GraphViz DOT representation of a quantum graph.

-b, --butler-config <butler_config>

Location of the gen3 butler/registry config file.

--input <COLL,DSTYPE:COLL>

Comma-separated names of the input collection(s). An entry may include a colon (:); the string before the colon is a dataset type name that restricts the search in that collection.
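A sketch of the --input grammar described above (comma-separated entries, each optionally prefixed with DSTYPE:); the collection names below are hypothetical, and the code illustrates the grammar rather than the real parser:

```python
def parse_input(value):
    """Split --input into (dataset_type, collection) pairs.

    dataset_type is None when no 'DSTYPE:' prefix is present.
    Illustrative sketch of the documented syntax only.
    """
    pairs = []
    for entry in value.split(","):
        dstype, sep, coll = entry.partition(":")
        pairs.append((dstype, coll) if sep else (None, entry))
    return pairs

# Hypothetical collection names:
print(parse_input("refcats,calexp:demo/run1"))
# [(None, 'refcats'), ('calexp', 'demo/run1')]
```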

-o, --output <COLL>

Name of the output CHAINED collection. This may either be an existing CHAINED collection to use as both input and output (incompatible with --input), or a new CHAINED collection created to include all inputs (requires --input). In both cases, the collection's children will start with an output RUN collection that directly holds all new datasets (see --output-run).

--output-run <COLL>

Name of the new output RUN collection. If not provided then --output must be provided, and a new RUN collection will be created by appending a timestamp to the value passed with --output. If this collection already exists then --extend-run must be passed.
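The timestamp-derived RUN name can be sketched as below; the separator and timestamp format here are assumptions for illustration and may differ from what pipetask actually produces:

```python
from datetime import datetime, timezone

def derive_output_run(output, now=None):
    """Append a UTC timestamp to the --output collection name.

    The '/' separator and 'YYYYMMDDTHHMMSSZ' format are assumptions
    for illustration; the real pipetask format may differ.
    """
    now = now or datetime.now(timezone.utc)
    return "{}/{}".format(output, now.strftime("%Y%m%dT%H%M%SZ"))

# Hypothetical CHAINED collection name:
print(derive_output_run("u/someone/demo"))
```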

--extend-run

Instead of creating a new RUN collection, insert datasets into either the one given by --output-run (if provided) or the first child collection of --output (which must be of type RUN).

--replace-run

Before creating a new RUN collection in an existing CHAINED collection, remove the first child collection (which must be of type RUN). This can be used to repeatedly write to the same (parent) collection during development, but it does not delete the datasets associated with the replaced run unless --prune-replaced is also passed. Requires --output, and incompatible with --extend-run.

--prune-replaced <prune_replaced>

Delete the datasets in the collection replaced by --replace-run, either just from the datastore ('unstore') or by removing them and the RUN completely ('purge'). Requires --replace-run.

Options: unstore | purge
-d, --data-query <QUERY>

User data selection expression.

Options marked with (f) are forwarded to the next subcommand if multiple subcommands are chained in the same command execution. Previous values may be overridden by passing new option values into the next subcommand.

run

Build and execute pipeline and quantum graph.

pipetask2 run [OPTIONS]

Options

--log-level <log_level>

The Python log level to use.

--debug

Enable debugging output using lsstDebug facility (imports debug.py).

--show <ITEM|ITEM=VALUE>

Dump various info to standard output. Possible items are: config, config=[Task::]<PATTERN> or config=[Task::]<PATTERN>:NOIGNORECASE to dump configuration fields possibly matching a given pattern and/or task label; history=<FIELD> to dump configuration history for a field, where the field name is specified as [Task::][SubTask.]Field; dump-config, dump-config=Task to dump the complete configuration for a task given its label, or for all tasks; pipeline to show pipeline composition; graph to show information about quanta; workflow to show information about quanta and their dependencies; tasks to show task composition.

-p, --pipeline <pipeline>

Location of a pipeline definition file in YAML format.

-t, --task <TASK[:LABEL]>

Task name to add to the pipeline; must be a fully qualified task name. The task name can be followed by a colon and a label name; if no label is given, the task base name (class name) is used as the label.

--delete <LABEL>

Delete task with given label from pipeline.

-c, --config <LABEL:NAME=VALUE>

Config override, as a key-value pair.

-C, --config-file <LABEL:FILE>

Configuration override file(s); applies to the task with the given label.

--order-pipeline

Order tasks in the pipeline based on their data dependencies; ordering is performed as the last step before saving or executing the pipeline.

-s, --save-pipeline <save_pipeline>

Location for storing resulting pipeline definition in YAML format.

--pipeline-dot <pipeline_dot>

Location for storing a GraphViz DOT representation of a pipeline.

-i, --instrument <instrument>

Add an instrument which will be used to load config overrides when defining a pipeline. This must be the fully qualified class name.

-g, --qgraph <qgraph>

Location of a serialized quantum graph definition (pickle file). If this option is given, none of the input data options or pipeline-building options can be used.

--skip-existing

If all Quantum outputs already exist in the output RUN collection then that Quantum will be excluded from the QuantumGraph. Requires the run command's --extend-run flag to be set.

-q, --save-qgraph <save_qgraph>

Location for storing a serialized quantum graph definition (pickle file).

--save-single-quanta <save_single_quanta>

Format string of locations for storing individual quantum graph definitions (pickle files). The curly braces {} in the string will be replaced by the quantum number.

--qgraph-dot <qgraph_dot>

Location for storing GraphViz DOT representation of a quantum graph.

-b, --butler-config <butler_config>

Location of the gen3 butler/registry config file.

--input <COLL,DSTYPE:COLL>

Comma-separated names of the input collection(s). An entry may include a colon (:); the string before the colon is a dataset type name that restricts the search in that collection.

-o, --output <COLL>

Name of the output CHAINED collection. This may either be an existing CHAINED collection to use as both input and output (incompatible with --input), or a new CHAINED collection created to include all inputs (requires --input). In both cases, the collection's children will start with an output RUN collection that directly holds all new datasets (see --output-run).

--output-run <COLL>

Name of the new output RUN collection. If not provided then --output must be provided, and a new RUN collection will be created by appending a timestamp to the value passed with --output. If this collection already exists then --extend-run must be passed.

--extend-run

Instead of creating a new RUN collection, insert datasets into either the one given by --output-run (if provided) or the first child collection of --output (which must be of type RUN).

--replace-run

Before creating a new RUN collection in an existing CHAINED collection, remove the first child collection (which must be of type RUN). This can be used to repeatedly write to the same (parent) collection during development, but it does not delete the datasets associated with the replaced run unless --prune-replaced is also passed. Requires --output, and incompatible with --extend-run.

--prune-replaced <prune_replaced>

Delete the datasets in the collection replaced by --replace-run, either just from the datastore ('unstore') or by removing them and the RUN completely ('purge'). Requires --replace-run.

Options: unstore | purge
-d, --data-query <QUERY>

User data selection expression.

--clobber-partial-outputs

Remove incomplete outputs from previous execution of the same quantum before new execution.

--do-raise

Raise an exception on error; otherwise log a message and continue.

--profile <profile>

Dump cProfile statistics to the given file name.

-j, --processes <processes>

Number of processes to use.

--timeout <timeout>

Timeout for multiprocessing; maximum wall time (sec).

--fail-fast

Stop processing at the first error; the default is to process as many tasks as possible.

--graph-fixup <graph_fixup>

Name of the class or factory method which makes an instance used for execution graph fixup.

--skip-init-writes

Do not write collection-wide 'init output' datasets (e.g. schemas).

--init-only

Do not actually run; just register dataset types and/or save init outputs.

--register-dataset-types

Register DatasetTypes that do not already exist in the Registry.

--no-versions

Do not save or check package versions.

Options marked with (f) are forwarded to the next subcommand if multiple subcommands are chained in the same command execution. Previous values may be overridden by passing new option values into the next subcommand.

Contributing

lsst.ctrl.mpexec is developed at https://github.com/lsst/ctrl_mpexec. You can find Jira issues for this module under the ctrl_mpexec component.

Python API reference

lsst.ctrl.mpexec Package

Functions

graph2dot(qgraph, file) Convert QuantumGraph into GraphViz digraph.
makeParser([fromfile_prefix_chars, parser_class]) Make instance of command line parser for CmdLineFwk.
pipeline2dot(pipeline, file) Convert Pipeline into GraphViz digraph.

Classes

CmdLineFwk() PipelineTask framework which executes tasks from command line.
ExecutionGraphFixup Interface for classes which update quantum graphs before execution.
MPGraphExecutor(numProc, timeout, …[, …]) Implementation of QuantumGraphExecutor using same-host multiprocess execution of Quanta.
MPGraphExecutorError Exception class for errors raised by MPGraphExecutor.
MPTimeoutError Exception raised when task execution times out.
PreExecInit(butler, taskFactory[, skipExisting]) Initialization of registry for QuantumGraph execution.
QuantumExecutor Class which abstracts execution of a single Quantum.
QuantumGraphExecutor Class which abstracts QuantumGraph execution.
SingleQuantumExecutor(taskFactory[, …]) Executor class which runs one Quantum at a time.
TaskFactory Class instantiating PipelineTasks.