Prerequisites

  1. LSST software stack.

  2. Shared filesystem for data.

  3. Shared database.

    SQLite3 is fine for small runs like ci_hsc_gen3 if you have a POSIX filesystem. For larger runs, use PostgreSQL (see the sketch after this list).

  4. A workflow management service.

    Currently, two workflow management services are supported: HTCondor’s DAGMan and Pegasus WMS. Both of them require an HTCondor cluster. NCSA hosts a few such clusters; see this page for details.

  5. HTCondor’s Python bindings (if using HTCondor) or Pegasus WMS.
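
If you choose PostgreSQL, the registry database is selected in the Butler configuration when the repository is created. Below is a minimal sketch, assuming the butler create --seed-config mechanism; the file name and connection string are illustrative placeholders for your site, not real endpoints:

# pg-seed.yaml (hypothetical file name), used for example as:
#   butler create --seed-config pg-seed.yaml DATA_REPO
registry:
  db: postgresql+psycopg2://dbhost.example.org:5432/lsst_registry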

Installing Batch Processing Service

Starting from LSST Stack version w_2020_45, the package providing the Batch Processing Service, ctrl_bps, comes with lsst_distrib. However, if you’d like to try out its latest features, you may install a bleeding-edge version just as you would any other LSST package:

git clone https://github.com/lsst-dm/ctrl_bps
cd ctrl_bps
setup -k -r .
scons

Creating Butler repository

You’ll need a pre-existing Butler dataset repository containing all the input files needed for your run. This repository needs to be on a filesystem shared among all the compute resources (e.g., submit and compute nodes) you use during your run.

Note

Keep in mind, though, that you don’t need to bootstrap a dataset repository for every BPS run. You only need to do it when the Gen3 data definition language (DDL) changes, when you want to start a repository from scratch, and possibly when you want to add or change inputs in the repository (depending on the inputs and the flexibility of the bootstrap scripts).

For testing purposes, you can use the pipelines_check package to set up your own Butler dataset repository. To make that repository, follow the usual steps for installing an LSST package:

git clone https://github.com/lsst/pipelines_check
cd pipelines_check
git checkout w_2020_45  # check out the branch matching the stack version you are using
setup -k -r .
scons

Defining a submission

BPS configuration files are YAML files with some reserved keywords and some special features. They are meant to be syntactically flexible to let users figure out what works best for them. The syntax and features of a BPS configuration file are described in greater detail in BPS configuration file. Below is just a minimal example to get you going.

Several groups of information are needed to define a submission to the Batch Production Service: the pipeline definition, the payload (information about the data in the run), and the submission and runtime configuration.

Describe a pipeline to BPS by telling it where to find either the pipeline YAML file (recommended)

pipelineYaml: "${OBS_SUBARU_DIR}/pipelines/DRP.yaml:processCcd"

or a pre-made file containing a serialized QuantumGraph, for example

qgraphFile: pipelines_check_w_2020_45.qgraph

Warning

The file with a serialized QuantumGraph is not portable. The file must be created by the same stack that is used when running BPS, and it can only be used on a machine with the same environment.
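
If you do want a pre-made QuantumGraph, one can be generated with pipetask qgraph, mirroring the createQuantumGraph setting shown later in Listing 2 (the values below are taken from that example):

pipetask qgraph -d "exposure=903342 AND detector=10" \
    -b ${PIPELINES_CHECK_DIR}/DATA_REPO/butler.yaml \
    -i HSC/calib,HSC/raw/all,refcats \
    -p ${OBS_SUBARU_DIR}/pipelines/DRP.yaml:processCcd \
    -q pipelines_check_w_2020_45.qgraph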

The payload information should be familiar, too, as it is mostly the information normally used on the pipetask command line (input collections, output collections, etc.).
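
For instance, the payload values in Listing 2 below correspond roughly to a pipetask invocation like this (a sketch only; the output run name is illustrative):

pipetask run -b ${PIPELINES_CHECK_DIR}/DATA_REPO/butler.yaml \
    -i HSC/calib,HSC/raw/all,refcats \
    --output-run "u/${USER}/pipelines_check/20201111T13h34m08s" \
    -d "exposure=903342 AND detector=10"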

The remaining information tells BPS which workflow management system is being used, how to convert Datasets and Pipetasks into compute jobs, and what resources those compute jobs need.

Listing 2 ${CTRL_BPS_DIR}/doc/lsst.ctrl.bps/pipelines_check.yaml
pipelineYaml: "${OBS_SUBARU_DIR}/pipelines/DRP.yaml:processCcd"
templateDataId: "{tract}_{patch}_{band}_{visit}_{exposure}_{detector}"
project: dev
campaign: quick
submitPath: ${PWD}/submit/{outCollection}
computeSite: ncsapool
requestMemory: 2048
requestCpus: 1

# Make sure these values correspond to ones in the bin/run_demo.sh's
# pipetask command line.
payload:
  runInit: true
  payloadName: pcheck
  butlerConfig: ${PIPELINES_CHECK_DIR}/DATA_REPO/butler.yaml
  inCollection: HSC/calib,HSC/raw/all,refcats
  outCollection: "u/${USER}/pipelines_check/{timestamp}"
  dataQuery: exposure=903342 AND detector=10

pipetask:
  pipetaskInit:
    runQuantumCommand: "${CTRL_MPEXEC_DIR}/bin/pipetask --long-log run -b {butlerConfig} -i {inCollection} --output-run {outCollection} --init-only --skip-existing --register-dataset-types --qgraph {qgraphFile} --clobber-partial-outputs --no-versions"
  assembleCoadd:
    requestMemory: 8192

wmsServiceClass: lsst.ctrl.bps.wms.htcondor.htcondor_service.HTCondorService
clusterAlgorithm: lsst.ctrl.bps.quantum_clustering_funcs.single_quantum_clustering
createQuantumGraph: '${CTRL_MPEXEC_DIR}/bin/pipetask qgraph -d "{dataQuery}" -b {butlerConfig} -i {inCollection} -p {pipelineYaml} -q {qgraphFile} --qgraph-dot {qgraphFile}.dot'
runQuantumCommand: "${CTRL_MPEXEC_DIR}/bin/pipetask --long-log run -b {butlerConfig} -i {inCollection} --output-run {outCollection} --extend-run --skip-init-writes --qgraph {qgraphFile} --clobber-partial-outputs --no-versions"

Submitting a run

Submit a run for execution with

bps submit example.yaml

If the submission was successful, it will output something like this:

Submit dir: /home/jdoe/tmp/bps/submit/shared/pipecheck/20201111T13h34m08s
Run Id: 176261

Adding the --log-level INFO option to the command line outputs more information, especially for those wanting to watch how long the various submission stages take.
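
For example (exactly where the option is accepted may depend on your ctrl_bps version; this sketch assumes it goes on the bps command itself):

bps --log-level INFO submit example.yaml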

Checking status

To check the status of the submitted run, you can use tools provided by HTCondor or Pegasus, for example, condor_q or pegasus-status. To get more pipeline-oriented information, use

bps report

which should display a run summary similar to the one below

X      STATE  %S       ID OPERATOR   PRJ   CMPGN    PAYLOAD    RUN
-----------------------------------------------------------------------------------------------------------------------
     RUNNING   0   176270 jdoe       dev   quick    pcheck     shared_pipecheck_20201111T14h59m26s

To see results regarding past submissions, use bps report --hist X where X is the number of past days to look at (can be a fraction). For example

$ bps report --hist 1
        STATE  %S       ID OPERATOR   PRJ   CMPGN    PAYLOAD    RUN
-----------------------------------------------------------------------------------------------------------------------
   FAILED   0   176263 jdoe       dev   quick    pcheck     shared_pipecheck_20201111T13h51m59s
SUCCEEDED 100   176265 jdoe       dev   quick    pcheck     shared_pipecheck_20201111T13h59m26s

Use bps report --help to see all currently supported options.

Terminating running jobs

There currently isn’t a BPS command for terminating jobs. Instead, you can use condor_rm or pegasus-remove. Both take the runId printed by bps submit. For example

condor_rm 176270       # HTCondor
pegasus-remove 176270  # Pegasus WMS

bps report also prints the runId usable by condor_rm.

If you want to remove all of the runs that you currently have submitted, you can do the following regardless of whether you are using the HTCondor or the Pegasus WMS plugin:

condor_rm <username>

BPS configuration file

A configuration file can include other configuration files using includeConfigs with YAML array syntax. For example

includeConfigs:
  - bps-operator.yaml
  - bps-site-htcondor.yaml

Values in the configuration file can be defined in terms of other values using {key} syntax, for example

patch: 69
dataQuery: patch = {patch}

Environment variables can be used as well with ${var} syntax, for example

submitRoot: ${PWD}/submit
runQuantumExec: ${CTRL_MPEXEC_DIR}/bin/pipetask

Note

Note the difference between using an environment variable, e.g. ${foo}, which has a $ (dollar sign), and a plain config variable, {foo}.

Section names can be used to store default settings at that concept level, which can be overridden by settings at more specific concept levels. Currently the order from most specific to most general is: payload, pipetask, and site.
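
As an illustration (mirroring the values in Listing 2), a general default can be overridden for a single pipetask:

requestMemory: 2048      # general default for all jobs

pipetask:
  assembleCoadd:
    requestMemory: 8192  # more specific, so it wins for assembleCoadd jobs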

payload
Description of the submission, including the definition of inputs.
pipetask
Subsections are pipetask labels where one can override or set runtime settings for particular pipetasks (currently no Quantum-specific settings).
site

Settings for specific sites can be set here. Subsections are site names which are matched to computeSite. The following are examples of specifying values needed to match jobs to glideins.

HTCondor plugin example:

site:
  acsws02:
    profile:
      condor:
        requirements: "(GLIDEIN_NAME == &quot;test_gname&quot;)"
        +GLIDEIN_NAME: "test_gname"

Pegasus plugin example:

site:
  acsws02:
    arch: x86_64
    os: LINUX
    directory:
      shared-scratch:
        path: /work/shared-scratch/${USER}
        file-server:
          operation: all
          url: file:///work/shared-scratch/${USER}
    profile:
      pegasus:
        style: condor
        auxillary.local: true
      condor:
        universe: vanilla
        getenv: true
        requirements: '(ALLOCATED_NODE_SET == "${NODESET}")'
        +JOB_NODE_SET: '"${NODESET}"'
      dagman:
        retry: 0
      env:
        PEGASUS_HOME: /usr/local/pegasus/current

Supported settings

butlerConfig
Location of the Butler configuration file needed by BPS to create the run collection entry in the Butler dataset repository.
campaign
A label used to group submissions together. May be used for grouping submissions for a particular deliverable (e.g., a JIRA issue number, a milestone, etc.). Can be used as a variable in the output collection name. Displayed in bps report output.
clusterAlgorithm
Algorithm used to group Quanta into single Python executions that can share an in-memory datastore. Currently it just uses single-Quantum executions, but this is here for future growth.
computeSite
Specification of the compute site where the workflow is run and which site settings to use (in bps prepare).
createQuantumGraph
The command line specification for generating QuantumGraphs.
operator
Name of the Operator who made a submission. Displayed in bps report output. Defaults to the Operator’s username.
pipelineYaml
Location of the YAML file describing the science pipeline.
project
Another label for groups of submissions. May be used to differentiate test submissions from production submissions. Can be used as a variable in the output collection name. Displayed in bps report output.
requestMemory, optional
Amount of memory, in MB, a single Quantum execution of a particular pipetask will need (e.g., 2048).
requestCpus, optional
Number of cpus that a single Quantum execution of a particular pipetask will need (e.g., 1).
uniqProcName
Used when giving names to graphs, default names to output files, etc. If not specified by the user, BPS tries to use outCollection with ‘/’ replaced by ‘_’.
submitPath
Directory where the output files of bps prepare go.
runQuantumCommand
The command line specification for running a Quantum. Must start with executable name (a full path if using HTCondor plugin) followed by options and arguments. May contain other variables defined in the configuration file.
runInit

Whether to add a pipetask --init-only job to the workflow. If true, BPS expects there to be a pipetask section called pipetaskInit which contains the runQuantumCommand for the pipetask --init-only job. For example

payload:
  runInit: true

pipetask:
  pipetaskInit:
    runQuantumCommand: "${CTRL_MPEXEC_DIR}/bin/pipetask --long-log run -b {butlerConfig} -i {inCollection} --output-run {outCollection} --init-only --skip-existing --register-dataset-types --qgraph {qgraphFile} --no-versions"
    requestMemory: 2048
templateDataId
Template to use when creating job names (which the HTCondor plugin also uses for job output filenames).
wmsServiceClass

Workload Management Service plugin to use. For example

wmsServiceClass: lsst.ctrl.bps.wms.htcondor.htcondor_service.HTCondorService  # HTCondor

Reserved keywords

qgraphFile

Name of the file with a pre-made, serialized QuantumGraph.

Such a file is an alternative way to describe a science pipeline. However, unlike the pipeline YAML specification, it is currently not portable.

timestamp
Created automatically by BPS at submit time; it can be used in the user specification of other values (e.g., in output collection names so that one can repeatedly submit the same BPS configuration without changing anything).

Note

Any values shown in the example configuration file but not covered in this section are examples of user-defined variables (e.g., inCollection) and are not required by BPS.

Troubleshooting

Where is stdout/stderr from pipeline tasks?

For now, stdout/stderr can be found in files in the submit run directory.

HTCondor

The names are of the format:

<run submit dir>/jobs/<task label>/<quantum graph nodeNumber>_<task label>_<templateDataId>[.<htcondor job id>].[sub|out|err|log]
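
A hypothetical example (every piece of the path below is illustrative, not real output):

submit/u/jdoe/pipelines_check/20201111T13h34m08s/jobs/characterizeImage/0002_characterizeImage_903342_10.176270.0.err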

Pegasus WMS

Pegasus creates its own directory structure and does its own wrapping of pipetask output.

You can dig around in the submit run directory here too, but try the pegasus-analyzer command first.

Advanced debugging

Here are some advanced debugging tips:

  1. If bps submit is taking a long time, it is probably spending the time on QuantumGraph generation. The QuantumGraph generation command line and output will be in quantumGraphGeneration.out in the submit run directory, e.g. submit/shared/pipecheck/20200806T00h22m26s/quantumGraphGeneration.out.

  2. Check the *.dag.dagman.out for errors (in particular for ERROR: submit attempt failed); see the grep sketch after this list.

  3. The Pegasus runId is the submit subdirectory where the underlying DAG lives. If you’ve forgotten the Pegasus runId needed by the Pegasus commands, try one of the following:

    1. It’s the submit directory in which the braindump.txt file lives. If you know the submit root directory, use find to get a list of directories to try. (Note that many of these directories could be for old runs that are no longer running.)

      find submit  -name "braindump.txt"
      
    2. Use HTCondor commands to find submit directories for running jobs

      condor_q -constraint 'pegasus_wf_xformation == "pegasus::dagman"' -l | grep Iwd
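
For step 2 above, a quick way to scan all DAGMan logs under a submit root for failed submit attempts (the submit path is illustrative):

find submit -name "*.dag.dagman.out" -exec grep -l "ERROR: submit attempt failed" {} +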