# Getting started tutorial part 6: measuring sources¶

In this step of the tutorial series you’ll measure the coadditions you assembled in part 5 to build catalogs of stars and galaxies. This is the measurement strategy:

1. Detect sources in the coadded images.
2. Merge the detections from the multiple bands.
3. Deblend and measure sources on the coadds.
4. Merge the per-band measurements into a reference catalog.
5. Run forced photometry using the reference catalog positions.

## Set up¶

Pick up your shell session where you left off in part 5. For convenience, start in the top directory of the example git repository.

```
cd $RC2_SUBSET_DIR
```

The lsst_distrib package also needs to be set up in your shell environment. See Setting up installed LSST Science Pipelines for details on doing this.

## Run detection pipeline task¶

Processing will be done in two blocks with two different pipelines. The first will do steps 1 through 4 from the introduction; the end result will be calibrated coadd object measurements and calibrated coadd exposures. The fifth and final step is executed in a later section. The following command executes steps 1 through 4, and more information on each of the steps is included in the sections following this code block.

```
pipetask run -b SMALL_HSC/butler.yaml \
    -d "tract = 9813 AND skymap = 'hsc_rings_v1' AND patch in (38, 39, 40, 41)" \
    -p 'pipelines/DRP.yaml#coadd_measurement' \
    -i u/$USER/coadds \
    --register-dataset-types \
    -o u/$USER/coadd_meas
```

Notice that since this task operates on coadds, we can select the coadds using the tract and patch data ID keys. In past sections, the examples left off the -d argument in order to process all available data. This example, however, selects just four of the patches. Some algorithms are sensitive to how images are arranged on the sky: for example, some expect multiple images to overlap, or multi-band coverage. These four patches have coverage from all 40 visits in the tutorial repository, which means less fine-tuning of configurations is needed, and we can process these patches just as the large-scale HSC processing is done. As with previous examples, the outputs will go in a collection placed under a namespace defined by your username.

### Detecting sources in coadded images¶

To start, detect sources in the coadded images to take advantage of their depth and high signal-to-noise ratio. The detection subset is responsible for producing calibrated measurements from the input coadds. Detection is done on each band and patch separately. The resulting datasets are the deepCoadd_det detections and the deepCoadd_calexp calibrated coadd exposures.

### Merging multi-band detection catalogs¶

Merging the detections from the multiple bands used to produce the coadds allows later steps to use multi-band information in their processing, e.g. deblending. The mergeDetections subset creates a deepCoadd_mergeDet dataset, which is a consistent table of sources across all filters.

### Deblending and measuring source catalogs on coadds¶

Seeded by the deepCoadd_mergeDet catalog, the deblender works on each detection to find the flux in each component. Because it has information from multiple bands, the deblender can use color information to help it work out how to separate the flux into different components. See the SCARLET paper for further reading. The deblend subset produces the deepCoadd_deblendedFlux data product.
The measure subset is responsible for measuring object properties on all of the deblended children produced by the deblender. This produces the deepCoadd_meas catalog data product, with flux and shape measurement information for each object. You’ll see how to access these tables later.

### Merging multi-band source catalogs from coadds¶

After measurement, the objects deblended and measured in each single band can again be merged into a single catalog. Merging the single-band catalogs into one multi-band catalog allows for more complete and consistent multi-band photometry: the same source is measured in multiple bands at a fixed position (the forced photometry method) rather than fitting the source’s location individually in each band. For forced photometry you want to use the best position measurement for each source, which could come from different filters depending on the source. We call the filter that best measures a source the reference filter. The mergeMeasurements subset creates a deepCoadd_ref dataset. This is the seed catalog for computing forced photometry.

## Running forced photometry on coadds¶

Now you have accurate positions for all detected sources in the coadds. Re-measure the coadds using these fixed source positions (the forced photometry method) to create the best possible photometry of sources in your coadds:

```
pipetask run -b SMALL_HSC/butler.yaml \
    -d "tract = 9813 AND skymap = 'hsc_rings_v1' AND patch in (38, 39, 40, 41)" \
    -p 'pipelines/DRP.yaml#forced_objects' \
    -i u/$USER/coadd_meas \
    --register-dataset-types \
    -o u/$USER/objects
```
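The core idea of forced photometry can be sketched in a few lines (a toy illustration only, not the pipeline's measurement code; the images, aperture, and function names here are invented for the example): flux is summed at the *same* fixed position, taken from the reference catalog, in every band, so per-band flux ratios directly trace the source's color.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_image(center, flux, size=50, sigma=1.5):
    """A single Gaussian point source plus a little background noise."""
    y, x = np.mgrid[:size, :size]
    img = flux * np.exp(-((x - center[0]) ** 2 + (y - center[1]) ** 2) / (2 * sigma**2))
    return img + rng.normal(0, 0.01, img.shape)

def forced_aperture_flux(image, center, radius=5):
    """Sum pixels within `radius` of the fixed reference position."""
    y, x = np.mgrid[: image.shape[0], : image.shape[1]]
    mask = (x - center[0]) ** 2 + (y - center[1]) ** 2 <= radius**2
    return image[mask].sum()

# Position comes from the reference catalog; it is NOT re-fit per band.
ref_position = (25, 25)
images = {"g": make_image(ref_position, flux=3.0),
          "i": make_image(ref_position, flux=9.0)}

fluxes = {band: forced_aperture_flux(img, ref_position) for band, img in images.items()}
print(fluxes)
```

Because both bands are measured at an identical position, the i-to-g flux ratio recovers the 3:1 color that was put into the simulated source.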


As above, this selects just the patches that have full coverage.

The forced_objects pipeline subset does several things:

1. Forced photometry on the coadds, producing the deepCoadd_forced_src dataset
2. Forced photometry on the input single-frame calibrated exposures, producing the forced_src dataset
3. Combining all object-level forced measurements into a single tract-scale catalog, the objectTable_tract dataset
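The final combining step can be pictured as a join keyed on a shared object ID (a minimal sketch; the column names and IDs below are illustrative and not the real objectTable_tract schema): per-band forced measurements for the same object become one wide row.

```python
# Per-band forced measurements, keyed by a shared object ID (toy values).
g_forced = {1001: 3.2, 1002: 5.1}   # objectId -> g-band flux
i_forced = {1001: 9.0, 1002: 4.7}   # objectId -> i-band flux

# Join on objectId: one row per object, one column per band's measurement.
object_table = {
    obj_id: {"g_flux": g_forced.get(obj_id), "i_flux": i_forced.get(obj_id)}
    for obj_id in sorted(set(g_forced) | set(i_forced))
}
print(object_table[1001])  # {'g_flux': 3.2, 'i_flux': 9.0}
```

The real pipeline does this at tract scale with many more columns, but the shape of the result is the same: one row per object with all bands' measurements side by side.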

## Wrap up¶

In this tutorial, you’ve created forced photometry catalogs of sources in coadded images. Here are some key takeaways:

• Forced photometry is a method of measuring sources in several bandpasses using a common source list.

Continue this tutorial series in part 7 where you will analyze and plot the source catalogs that you’ve just measured.