fluxConservingBrighterFatterCorrection

lsst.ip.isr.fluxConservingBrighterFatterCorrection(exposure, kernel, maxIter, threshold, applyGain, gains=None, correctionMode=True)

Apply brighter-fatter correction in place to the image.

This is a modified version of the algorithm found in lsst.ip.isr.isrFunctions.brighterFatterCorrection that conserves the image flux, resulting in improved correction of the cores of stars. The convolution has also been modified to mitigate edge effects.

Parameters:
exposure : lsst.afw.image.Exposure

Exposure to have brighter-fatter correction applied. Modified by this method.

kernel : numpy.ndarray

Brighter-fatter kernel to apply.

maxIter : scalar

Maximum number of correction iterations to run.

threshold : scalar

Convergence threshold in terms of the sum of absolute deviations between an iteration and the previous one.

applyGain : bool

If True, then the exposure values are scaled by the gain prior to correction.

gains : dict [str, float]

A dictionary, keyed by amplifier name, of the gains to use. If gains is None, the nominal gains in the amplifier object are used.

correctionMode : bool

If True (default), the function applies the correction for the brighter-fatter effect (BFE). If False, the code can instead be used to simulate the BFE (the sign of the correction is flipped, so the effect is added rather than removed).

Returns:
diff : float

Final difference between iterations achieved in correction.

iteration : int

Number of iterations used to calculate correction.
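
A minimal usage sketch follows; the kernel file, gain values, and iteration settings are illustrative assumptions rather than recommended defaults, and the exposure is assumed to come from earlier ISR processing:

import numpy as np
from lsst.ip.isr import fluxConservingBrighterFatterCorrection

# `exposure` is an lsst.afw.image.Exposure produced earlier in the ISR chain.
kernel = np.load("bf_kernel.npy")      # hypothetical path to a measured BF kernel
gains = {"C00": 1.62, "C01": 1.58}     # example gains keyed by amplifier name

diff, iteration = fluxConservingBrighterFatterCorrection(
    exposure,
    kernel,
    maxIter=10,
    threshold=1e-2,
    applyGain=True,
    gains=gains,
)
print(f"final diff = {diff:.3g} after {iteration} iterations")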

Notes

Modified version of lsst.ip.isr.isrFunctions.brighterFatterCorrection.

This correction takes a kernel that has been derived from flat field images to redistribute the charge. The gradient of the kernel is the deflection field due to the accumulated charge.

Given the original image I(x) and the kernel K(x), we can compute the corrected image Ic(x) using the following equation:

Ic(x) = I(x) + 0.5*d/dx(I(x)*d/dx(int( dy*K(x-y)*I(y))))

The improved algorithm applies the divergence theorem at this step to obtain a pixelised correction.
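
As a schematic illustration of the formula above (not the production implementation, which uses the pixelised divergence-theorem form), a one-dimensional numpy sketch of the continuous correction might look like this; the toy image and kernel values are assumptions:

import numpy as np

image = np.random.poisson(1000.0, size=64).astype(float)  # toy 1-D image I(x)
kernel = np.array([0.01, -0.02, 0.01])                     # toy BF kernel K(x)

smeared = np.convolve(image, kernel, mode="same")   # int dy K(x-y)*I(y)
gradient = np.gradient(smeared)                      # d/dx of the convolution
correction = 0.5 * np.gradient(image * gradient)     # 0.5 * d/dx(I(x) * ...)
corrected = image + correction                       # Ic(x)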

Because we use the measured counts instead of the incident counts, we apply the correction iteratively to reconstruct the original counts and the correction. We stop iterating when the summed difference between the current corrected image and the one from the previous iteration falls below the threshold. We do not require full convergence because the number of iterations needed would incur too large a computational cost. How we define the threshold still needs to be evaluated; the current default was shown to work reasonably well on a small set of images.
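
The iteration-and-threshold logic can be sketched as follows; apply_one_correction is a hypothetical stand-in for a single correction pass and is not part of the LSST API:

import numpy as np

def iterate_correction(measured, apply_one_correction, maxIter, threshold):
    # Start from the measured image and refine the corrected estimate.
    previous = measured.copy()
    diff = float("inf")
    iteration = 0
    for iteration in range(maxIter):
        corrected = measured + apply_one_correction(previous)
        diff = np.sum(np.abs(corrected - previous))  # summed absolute deviation
        previous = corrected
        if diff < threshold:
            break
    return diff, iteration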

Edges are handled in the convolution by padding. This is still not a physical model for the edge, but it avoids discontinuities in the correction.
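
One way to realise such padding, sketched under the assumption of a square 2-D kernel (the reflect mode and pad width here are illustrative, not necessarily the choices made in the LSST code):

import numpy as np
from scipy.signal import fftconvolve

def padded_convolve(image, kernel):
    # Pad by half the kernel width so the kernel never overhangs a hard edge.
    pad = kernel.shape[0] // 2
    padded = np.pad(image, pad, mode="reflect")
    convolved = fftconvolve(padded, kernel, mode="same")
    # Trim back to the original image size.
    return convolved[pad:image.shape[0] + pad, pad:image.shape[1] + pad]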

Author of modified version: Lance.Miller@physics.ox.ac.uk (see DM-38555).