File LeastSquares.h

namespace lsst


namespace afw
namespace math
class LeastSquares
#include <LeastSquares.h>

Solver for linear least-squares problems.

Linear least-squares problems are defined as finding the vector \(x\) that minimizes \(\left|A x - b\right|_2\), with the number of rows of \(A\) generally greater than the number of columns. We call \(A\) the design matrix, \(b\) the data vector, and \(x\) the solution vector. When the rank of \(A\) is equal to the number of columns, we can obtain the solution using the normal equations:

\[ A^T A x = A^T b \]
If \(A\) is not full-rank, the problem is underconstrained, and we usually wish to solve the minimum-norm least-squares problem, which also minimizes \(|x|_2\). This can be done by computing a pseudo-inverse of \(A\) using an SVD or complete orthogonal factorization of \(A\), or by performing an Eigen decomposition of \(A^T A\).

This class can be constructed from the design matrix and data vector, or from the two terms in the normal equations (below, we call the matrix \(A^TA\) the Fisher matrix, and the vector \(A^T b\) simply the “right-hand side” (RHS) vector). If initialized with the design matrix and data vector, we can still use the normal equations to solve it. The solution via the normal equations is more susceptible to round-off error, but it is also faster, and if the normal equation terms can be computed directly it can be significantly less expensive in terms of memory. The Fisher matrix is a symmetric matrix, and it should be exactly symmetric when provided as input, because which triangle will be used is an implementation detail that is subject to change and may depend on the storage order.

The solver always operates in double precision, and returns all results in double precision. However, it can be initialized from single-precision inputs. Even so, it is usually not a good idea to construct the normal equations themselves in single precision, even when the data are single precision.

Public Types

enum Factorization

Enumeration of the factorization strategies supported by the solver.

Values:

NORMAL_EIGENSYSTEM

Use the normal equations with a symmetric Eigensystem decomposition.

This method is fully robust and computes the minimum-norm solution when the problem does not have full rank. It is affected slightly more by round-off error than the SVD method, but because it can handle singular matrices this usually isn’t a problem.

NORMAL_CHOLESKY

Use the normal equations with a Cholesky decomposition.

While this method uses a robust \(LDL^T\) decomposition that does not require square roots, it is not appropriate for problems that do not have full rank, and cannot be used to determine whether a problem has full rank. It is the fastest decomposition.

DIRECT_SVD

Use a thin singular value decomposition of the design matrix.

This method is the most robust and computes the minimum-norm solution when the problem does not have full rank. However, it is also the slowest and is not available when the solver is initialized with the normal equations.

Public Functions

template<typename T1, typename T2, int C1, int C2>
void setDesignMatrix(ndarray::Array<T1, 2, C1> const &design, ndarray::Array<T2, 1, C2> const &data)

Reset the design matrix and data vector given as ndarrays; dimension must not change.

template<typename D1, typename D2>
void setDesignMatrix(Eigen::MatrixBase<D1> const &design, Eigen::MatrixBase<D2> const &data)

Reset the design matrix and data vector given as Eigen objects; dimension must not change.

template<typename T1, int C1>
void setDesignMatrix(ndarray::Array<T1, 2, C1> const &design)

Reset the design matrix given as an ndarray; dimension and data are not changed.

template<typename D1, typename D2>
void setDesignMatrix(Eigen::MatrixBase<D1> const &design)

Reset the design matrix given as an Eigen object; dimension and data are not changed.

template<typename T1, typename T2, int C1, int C2>
void setNormalEquations(ndarray::Array<T1, 2, C1> const &fisher, ndarray::Array<T2, 1, C2> const &rhs)

Reset the terms in the normal equations given as ndarrays; dimension must not change.

template<typename D1, typename D2>
void setNormalEquations(Eigen::MatrixBase<D1> const &fisher, Eigen::MatrixBase<D2> const &rhs)

Reset the terms in the normal equations given as Eigen objects; dimension must not change.

void setThreshold(double threshold)

Set the threshold used to determine when to truncate Eigenvalues.

The rank of the matrix is determined by comparing the product of this threshold and the first (largest) element of the array returned by getDiagnostic() to all other elements of that array. Elements smaller than this product are ignored and reduce the rank.

In the DIRECT_SVD case, the diagnostic array contains the singular values of the design matrix, while in the NORMAL_EIGENSYSTEM case the diagnostic array holds the Eigenvalues of the Fisher matrix, and the latter are the square of the former. The default threshold is the same (std::numeric_limits<double>::epsilon()) in both cases, reflecting the fact that using the normal equations squares the condition number of the problem.

The NORMAL_CHOLESKY method does not use the threshold and assumes the problem is full-rank.

double getThreshold() const

Get the threshold used to determine when to truncate Eigenvalues.

ndarray::Array<double const, 1, 1> getSolution()

Return the vector solution to the least squares problem.

The returned array is owned by the LeastSquares object and may be modified in-place by future calls to LeastSquares member functions, so it’s best to promptly copy the result elsewhere.

If you want an Eigen object instead, just use ndarray::asEigenMatrix(getSolution()).

The solution is cached the first time this member function is called, and will be reused unless the matrices are reset or the threshold is changed.

ndarray::Array<double const, 2, 2> getCovariance()

Return the covariance matrix of the least squares problem.

The returned array is owned by the LeastSquares object and may be modified in-place by future calls to LeastSquares member functions, so it’s best to promptly copy the result elsewhere.

If you want an Eigen object instead, just use ndarray::asEigenMatrix(getCovariance()).

The covariance is cached the first time this member function is called, and will be reused unless the matrices are reset or the threshold is changed.

ndarray::Array<double const, 2, 2> getFisherMatrix()

Return the Fisher matrix (inverse of the covariance) of the parameters.

Note that the Fisher matrix is exactly the same as the matrix on the left-hand side of the normal equations.

The returned array is owned by the LeastSquares object and may be modified in-place by future calls to LeastSquares member functions, so it’s best to promptly copy the result elsewhere.

If you want an Eigen object instead, just use ndarray::asEigenMatrix(getFisherMatrix()).

ndarray::Array<double const, 1, 1> getDiagnostic(Factorization factorization)

Return a factorization-dependent vector that can be used to characterize the stability of the solution.

The returned array’s size is always equal to getDimension(). When getRank() is less than getDimension(), some elements of the array were considered effectively zero (see setThreshold).

For the NORMAL_EIGENSYSTEM method, this is the vector of Eigenvalues of the Fisher matrix, including those rejected as being below the threshold, sorted in descending order (all values are nonnegative). This is the square of the singular values of the design matrix, and is only available when the factorization is either NORMAL_EIGENSYSTEM or DIRECT_SVD.

For the DIRECT_SVD method, this is the vector of singular values of the design matrix, including those rejected as being below the threshold, sorted in descending order (all values are nonnegative). This is the square root of the Eigenvalues of the Fisher matrix, and is only available when the factorization is either NORMAL_EIGENSYSTEM or DIRECT_SVD.

For the NORMAL_CHOLESKY method, this is \(D\) in the pivoted Cholesky factorization \(P L D L^T P^T\) of the Fisher matrix. This does not provide a reliable way to test the stability of the problem, but it does provide a way to compute the determinant of the Fisher matrix. It is only available when the factorization is NORMAL_CHOLESKY.

int getDimension() const

Return the number of parameters.

int getRank() const

Return the rank of the problem (number of nonzero Eigenvalues).

The returned value is always the same as getDimension() when the factorization is NORMAL_CHOLESKY (which may be incorrect, because a Cholesky decomposition is not rank-revealing).

Factorization getFactorization() const

Return the type of factorization used by the solver.

LeastSquares(Factorization factorization, int dimension)

Construct a least-squares object for the given factorization and dimensionality.

One of the set* member functions must be called before any other operations can be performed on a LeastSquares object initialized this way.

LeastSquares(LeastSquares const&)
LeastSquares(LeastSquares&&)
LeastSquares &operator=(LeastSquares const&)
LeastSquares &operator=(LeastSquares&&)
~LeastSquares()

Public Static Functions

template<typename T1, typename T2, int C1, int C2>
static LeastSquares fromDesignMatrix(ndarray::Array<T1, 2, C1> const &design, ndarray::Array<T2, 1, C2> const &data, Factorization factorization = NORMAL_EIGENSYSTEM)

Initialize from the design matrix and data vector given as ndarrays.

template<typename D1, typename D2>
static LeastSquares fromDesignMatrix(Eigen::MatrixBase<D1> const &design, Eigen::MatrixBase<D2> const &data, Factorization factorization = NORMAL_EIGENSYSTEM)

Initialize from the design matrix and data vector given as Eigen objects.

template<typename T1, typename T2, int C1, int C2>
static LeastSquares fromNormalEquations(ndarray::Array<T1, 2, C1> const &fisher, ndarray::Array<T2, 1, C2> const &rhs, Factorization factorization = NORMAL_EIGENSYSTEM)

Initialize from the terms in the normal equations, given as ndarrays.

template<typename D1, typename D2>
static LeastSquares fromNormalEquations(Eigen::MatrixBase<D1> const &fisher, Eigen::MatrixBase<D2> const &rhs, Factorization factorization = NORMAL_EIGENSYSTEM)

Initialize from the terms in the normal equations, given as Eigen objects.

Private Functions

Eigen::MatrixXd &_getDesignMatrix()
Eigen::VectorXd &_getDataVector()
Eigen::MatrixXd &_getFisherMatrix()
Eigen::VectorXd &_getRhsVector()
void _factor(bool haveNormalEquations)

Private Members

std::shared_ptr<Impl> _impl