Class lsst::afw::math::LeastSquares

class LeastSquares

Solver for linear least-squares problems.

Linear least-squares problems are defined as finding the vector \(x\) that minimizes \(\left|A x - b\right|_2\), with the number of rows of \(A\) generally greater than the number of columns. We call \(A\) the design matrix, \(b\) the data vector, and \(x\) the solution vector. When the rank of \(A\) is equal to the number of columns, we can obtain the solution using the normal equations:

\[ A^T A x = A^T b \]
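For instance, a simple straight-line fit can be solved this way. The sketch below uses the Eigen library directly (rather than this class) to illustrate the normal-equations approach; the matrix sizes, the toy data, and the choice of a Cholesky (LDLT) factorization are illustrative assumptions, not requirements:

```cpp
#include <iostream>
#include "Eigen/Dense"

int main() {
    // Overdetermined system: 6 equations, 2 unknowns (line fit y = x0 + x1*t).
    Eigen::MatrixXd A(6, 2);
    Eigen::VectorXd b(6);
    for (int i = 0; i < 6; ++i) {
        A(i, 0) = 1.0;        // constant term
        A(i, 1) = double(i);  // slope term
        b(i) = 0.5 + 2.0 * i + ((i % 2) ? 0.01 : -0.01);  // noisy line
    }
    // Form and solve the normal equations A^T A x = A^T b via Cholesky (LDLT).
    Eigen::VectorXd x = (A.transpose() * A).ldlt().solve(A.transpose() * b);
    std::cout << "x = " << x.transpose() << std::endl;  // close to (0.5, 2.0)
}
```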
If \(A\) is not full rank, the problem is underconstrained, and we usually wish to solve the minimum-norm least-squares problem, which also minimizes \(|x|_2\). This can be done by computing a pseudo-inverse of \(A\) using an SVD or a complete orthogonal factorization of \(A\), or by performing an eigensystem decomposition of \(A^T A\).
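As an illustration (again using Eigen directly rather than this class), a complete orthogonal decomposition returns the minimum-norm solution for a rank-deficient design matrix; the matrix below is an assumed toy example:

```cpp
#include <iostream>
#include "Eigen/Dense"

int main() {
    // Rank-deficient design matrix: the third column equals the sum of the
    // first two, so rank(A) == 2 even though A has three columns.
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(5, 3);
    A.col(2) = A.col(0) + A.col(1);
    Eigen::VectorXd b = Eigen::VectorXd::Random(5);

    // A complete orthogonal decomposition solves the minimum-norm problem;
    // an SVD with a rank threshold (e.g. Eigen::BDCSVD) would do the same.
    Eigen::CompleteOrthogonalDecomposition<Eigen::MatrixXd> cod(A);
    Eigen::VectorXd x = cod.solve(b);  // minimizes |x|_2 among all minimizers
    std::cout << "rank = " << cod.rank() << ", x = " << x.transpose() << std::endl;
}
```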

This class can be constructed from the design matrix and data vector, or from the two terms in the normal equations (below, we call the matrix \(A^T A\) the Fisher matrix, and the vector \(A^T b\) simply the “right-hand side” (RHS) vector). If initialized with the design matrix and data vector, the normal equations can still be used to solve the problem. The solution via the normal equations is more susceptible to round-off error, but it is also faster, and if the normal-equation terms can be computed directly it can require significantly less memory. The Fisher matrix is symmetric, and it should be exactly symmetric when provided as input, because which triangle will be used is an implementation detail that is subject to change and may depend on the storage order.
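A minimal sketch of the two construction paths follows. It assumes the fromDesignMatrix and fromNormalEquations factory methods, the NORMAL_EIGENSYSTEM enumerator, and the getSolution accessor as declared in this class's header; see the member documentation for the exact signatures. Note how the Fisher matrix is explicitly symmetrized before being passed in:

```cpp
#include "Eigen/Core"
#include "lsst/afw/math/LeastSquares.h"

using lsst::afw::math::LeastSquares;

void solveBothWays(Eigen::MatrixXd const& design, Eigen::VectorXd const& data) {
    // Path 1: construct from the design matrix A and data vector b.
    LeastSquares lstsq1 = LeastSquares::fromDesignMatrix(
        design, data, LeastSquares::NORMAL_EIGENSYSTEM);

    // Path 2: construct from the normal-equation terms directly.
    Eigen::MatrixXd fisher = design.transpose() * design;  // A^T A
    Eigen::VectorXd rhs = design.transpose() * data;       // A^T b
    // Averaging with the transpose makes the Fisher matrix exactly symmetric,
    // so the result does not depend on which triangle the solver reads.
    Eigen::MatrixXd fisherSym = 0.5 * (fisher + fisher.transpose());
    LeastSquares lstsq2 = LeastSquares::fromNormalEquations(fisherSym, rhs);

    auto x = lstsq1.getSolution();  // solution vector, always double precision
}
```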

The solver always operates in double precision and returns all results in double precision, but it can be initialized from single-precision inputs. Even when the data are single precision, however, it is usually a bad idea to construct the normal equations in single precision: forming \(A^T A\) roughly squares the condition number, so the products should be accumulated in double precision.
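If you form the normal-equation terms yourself from single-precision data, the safe pattern is to promote to double first, as in this Eigen sketch (the function name and shapes are illustrative):

```cpp
#include "Eigen/Core"

void buildNormalEquations(Eigen::MatrixXf const& designF, Eigen::VectorXf const& dataF,
                          Eigen::MatrixXd& fisher, Eigen::VectorXd& rhs) {
    // Promote to double *before* forming the products: A^T A squares the
    // condition number, which in single precision can consume nearly all of
    // the ~7 significant digits available in the accumulation.
    Eigen::MatrixXd design = designF.cast<double>();
    Eigen::VectorXd data = dataF.cast<double>();
    fisher = design.transpose() * design;  // A^T A in double precision
    rhs = design.transpose() * data;       // A^T b in double precision
}
```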