/** \eigenManualPage LeastSquares Solving linear least squares systems
This page describes how to solve linear least squares systems using %Eigen. An overdetermined system
of equations, say \a Ax = \a b, generally has no exact solution. In that case, it makes sense to search
for the vector \a x which is closest to being a solution, in the sense that the difference \a Ax - \a b
is as small as possible. This \a x is called the least squares solution (when the Euclidean norm is used).
The three methods discussed on this page are the SVD decomposition, the QR decomposition and the normal
equations. Of these, the SVD decomposition is generally the most accurate but also the slowest, the normal
equations are the fastest but least accurate, and the QR decomposition is in between.
\section LeastSquaresSVD Using the SVD decomposition
The \link JacobiSVD::solve() solve() \endlink method in the JacobiSVD class can be directly used to
solve linear least squares systems. It is not enough to compute only the singular values (the default
for this class); you also need the singular vectors. However, the thin SVD decomposition suffices for
computing least squares solutions:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSVDSolve.cpp </td>
  <td>\verbinclude TutorialLinAlgSVDSolve.out </td>
</tr>
</table>
This example is taken from the page \link TutorialLinearAlgebra Linear algebra and decompositions \endlink.
\section LeastSquaresQR Using the QR decomposition
The solve() method in the QR decomposition classes also computes the least squares solution. There are
three QR decomposition classes: HouseholderQR (no pivoting, hence fast but unstable for rank-deficient
matrices), ColPivHouseholderQR (column pivoting, thus a bit slower but more accurate) and
FullPivHouseholderQR (full pivoting, so the slowest and most stable). Here is an example with column pivoting:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include LeastSquaresQR.cpp </td>
  <td>\verbinclude LeastSquaresQR.out </td>
</tr>
</table>
\section LeastSquaresNormalEquations Using normal equations
Finding the least squares solution of \a Ax = \a b is equivalent to solving the normal equations
<i>A</i><sup>T</sup><i>Ax</i> = <i>A</i><sup>T</sup><i>b</i>. This leads to the following code:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include LeastSquaresNormalEquations.cpp </td>
  <td>\verbinclude LeastSquaresNormalEquations.out </td>
</tr>
</table>
If the matrix \a A is ill-conditioned, then this is not a good method, because the condition number
of <i>A</i><sup>T</sup><i>A</i> is the square of the condition number of \a A. This means that you
lose roughly twice as many digits of accuracy using the normal equations as with the other methods.