/** \eigenManualPage TutorialLinearAlgebra Linear algebra and decompositions

This page explains how to solve linear systems, compute various decompositions such as LU,
QR, %SVD, eigendecompositions... After reading this page, don't miss our
\link TopicLinearAlgebraDecompositions catalogue \endlink of dense matrix decompositions.

\section TutorialLinAlgBasicSolve Basic linear solving
\b The \b problem: You have a system of equations that you have written as a single matrix equation
\f[ Ax \: = \: b \f]
where \a A and \a b are matrices (\a b could be a vector, as a special case). You want to find a solution \a x.

\b The \b solution: You can choose between various decompositions, depending on what your matrix \a A looks like,
and depending on whether you favor speed or accuracy. However, let's start with an example that works in all cases,
and is a good compromise:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgExSolveColPivHouseholderQR.cpp </td>
  <td>\verbinclude TutorialLinAlgExSolveColPivHouseholderQR.out </td>
</tr></table>
In this example, the colPivHouseholderQr() method returns an object of class ColPivHouseholderQR. Since here the
matrix is of type Matrix3f, this line could have been replaced by:
\code
ColPivHouseholderQR<Matrix3f> dec(A);
Vector3f x = dec.solve(b);
\endcode
Here, ColPivHouseholderQR is a QR decomposition with column pivoting. It's a good compromise for this tutorial, as it
works for all matrices while being quite fast. Here is a table of some other decompositions that you can choose from,
depending on your matrix and the trade-off you want to make:
<table class="manual">
<tr>
    <th>Decomposition</th>
    <th>Method</th>
    <th>Requirements on the matrix</th>
</tr>
<tr>
    <td>PartialPivLU</td>
    <td>partialPivLu()</td>
    <td>Invertible</td>
</tr>
<tr>
    <td>FullPivLU</td>
    <td>fullPivLu()</td>
    <td>None</td>
</tr>
<tr>
    <td>HouseholderQR</td>
    <td>householderQr()</td>
    <td>None</td>
</tr>
<tr>
    <td>ColPivHouseholderQR</td>
    <td>colPivHouseholderQr()</td>
    <td>None</td>
</tr>
<tr>
    <td>FullPivHouseholderQR</td>
    <td>fullPivHouseholderQr()</td>
    <td>None</td>
</tr>
<tr>
    <td>LLT</td>
    <td>llt()</td>
    <td>Positive definite</td>
</tr>
<tr>
    <td>LDLT</td>
    <td>ldlt()</td>
    <td>Positive or negative semidefinite</td>
</tr>
</table>
All of these decompositions offer a solve() method that works as in the above example.
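For instance (a minimal sketch, not taken from the example files above; it assumes a square, invertible matrix \a A and a right-hand side \a b):
\code
Matrix3f A = Matrix3f::Random();           // random 3x3 matrix, almost surely invertible
Vector3f b = Vector3f::Random();
Vector3f x1 = A.partialPivLu().solve(b);   // LU with partial pivoting (requires an invertible A)
Vector3f x2 = A.householderQr().solve(b);  // Householder QR (no requirement on A)
\endcode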
For example, if your matrix is positive definite, the above table says that a very good
choice is the LDLT decomposition. Here's an example, which also demonstrates that a general
matrix (not just a vector) can be used as the right-hand side:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgExSolveLDLT.cpp </td>
  <td>\verbinclude TutorialLinAlgExSolveLDLT.out </td>
</tr></table>
For a much more complete comparison of all the decompositions supported by Eigen (there are many more than those
listed above), see our special page on \ref TopicLinearAlgebraDecompositions "this topic".
\section TutorialLinAlgSolutionExists Checking if a solution really exists

Only you know what error margin you want to allow for a solution to be considered valid.
So Eigen lets you do this computation yourself, if you want to, as in this example:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgExComputeSolveError.cpp </td>
  <td>\verbinclude TutorialLinAlgExComputeSolveError.out </td>
</tr></table>
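The idea is simply to compare the norm of the residual A*x - b with the norm of b. A minimal inline sketch (with hypothetical sizes, not the exact code of the included example):
\code
MatrixXd A = MatrixXd::Random(100,100);
VectorXd b = VectorXd::Random(100);
VectorXd x = A.fullPivLu().solve(b);
double relative_error = (A*x - b).norm() / b.norm();  // norm() is the L2 norm
\endcode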
\section TutorialLinAlgEigensolving Computing eigenvalues and eigenvectors

You need an eigendecomposition here; see the available decompositions on \ref TopicLinearAlgebraDecompositions "this page".
Make sure to check whether your matrix is self-adjoint, as is often the case in these problems. Here's an example using
SelfAdjointEigenSolver; it could easily be adapted to general matrices using EigenSolver or ComplexEigenSolver.

The computation of eigenvalues and eigenvectors does not necessarily converge, but such failure to converge is
very rare. The call to info() checks for this possibility.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSelfAdjointEigenSolver.cpp </td>
  <td>\verbinclude TutorialLinAlgSelfAdjointEigenSolver.out </td>
</tr></table>
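The pattern looks roughly like this (a minimal sketch with a made-up 2x2 matrix, not the exact code of the included example):
\code
Matrix2f A;
A << 1, 2,
     2, 3;                                   // self-adjoint (real symmetric) matrix
SelfAdjointEigenSolver<Matrix2f> es(A);      // computes eigenvalues and eigenvectors
if (es.info() != Success) abort();           // the iteration failed to converge (very rare)
cout << "Eigenvalues:\n"  << es.eigenvalues()  << endl;
cout << "Eigenvectors:\n" << es.eigenvectors() << endl;
\endcode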
\section TutorialLinAlgInverse Computing inverse and determinant

First of all, make sure that you really want this. While inverse and determinant are fundamental mathematical concepts,
in \em numerical linear algebra they are not as popular as in pure mathematics. Inverse computations are often
advantageously replaced by solve() operations, and the determinant is often \em not a good way of checking if a matrix
is invertible.

However, for \em very \em small matrices, the above is not true, and inverse and determinant can be very useful.
While certain decompositions, such as PartialPivLU and FullPivLU, offer inverse() and determinant() methods, you can also
call inverse() and determinant() directly on a matrix. If your matrix is of a very small fixed size (at most 4x4), this
allows Eigen to avoid performing an LU decomposition, and instead use formulas that are more efficient on such small matrices.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgInverseDeterminant.cpp </td>
  <td>\verbinclude TutorialLinAlgInverseDeterminant.out </td>
</tr></table>
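As a side note, when the end goal is only to solve a system, calling solve() is generally preferable to forming the inverse explicitly. A minimal sketch of the two alternatives (hypothetical matrices, not the code of the included example):
\code
Matrix4f A = Matrix4f::Random();         // random 4x4 matrix, almost surely invertible
Vector4f b = Vector4f::Random();
Vector4f x = A.partialPivLu().solve(b);  // usually the preferred way to solve Ax = b
Vector4f y = A.inverse() * b;            // acceptable for tiny fixed-size matrices, less accurate in general
float    d = A.determinant();
\endcode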
\section TutorialLinAlgLeastsquares Least squares solving

The best way to do least squares solving is with an SVD decomposition. Eigen provides one as the JacobiSVD class,
and its solve() method does least-squares solving:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSVDSolve.cpp </td>
  <td>\verbinclude TutorialLinAlgSVDSolve.out </td>
</tr></table>
Another way, potentially faster but less reliable, is to use an LDLT decomposition
of the normal matrix. In any case, just read any reference text on least squares, and it will be very easy for you
to implement any linear least squares computation on top of Eigen.
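For reference, here is a minimal sketch of the normal-equations approach mentioned above (hypothetical sizes; forming A^T*A squares the condition number, which is why this route is less reliable than the SVD one):
\code
MatrixXf A = MatrixXf::Random(30, 3);   // overdetermined system: 30 equations, 3 unknowns
VectorXf b = VectorXf::Random(30);
VectorXf x = (A.transpose() * A).ldlt().solve(A.transpose() * b);
\endcode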
\section TutorialLinAlgSeparateComputation Separating the computation from the construction

In the above examples, the decomposition was computed at the same time that the decomposition object was constructed.
There are, however, situations where you might want to separate these two things, for example if you don't know,
at the time of the construction, the matrix that you will want to decompose; or if you want to reuse an existing
decomposition object.
What makes this possible is that:
\li all decompositions have a default constructor,
\li all decompositions have a compute(matrix) method that does the computation, and that may be called again
    on an already-computed decomposition, reinitializing it.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgComputeTwice.cpp </td>
  <td>\verbinclude TutorialLinAlgComputeTwice.out </td>
</tr></table>
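In other words, the pattern is roughly the following (a minimal sketch with made-up matrices, not the exact code of the included example):
\code
MatrixXf A = MatrixXf::Random(4,4);
MatrixXf B = MatrixXf::Random(4,4);
VectorXf b = VectorXf::Random(4);
ColPivHouseholderQR<MatrixXf> dec;  // default constructor: nothing is computed yet
dec.compute(A);
VectorXf x1 = dec.solve(b);
dec.compute(B);                     // reuse the same object to decompose another matrix
VectorXf x2 = dec.solve(b);
\endcode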
Finally, you can tell the decomposition constructor to preallocate storage for decomposing matrices of a given size,
so that when you subsequently decompose such matrices, no dynamic memory allocation is performed (of course, if you
are using fixed-size matrices, no dynamic memory allocation happens at all). This is done by just
passing the size to the decomposition constructor, as in this example:
\code
HouseholderQR<MatrixXf> qr(50,50);
MatrixXf A = MatrixXf::Random(50,50);
qr.compute(A); // no dynamic memory allocation
\endcode
\section TutorialLinAlgRankRevealing Rank-revealing decompositions

Certain decompositions are rank-revealing, i.e., able to compute the rank of a matrix. These are typically
also the decompositions that behave best in the face of a non-full-rank matrix (which, in the square case, means a
singular matrix). On \ref TopicLinearAlgebraDecompositions "this table" you can see for all our decompositions
whether they are rank-revealing or not.

Rank-revealing decompositions offer at least a rank() method. They can also offer convenience methods such as isInvertible(),
and some also provide methods to compute the kernel (null-space) and image (column-space) of the matrix, as is the
case with FullPivLU:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgRankRevealing.cpp </td>
  <td>\verbinclude TutorialLinAlgRankRevealing.out </td>
</tr></table>
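The corresponding calls look roughly like this (a minimal sketch with a made-up rank-2 matrix, not the exact code of the included example):
\code
Matrix3f A;
A << 1, 2, 5,
     2, 1, 4,
     3, 0, 3;                      // third column = first column + 2 * second column, so the rank is 2
FullPivLU<Matrix3f> lu(A);
cout << "rank: "                  << lu.rank()         << endl;
cout << "invertible? "            << lu.isInvertible() << endl;
cout << "kernel (null-space):\n"  << lu.kernel()       << endl;
cout << "image (column-space):\n" << lu.image(A)       << endl;  // image() needs the original matrix
\endcode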
Of course, any rank computation depends on the choice of an arbitrary threshold, since practically no
floating-point matrix is \em exactly rank-deficient. Eigen picks a sensible default threshold, which depends
on the decomposition but is typically the diagonal size times machine epsilon. While this is the best default we
could pick, only you know what the right threshold is for your application. You can set this by calling setThreshold()
on your decomposition object before calling rank() or any other method that needs to use such a threshold.
The decomposition itself, i.e., the compute() method, is independent of the threshold. You don't need to recompute the
decomposition after you've changed the threshold.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr>
  <td>\include TutorialLinAlgSetThreshold.cpp </td>
  <td>\verbinclude TutorialLinAlgSetThreshold.out </td>
</tr></table>
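In short, the usage pattern is roughly this (a minimal sketch with a made-up nearly-singular matrix, not the exact code of the included example):
\code
Matrix2d A;
A << 2, 1,
     2, 0.9999999999;                                           // nearly singular: the rows are almost proportional
FullPivLU<Matrix2d> lu(A);
cout << "rank with default threshold: " << lu.rank() << endl;  // likely reports full rank (2)
lu.setThreshold(1e-5);                                          // no need to call compute() again
cout << "rank with threshold 1e-5: "    << lu.rank() << endl;  // likely reports rank 1
\endcode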