\documentclass[a4paper,10pt]{article}

\title{Numerical Jacobian evaluation}
\author{Antonio Fabio Di Narzo}

\begin{document}
\maketitle
Consider the map $F: \Re^p \mapsto \Re^p$:
\[
F(\mathbf{x}) = \left( \begin{array}{c}
f_1(\mathbf{x}) \\
f_2(\mathbf{x}) \\
\vdots \\
f_p(\mathbf{x})
\end{array} \right)
\]
with $\mathbf{x} \in \Re^p$. Define $D_j F$ as the column vector:
\[
D_j F = \left( \begin{array}{c}
D_j f_1 \\
D_j f_2 \\
\vdots \\
D_j f_p
\end{array} \right)
\]
with $D_j f_i$ indicating the derivative of $f_i$ with respect to the $j$-th component of $\mathbf{x}$.
We want to evaluate the Jacobian:
\[
JF = \left[ \begin{array}{cccc} D_1F & D_2F & \ldots & D_pF \end{array} \right] =
\left( \begin{array}{cccc}
D_1 f_1 & D_2 f_1 & \ldots & D_p f_1 \\
D_1 f_2 & D_2 f_2 & \ldots & D_p f_2 \\
\vdots & \vdots & & \vdots \\
D_1 f_p & D_2 f_p & \ldots & D_p f_p
\end{array} \right)
\]
The computation of the matrix $JF$ reduces to the computation of the $p \times p$ partial derivatives $D_j f_i$ at a generic point $\mathbf{x}$. A numerical approximation of such a derivative can be the following:
\[
D_j f_i (\mathbf{x}) \simeq \frac{1}{h} [f_i(\mathbf{x}^{(j)\star}) - f_i(\mathbf{x})]
\]
with $\mathbf{x}^{(j)\star} = (x_1, x_2, \ldots, x_j+h, \ldots, x_p)$ and $h \simeq 0$\footnote{The increment $h$ should be chosen as the smallest positive value for which $x_j+h \neq x_j$ on the computer. It can be worthwhile to define $h$ relative to each $x_j$, i.e.\ $h = x_j \times h^\star$ with $h^\star \simeq 0$ fixed once for all $j$.}.
Note that the actual computation of these $p \times p$ derivatives requires $p+1$ evaluations of the map $F$.
This can be seen by using matrix notation:
\[
D_jF(\mathbf{x}) \simeq \frac{1}{h} [F(\mathbf{x}^{(j)\star}) - F(\mathbf{x})]
\]
So, one can compute $F(\mathbf{x})$ once (it stays fixed over $j$) and then the $p$ vectors $F(\mathbf{x}^{(j)\star}),
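A minimal Python sketch of the whole procedure — one evaluation of $F$ at $\mathbf{x}$ plus $p$ evaluations at the perturbed points, for $p+1$ in total. The name \texttt{num\_jacobian} and the default $h^\star = \sqrt{\epsilon}$ are my own assumptions:

```python
import numpy as np

def num_jacobian(F, x, h_star=np.sqrt(np.finfo(float).eps)):
    """Approximate the p x p Jacobian of F at x with p + 1 evaluations of F.

    F : callable mapping a length-p array to a length-p array.
    The j-th column is (F(x^{(j)*}) - F(x)) / h, with the step h taken
    relative to x[j] (plain h_star fallback when x[j] == 0).
    """
    x = np.asarray(x, dtype=float)
    p = x.size
    Fx = F(x)                    # computed once; stays fixed over j
    J = np.empty((p, p))
    for j in range(p):
        h = x[j] * h_star if x[j] != 0.0 else h_star
        x_star = x.copy()
        x_star[j] += h           # perturb only the j-th component
        J[:, j] = (F(x_star) - Fx) / h
    return J

# Example: F(x, y) = (x*y, x + y^2); the exact Jacobian at (2, 3) is
# [[3, 2], [1, 6]]
J = num_jacobian(lambda v: np.array([v[0]*v[1], v[0] + v[1]**2]),
                 np.array([2.0, 3.0]))
```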