From a first guess $x_0$ of the parameters, a sequence $(x_k)$ is generated and is intended to converge to a local minimum of the objective function $f$. At each iteration, one computes
\[
x_{k+1} = x_k + \alpha_k \, d_k ,
\]
where $d_k$ is a certain descent direction and $\alpha_k$ is a real coefficient chosen such that $f(x_k + \alpha_k d_k)$ is approximately minimum. The direction $d_k$ is ideally the solution of the Newton equation
\[
\nabla^2 f(x_k)\, d_k = -\nabla f(x_k),
\]
which can also be rewritten as
\[
\Bigl( J_k^T J_k + \sum_{i} r_i(x_k)\, \nabla^2 r_i(x_k) \Bigr)\, d_k = - J_k^T r(x_k),
\]
where $r$ denotes the vector of residuals and $J_k$ its Jacobian at $x_k$.
Neglecting the second derivatives matrix $\sum_{i} r_i(x_k)\, \nabla^2 r_i(x_k)$, we obtain the ``normal equations''
\[
J_k^T J_k\, d_k = - J_k^T r(x_k)
\]
and the Gauss--Newton direction
\[
d_k = - \bigl( J_k^T J_k \bigr)^{-1} J_k^T r(x_k).
\]
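For illustration only, a minimal Gauss--Newton iteration with a crude step--halving line search can be sketched as follows. This is a sketch of the classical method described above, not the implementation documented here; the user--supplied functions \texttt{residuals} and \texttt{jacobian} are assumptions made for the example.
\begin{verbatim}
import numpy as np

def gauss_newton(residuals, jacobian, x0, max_iter=50, tol=1e-10):
    # Plain Gauss-Newton sketch.
    # residuals(x) -> r (length m), jacobian(x) -> J (m x p).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residuals(x)
        J = jacobian(x)
        # Gauss-Newton direction: least-squares solution of J d = -r,
        # i.e. the normal equations J^T J d = -J^T r when J has full rank.
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)
        if np.linalg.norm(d) <= tol * (1.0 + np.linalg.norm(x)):
            break
        # Crude line search: halve alpha until the objective decreases.
        f_old = 0.5 * float(r @ r)
        alpha = 1.0
        while alpha > 1e-10:
            r_new = residuals(x + alpha * d)
            if 0.5 * float(r_new @ r_new) < f_old:
                break
            alpha *= 0.5
        x = x + alpha * d
    return x
\end{verbatim}
Solving the linearized system in the least-squares sense (rather than forming $J^T J$ explicitly) keeps the sketch usable even when the Jacobian is mildly ill conditioned.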
This so--called Gauss--Newton method is intended for problems where $\|r(x)\|$ is small. If the Jacobian is singular or nearly singular, or if $\|r(x)\|$ is very large (the so--called large residuals problem), the Gauss--Newton equation is not a good approximation of the Newton equation and convergence is not guaranteed.
The algorithm implemented here is a modification of the Gauss--Newton method that allows convergence even for rank deficient Jacobians or for large residuals. The Gauss--Newton direction is computed in $V_1$, the invariant subspace associated with the non--null eigenvalues of $J_k^T J_k$. If the decrease of the objective function at the last iteration is considered too small, a correction based on the second derivatives is taken in $V_2$, the orthogonal complement of $V_1$. The Hessian matrix is estimated using finite differences of the gradient.
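The finite--difference estimation of the Hessian from the gradient can be sketched as follows. This is a minimal illustration assuming forward differences with a fixed step \texttt{h}; the actual implementation may choose the step differently.
\begin{verbatim}
import numpy as np

def hessian_by_gradient_differences(grad, x, h=1e-6):
    # Forward differences of the gradient: column j of the Hessian is
    # approximated by (grad(x + h*e_j) - grad(x)) / h, which costs about
    # p extra gradient evaluations for p parameters.
    x = np.asarray(x, dtype=float)
    p = x.size
    g0 = grad(x)
    H = np.empty((p, p))
    for j in range(p):
        e = np.zeros(p)
        e[j] = h
        H[:, j] = (grad(x + e) - g0) / h
    # Symmetrize, since the finite-difference estimate need not be symmetric.
    return 0.5 * (H + H.T)
\end{verbatim}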
This method requires the availability of the first derivatives (the gradient) and, as the number of gradient evaluations is almost $p$ at each iteration, it is recommended for problems with a small number of parameters $p$.