The user can tune nearly all aspects of the iteration at allocation time. For the gsl_multifit_nlinear interface, the user may modify the gsl_multifit_nlinear_parameters structure, which is defined as follows:
typedef struct
{
  const gsl_multifit_nlinear_trs *trs;        /* trust region subproblem method */
  const gsl_multifit_nlinear_scale *scale;    /* scaling method */
  const gsl_multifit_nlinear_solver *solver;  /* solver method */
  gsl_multifit_nlinear_fdtype fdtype;         /* finite difference method */
  double factor_up;                           /* factor for increasing trust radius */
  double factor_down;                         /* factor for decreasing trust radius */
  double avmax;                               /* max allowed |a|/|v| */
  double h_df;                                /* step size for finite difference Jacobian */
  double h_fvv;                               /* step size for finite difference fvv */
} gsl_multifit_nlinear_parameters;
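As a minimal sketch of how these fields are tuned (the helper name alloc_tuned_workspace is illustrative, not part of the library), the usual pattern is to start from the defaults returned by gsl_multifit_nlinear_default_parameters and override individual fields before allocating the workspace:

#include <gsl/gsl_multifit_nlinear.h>

/* sketch: allocate a multifit workspace with tuned parameters;
   n = number of residuals, p = number of model parameters */
gsl_multifit_nlinear_workspace *
alloc_tuned_workspace(const size_t n, const size_t p)
{
  /* start from the library defaults, then override selected fields */
  gsl_multifit_nlinear_parameters params =
    gsl_multifit_nlinear_default_parameters();

  params.trs = gsl_multifit_nlinear_trs_lmaccel; /* LM + geodesic acceleration */
  params.factor_up = 2.0;                        /* grow trust region more slowly */

  return gsl_multifit_nlinear_alloc(gsl_multifit_nlinear_trust,
                                    &params, n, p);
}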
For the gsl_multilarge_nlinear interface, the user may modify the gsl_multilarge_nlinear_parameters structure, which is defined as follows:
typedef struct
{
  const gsl_multilarge_nlinear_trs *trs;        /* trust region subproblem method */
  const gsl_multilarge_nlinear_scale *scale;    /* scaling method */
  const gsl_multilarge_nlinear_solver *solver;  /* solver method */
  gsl_multilarge_nlinear_fdtype fdtype;         /* finite difference method */
  double factor_up;                             /* factor for increasing trust radius */
  double factor_down;                           /* factor for decreasing trust radius */
  double avmax;                                 /* max allowed |a|/|v| */
  double h_df;                                  /* step size for finite difference Jacobian */
  double h_fvv;                                 /* step size for finite difference fvv */
  size_t max_iter;                              /* maximum iterations for trs method */
  double tol;                                   /* tolerance for solving trs */
} gsl_multilarge_nlinear_parameters;
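A corresponding sketch for the large interface, with illustrative (not recommended) values for max_iter and tol and a hypothetical helper name:

#include <gsl/gsl_multilarge_nlinear.h>

/* sketch: configure the Steihaug-Toint solver for a large system */
gsl_multilarge_nlinear_workspace *
alloc_large_workspace(const size_t n, const size_t p)
{
  gsl_multilarge_nlinear_parameters params =
    gsl_multilarge_nlinear_default_parameters();

  params.trs = gsl_multilarge_nlinear_trs_cgst; /* Steihaug-Toint CG */
  params.max_iter = 50;   /* illustrative cap on inner trs iterations */
  params.tol = 1.0e-6;    /* illustrative tolerance for solving the trs */

  return gsl_multilarge_nlinear_alloc(gsl_multilarge_nlinear_trust,
                                      &params, n, p);
}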
Each of these parameters is discussed in further detail below.
The trs parameter determines the method used to solve the trust region subproblem, and may be selected from the following choices (the gsl_multilarge_nlinear interface provides identically named methods with the gsl_multilarge_nlinear_trs prefix):

gsl_multifit_nlinear_trs_lm
This selects the Levenberg-Marquardt algorithm.

gsl_multifit_nlinear_trs_lmaccel
This selects the Levenberg-Marquardt algorithm with geodesic acceleration.

gsl_multifit_nlinear_trs_dogleg
This selects the dogleg algorithm.

gsl_multifit_nlinear_trs_ddogleg
This selects the double dogleg algorithm.

gsl_multifit_nlinear_trs_subspace2D
This selects the 2D subspace algorithm.

gsl_multilarge_nlinear_trs_cgst
This selects the Steihaug-Toint conjugate gradient algorithm. This method is available only for large systems.
The scale parameter determines the diagonal scaling matrix D and may be selected from the following choices:

gsl_multifit_nlinear_scale_more
This damping strategy was suggested by Moré, and corresponds to D^T D = max(diag(J^T J)), in other words the maximum elements of diag(J^T J) encountered thus far in the iteration. This choice of D makes the problem scale-invariant, so that if the model parameters x_i are each scaled by an arbitrary constant, \tilde{x}_i = a_i x_i, the sequence of iterates produced by the algorithm is unchanged. This method can work very well in cases where the model parameters have widely different scales (e.g., if some parameters are measured in nanometers while others are measured in kelvin). This strategy has proven effective on a large class of problems and so it is the library default, but it may not be the best choice for all problems.

gsl_multifit_nlinear_scale_levenberg
This damping strategy was originally suggested by Levenberg, and corresponds to D^T D = I. This method has also proven effective on a large class of problems, but is not scale-invariant. However, some authors (e.g. Transtrum and Sethna 2012) argue that this choice is better for problems which are susceptible to parameter evaporation (i.e., parameters going to infinity).

gsl_multifit_nlinear_scale_marquardt
This damping strategy was suggested by Marquardt, and corresponds to D^T D = diag(J^T J). This method is scale-invariant, but it is generally considered inferior to both the Levenberg and Moré strategies, though it may work well on certain classes of problems.
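For instance, a sketch of opting into Levenberg damping for a problem prone to parameter evaporation (the helper name levenberg_params is hypothetical):

#include <gsl/gsl_multifit_nlinear.h>

/* sketch: replace the default Moré scaling with Levenberg damping */
static gsl_multifit_nlinear_parameters
levenberg_params(void)
{
  gsl_multifit_nlinear_parameters params =
    gsl_multifit_nlinear_default_parameters();
  params.scale = gsl_multifit_nlinear_scale_levenberg; /* D^T D = I */
  return params;
}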
Solving the trust region subproblem on each iteration almost always requires the solution of the following linear least squares system
[J; \sqrt{\mu} D] \delta = - [f; 0]
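In matrix form, this stacked system and the normal equations it reduces to (the form solved directly by the Cholesky method below) are:

\begin{equation*}
\begin{pmatrix} J \\ \sqrt{\mu}\, D \end{pmatrix} \delta
  = - \begin{pmatrix} f \\ 0 \end{pmatrix}
\quad\Longleftrightarrow\quad
\left( J^T J + \mu D^T D \right) \delta = - J^T f
\end{equation*}

The equivalence follows by forming A^T A \delta = -A^T b with A = [J; \sqrt{\mu} D] and b = [f; 0].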
The solver parameter determines how the system is solved and can be selected from the following choices:
gsl_multifit_nlinear_solver_qr
This method solves the system using a rank revealing QR decomposition of the Jacobian J. This method will produce reliable solutions in cases where the Jacobian is rank deficient or near-singular, but requires about twice as many operations as the Cholesky method discussed below.

gsl_multifit_nlinear_solver_cholesky
This method solves the alternate normal equations problem

( J^T J + \mu D^T D ) \delta = -J^T f

using a Cholesky decomposition of the matrix J^T J + \mu D^T D. This method is faster than the QR approach, however it is susceptible to numerical instabilities if the Jacobian matrix is rank deficient or near-singular. In these cases, an attempt is made to reduce the condition number of the matrix using Jacobi preconditioning, but for highly ill-conditioned problems the QR approach is preferable. If the Jacobian matrix is known to be well conditioned, this method is accurate and faster than the QR approach.

gsl_multifit_nlinear_solver_svd
This method solves the system using a singular value decomposition of the Jacobian J. This method will produce the most reliable solutions for ill-conditioned Jacobians, but is also the slowest solver method.
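A sketch of trading speed for reliability when the Jacobian may be ill-conditioned (the helper name robust_solver_params is hypothetical):

#include <gsl/gsl_multifit_nlinear.h>

/* sketch: select the SVD solver for a possibly ill-conditioned Jacobian */
static gsl_multifit_nlinear_parameters
robust_solver_params(void)
{
  gsl_multifit_nlinear_parameters params =
    gsl_multifit_nlinear_default_parameters();
  params.solver = gsl_multifit_nlinear_solver_svd; /* slowest, most reliable */
  return params;
}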
The fdtype parameter specifies whether to use forward or centered differences when approximating the Jacobian. This is only used when an analytic Jacobian is not provided to the solver. This parameter may be set to one of the following choices.

GSL_MULTIFIT_NLINEAR_FWDIFF
This specifies a forward finite difference to approximate the Jacobian matrix. The Jacobian matrix will be calculated as
J_ij = 1 / \Delta_j ( f_i(x + \Delta_j e_j) - f_i(x) )
where \Delta_j = h |x_j| and e_j is the standard
jth Cartesian unit basis vector so that
x + \Delta_j e_j represents a small (forward) perturbation of
the jth parameter by an amount \Delta_j. The perturbation
\Delta_j is proportional to the current value |x_j| which
helps to calculate an accurate Jacobian when the various parameters have
different scale sizes. The value of h is specified by the h_df
parameter. The accuracy of this method is O(h), and evaluating this
matrix requires an additional p function evaluations.
GSL_MULTIFIT_NLINEAR_CTRDIFF
This specifies a centered finite difference to approximate the Jacobian matrix. The Jacobian matrix will be calculated as
J_ij = 1 / \Delta_j ( f_i(x + 1/2 \Delta_j e_j) - f_i(x - 1/2 \Delta_j e_j) )
See above for a description of \Delta_j. The accuracy of this method is O(h^2), but evaluating this matrix requires an additional 2p function evaluations.
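To make the accuracy difference concrete, here is a small self-contained sketch (independent of the library internals, which may differ) that approximates the derivative of a one-parameter model with both schemes and compares the errors:

#include <stdio.h>
#include <math.h>

/* one-parameter model f(x) = sin(x), with exact derivative cos(x) */
static double f(double x) { return sin(x); }

int main(void)
{
  double x = 1.0;
  double h = 1e-4;              /* deliberately larger than the default h_df
                                   so the truncation orders are visible */
  double delta = h * fabs(x);   /* step scaled by |x_j|, as in the text */

  /* forward difference: error is O(h), costs p extra evaluations */
  double fwd = (f(x + delta) - f(x)) / delta;

  /* centered difference with half steps: error is O(h^2),
     costs 2p extra evaluations */
  double ctr = (f(x + 0.5 * delta) - f(x - 0.5 * delta)) / delta;

  printf("forward  error: %.3e\n", fabs(fwd - cos(x)));
  printf("centered error: %.3e\n", fabs(ctr - cos(x)));
  return 0;
}

With h = 1e-4, the forward error comes out on the order of 1e-5 while the centered error is on the order of 1e-10, consistent with the O(h) and O(h^2) accuracy claims above.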
factor_up
When a step is accepted, the trust region radius will be increased by this factor. The default value is 3.

factor_down
When a step is rejected, the trust region radius will be decreased by this factor. The default value is 2.
avmax
When using geodesic acceleration to solve a nonlinear least squares problem, an important parameter to monitor is the ratio of the acceleration term to the velocity term,

|a| / |v|

If this ratio is small, the acceleration correction contributes very little to the step. This could be because the problem is not “nonlinear” enough to benefit from the acceleration. If the ratio is large (> 1), the acceleration is larger than the velocity, which should not happen, since the step represents a truncated series and the second order term a must be smaller than the first order term v to guarantee convergence. Therefore, any step with a ratio larger than the parameter avmax is rejected. avmax is set to 0.75 by default. For problems which have difficulty converging, this threshold can be lowered.
h_df
This parameter specifies the step size for approximating the Jacobian matrix with finite differences. It is set to \sqrt{\epsilon} by default, where \epsilon is GSL_DBL_EPSILON.
h_fvv
When using geodesic acceleration, the user may either supply a function to calculate f_{vv}(x) or let the library estimate this second directional derivative using a finite difference method. When using finite differences, the library must calculate f(x + h v), where h represents a small step in the velocity direction. The parameter h_fvv defines this step size and is set to 0.02 by default.
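Putting the geodesic acceleration knobs together, a sketch with illustrative values (the helper name accel_params is hypothetical, and the numbers are not library recommendations):

#include <gsl/gsl_multifit_nlinear.h>

/* sketch: tune the geodesic acceleration parameters */
static gsl_multifit_nlinear_parameters
accel_params(void)
{
  gsl_multifit_nlinear_parameters params =
    gsl_multifit_nlinear_default_parameters();

  params.trs   = gsl_multifit_nlinear_trs_lmaccel;
  params.avmax = 0.5;   /* reject steps with |a|/|v| above 0.5 (default 0.75) */
  params.h_fvv = 0.01;  /* smaller fvv finite difference step (default 0.02);
                           used only when no analytic fvv is supplied */
  return params;
}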