Ordinary weighted least squares models seek a solution vector c which minimizes the residual

\chi^2 = || y - Xc ||_W^2

where y is the n-by-1 observation vector, X is the n-by-p design matrix, c is the p-by-1 solution vector, W = diag(w_1,...,w_n) is the data weighting matrix, and ||r||_W^2 = r^T W r.
In cases where the least squares matrix X is ill-conditioned, small perturbations (i.e., noise) in the observation vector could lead to widely different solution vectors c. One way of dealing with ill-conditioned matrices is to use a “truncated SVD”, in which small singular values below some given tolerance are discarded from the solution. The truncated SVD method is available using the functions gsl_multifit_linear_tsvd and gsl_multifit_wlinear_tsvd. Another way to help solve ill-posed problems is to include a regularization term in the least squares minimization
\chi^2 = || y - Xc ||_W^2 + \lambda^2 || L c ||^2
for a suitably chosen regularization parameter \lambda and matrix L. This type of regularization is known as Tikhonov, or ridge, regression. In some applications, L is chosen as the identity matrix, giving preference to solution vectors c with smaller norms. Including this regularization term leads to the explicit “normal equations” solution
c = ( X^T W X + \lambda^2 L^T L )^{-1} X^T W y
which reduces to the ordinary least squares solution when L = 0. In practice, it is often advantageous to transform a regularized least squares system into the form
\chi^2 = || \tilde{y} - \tilde{X} \tilde{c} ||^2 + \lambda^2 || \tilde{c} ||^2
This is known as the Tikhonov “standard form” and has the normal equations solution \tilde{c} = \left( \tilde{X}^T \tilde{X} + \lambda^2 I \right)^{-1} \tilde{X}^T \tilde{y}. For an m-by-p matrix L which is full rank and has m \ge p (i.e., L is square or has more rows than columns), we can calculate the “thin” QR decomposition L = QR, and note that ||L c|| = ||R c|| since the Q factor will not change the norm. Since R is p-by-p, we can then use the transformation
\tilde{X} = W^{1/2} X R^{-1}, \tilde{y} = W^{1/2} y, \tilde{c} = R c
to achieve the standard form. For a rectangular matrix L with m < p,
a more sophisticated approach is needed (see Hansen 1998, chapter 2.3).
In practice, the normal equations solution above is not desirable due to
numerical instabilities, and so the system is solved using the
singular value decomposition of the matrix \tilde{X}.
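For example, when W = I and L = I the system is already in standard form, and the SVD-based solution for a given \lambda can be computed as in the following sketch (the helper name ridge_solve and the externally chosen lambda are illustrative, not part of the GSL API):

  #include <gsl/gsl_errno.h>
  #include <gsl/gsl_matrix.h>
  #include <gsl/gsl_vector.h>
  #include <gsl/gsl_multifit.h>

  /* sketch: ridge regression (L = I, W = I) for a fixed lambda;
     X is n-by-p, y has length n, c has length p */
  int
  ridge_solve (const gsl_matrix * X, const gsl_vector * y,
               const double lambda, gsl_vector * c)
  {
    gsl_multifit_linear_workspace * work =
      gsl_multifit_linear_alloc (X->size1, X->size2);
    double rnorm, snorm;

    gsl_multifit_linear_svd (X, work);  /* SVD of X */
    gsl_multifit_linear_solve (lambda, X, y, c, &rnorm, &snorm, work);

    gsl_multifit_linear_free (work);
    return GSL_SUCCESS;
  }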
The matrix L is often chosen as the identity matrix, or as a first
or second finite difference operator, to ensure a smoothly varying
coefficient vector c, or as a diagonal matrix to selectively damp
each model parameter differently. If L \ne I, the user must first
convert the least squares problem to standard form using gsl_multifit_linear_stdform1 or gsl_multifit_linear_stdform2, solve the system, and then backtransform the solution vector to recover the solution of the original problem (see gsl_multifit_linear_genform1 and gsl_multifit_linear_genform2).
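As an illustration, the full workflow for a general m-by-p matrix L with m \ge p might be sketched as follows. The helper name regularized_solve is illustrative, the fixed lambda would in practice come from the L-curve or GCV analysis described below, and error checking is omitted:

  #include <gsl/gsl_errno.h>
  #include <gsl/gsl_matrix.h>
  #include <gsl/gsl_vector.h>
  #include <gsl/gsl_multifit.h>

  /* sketch: minimize ||y - X c||^2 + lambda^2 ||L c||^2 for a general
     m-by-p matrix L with m >= p; L is overwritten with its QR factors */
  int
  regularized_solve (const gsl_matrix * X, const gsl_vector * y,
                     gsl_matrix * L, const double lambda, gsl_vector * c)
  {
    const size_t n = X->size1;
    const size_t p = X->size2;
    gsl_multifit_linear_workspace * work = gsl_multifit_linear_alloc (n, p);
    gsl_matrix * Xs = gsl_matrix_alloc (n, p);  /* X~ (n-by-p since m >= p) */
    gsl_vector * ys = gsl_vector_alloc (n);     /* y~ */
    gsl_vector * cs = gsl_vector_alloc (p);     /* c~ */
    gsl_vector * tau = gsl_vector_alloc (p);    /* MIN(m,p) = p when m >= p */
    gsl_matrix * M = gsl_matrix_alloc (n, p);   /* only referenced when m < p */
    double rnorm, snorm;

    gsl_multifit_linear_L_decomp (L, tau);      /* factor L */
    gsl_multifit_linear_stdform2 (L, tau, X, y, Xs, ys, M, work);
    gsl_multifit_linear_svd (Xs, work);         /* SVD of X~ */
    gsl_multifit_linear_solve (lambda, Xs, ys, cs, &rnorm, &snorm, work);
    gsl_multifit_linear_genform2 (L, tau, X, y, cs, M, c, work);

    gsl_matrix_free (M);
    gsl_vector_free (tau);
    gsl_vector_free (cs);
    gsl_vector_free (ys);
    gsl_matrix_free (Xs);
    gsl_multifit_linear_free (work);

    return GSL_SUCCESS;
  }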
In many regularization problems, care must be taken when choosing the regularization parameter \lambda. Since both the residual norm ||y - X c|| and solution norm ||L c|| are being minimized, the parameter \lambda represents a tradeoff between minimizing either the residuals or the solution vector. A common tool for visualizing the compromise between the minimization of these two quantities is known as the L-curve. The L-curve is a log-log plot of the residual norm ||y - X c|| on the horizontal axis and the solution norm ||L c|| on the vertical axis. This curve nearly always has an L-shaped appearance, with a distinct corner separating the horizontal and vertical sections of the curve. The regularization parameter corresponding to this corner is often chosen as the optimal value. GSL provides routines to calculate the L-curve for all relevant regularization parameters, as well as to locate its corner.
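A minimal sketch of selecting \lambda from the L-curve corner, assuming work already contains the SVD of the standard form matrix, might look like the following (the number of curve points, 200, is an arbitrary choice):

  #include <gsl/gsl_vector.h>
  #include <gsl/gsl_multifit.h>

  /* sketch: choose lambda at the L-curve corner; ys is the standard form
     right hand side and gsl_multifit_linear_svd has already been called */
  double
  lcurve_lambda (const gsl_vector * ys, gsl_multifit_linear_workspace * work)
  {
    const size_t ncurve = 200;  /* arbitrary number of L-curve points */
    gsl_vector * reg_param = gsl_vector_alloc (ncurve);
    gsl_vector * rho = gsl_vector_alloc (ncurve);  /* residual norms */
    gsl_vector * eta = gsl_vector_alloc (ncurve);  /* solution norms */
    size_t idx;
    double lambda;

    gsl_multifit_linear_lcurve (ys, reg_param, rho, eta, work);
    gsl_multifit_linear_lcorner (rho, eta, &idx);  /* corner index */
    lambda = gsl_vector_get (reg_param, idx);

    gsl_vector_free (reg_param);
    gsl_vector_free (rho);
    gsl_vector_free (eta);

    return lambda;
  }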
Another method of choosing the regularization parameter is known as Generalized Cross Validation (GCV). This method is based on the idea that if an arbitrary element y_i is left out of the right hand side, the resulting regularized solution should predict this element accurately. This leads to choosing the parameter \lambda which minimizes the GCV function
G(\lambda) = ||y - X c_{\lambda}||^2 / [Tr(I_n - X X_{\lambda}^I)]^2
where X_{\lambda}^I is the matrix which relates the solution c_{\lambda} to the right hand side y, i.e., c_{\lambda} = X_{\lambda}^I y. GSL provides routines to compute the GCV curve and its minimum.
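A brief sketch using the combined routine gsl_multifit_linear_gcv (documented below), again assuming work already contains the SVD of the standard form matrix:

  #include <gsl/gsl_vector.h>
  #include <gsl/gsl_multifit.h>

  /* sketch: choose lambda by minimizing the GCV curve */
  double
  gcv_lambda (const gsl_vector * ys, gsl_multifit_linear_workspace * work)
  {
    const size_t ncurve = 200;  /* arbitrary number of curve points */
    gsl_vector * reg_param = gsl_vector_alloc (ncurve);
    gsl_vector * G = gsl_vector_alloc (ncurve);
    double lambda, G_lambda;

    gsl_multifit_linear_gcv (ys, reg_param, G, &lambda, &G_lambda, work);

    gsl_vector_free (reg_param);
    gsl_vector_free (G);

    return lambda;
  }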
For most applications, the steps required to solve a regularized least squares problem are as follows:

1. Construct the least squares system (X, y, W, L).
2. Transform the system to standard form (\tilde{X}, \tilde{y}). This step can be skipped if L = I and W = I.
3. Calculate the SVD of \tilde{X}.
4. Determine an appropriate regularization parameter \lambda, using for example L-curve or GCV analysis.
5. Solve the standard form system using \lambda and the SVD of \tilde{X}.
6. Backtransform the standard form solution \tilde{c} to recover the original solution vector c.
The functions gsl_multifit_linear_stdform1 and gsl_multifit_linear_wstdform1 define a regularization matrix L = diag(l_0,l_1,...,l_{p-1}). The diagonal matrix element l_i is provided by the i-th element of the input vector L. The n-by-p least squares matrix X and vector y of length n are then converted to standard form as described above, and the parameters (\tilde{X},\tilde{y}) are stored in Xs and ys on output. Xs and ys have the same dimensions as X and y. Optional data weights may be supplied in the vector w of length n. In order to apply this transformation, L^{-1} must exist, and so none of the l_i may be zero. After the standard form system has been solved, use gsl_multifit_linear_genform1 to recover the original solution vector. It is allowed to have X = Xs and y = ys for an in-place transform. In order to perform a weighted regularized fit with L = I, the user may call gsl_multifit_linear_applyW to convert to standard form.
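A brief sketch of a regularized fit with a diagonal L, here arbitrarily damping all parameters equally with l_i = 2, might look like the following (the helper name diag_reg_solve is illustrative):

  #include <gsl/gsl_errno.h>
  #include <gsl/gsl_matrix.h>
  #include <gsl/gsl_vector.h>
  #include <gsl/gsl_multifit.h>

  /* sketch: regularize with L = diag(2,2,...,2); X is n-by-p, y length n */
  int
  diag_reg_solve (const gsl_matrix * X, const gsl_vector * y,
                  const double lambda, gsl_vector * c)
  {
    const size_t n = X->size1;
    const size_t p = X->size2;
    gsl_multifit_linear_workspace * work = gsl_multifit_linear_alloc (n, p);
    gsl_matrix * Xs = gsl_matrix_alloc (n, p);
    gsl_vector * ys = gsl_vector_alloc (n);
    gsl_vector * cs = gsl_vector_alloc (p);
    gsl_vector * L = gsl_vector_alloc (p);
    double rnorm, snorm;

    gsl_vector_set_all (L, 2.0);  /* arbitrary nonzero diagonal elements */

    gsl_multifit_linear_stdform1 (L, X, y, Xs, ys, work);
    gsl_multifit_linear_svd (Xs, work);
    gsl_multifit_linear_solve (lambda, Xs, ys, cs, &rnorm, &snorm, work);
    gsl_multifit_linear_genform1 (L, cs, c, work);

    gsl_vector_free (L);
    gsl_vector_free (cs);
    gsl_vector_free (ys);
    gsl_matrix_free (Xs);
    gsl_multifit_linear_free (work);

    return GSL_SUCCESS;
  }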
The function gsl_multifit_linear_L_decomp factors the m-by-p regularization matrix L into a form needed for the later transformation to standard form. L may have any number of rows m. If m \ge p the QR decomposition of L is computed and stored in L on output. If m < p, the QR decomposition of L^T is computed and stored in L on output. In both cases, the Householder scalars are stored in the vector tau of size MIN(m,p). These outputs are used by gsl_multifit_linear_wstdform2 to complete the transformation to standard form.
The functions gsl_multifit_linear_stdform2 and gsl_multifit_linear_wstdform2 convert the least squares system (X,y,W,L) to standard form (\tilde{X},\tilde{y}), which are stored in Xs and ys respectively. The m-by-p regularization matrix L is specified by the inputs LQR and Ltau, which are outputs from gsl_multifit_linear_L_decomp. The dimensions of the standard form parameters (\tilde{X},\tilde{y}) depend on whether m is larger or smaller than p. For m \ge p, Xs is n-by-p, ys is n-by-1, and M is not used. For m < p, Xs is (n - p + m)-by-m, ys is (n - p + m)-by-1, and M is additional n-by-p workspace, which is required to recover the original solution vector after the system has been solved (see gsl_multifit_linear_genform2). Optional data weights may be supplied in the vector w of length n, where W = diag(w).
The function gsl_multifit_linear_solve computes the regularized best-fit parameters \tilde{c} which minimize the cost function

\chi^2 = || \tilde{y} - \tilde{X} \tilde{c} ||^2 + \lambda^2 || \tilde{c} ||^2

which is in standard form. The least squares system must therefore be converted to standard form prior to calling this function. The observation vector \tilde{y} is provided in ys and the matrix of predictor variables \tilde{X} in Xs. The solution vector \tilde{c} is returned in cs, which has length min(m,p). The SVD of Xs must be computed prior to calling this function, using gsl_multifit_linear_svd. The regularization parameter \lambda is provided in lambda. The residual norm || \tilde{y} - \tilde{X} \tilde{c} || = ||y - X c||_W is returned in rnorm. The solution norm || \tilde{c} || = ||L c|| is returned in snorm.
After a regularized system has been solved with L = diag(l_0,l_1,...,l_{p-1}), the function gsl_multifit_linear_genform1 backtransforms the standard form solution vector cs to recover the solution vector of the original problem c. The diagonal matrix elements l_i are provided in the vector L. It is allowed to have c = cs for an in-place transform.
After a regularized system has been solved with a general rectangular matrix L, specified by (LQR,Ltau), the functions gsl_multifit_linear_genform2 and gsl_multifit_linear_wgenform2 backtransform the standard form solution cs to recover the solution vector of the original problem, which is stored in c, of length p. The original least squares matrix and observation vector are provided in X and y respectively. M is the matrix computed by gsl_multifit_linear_stdform2. For weighted fits, the weight vector w must also be supplied to gsl_multifit_linear_wgenform2.
For weighted least squares systems with L = I, the function gsl_multifit_linear_applyW may be used to convert the system to standard form by applying the weight matrix W = diag(w) to the least squares matrix X and observation vector y. On output, WX is equal to W^{1/2} X and Wy is equal to W^{1/2} y. It is allowed to have WX = X and Wy = y for an in-place transform.
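For example, a weighted fit with L = I might be put in standard form and solved as in this fragment (WX, Wy, c, work, and lambda are assumed to be allocated and chosen by the caller; since \tilde{c} = c when L = I, no backtransformation is needed):

  /* sketch: weighted ridge fit (L = I) */
  gsl_multifit_linear_applyW (X, w, y, WX, Wy);  /* WX = W^{1/2} X, Wy = W^{1/2} y */
  gsl_multifit_linear_svd (WX, work);
  gsl_multifit_linear_solve (lambda, WX, Wy, c, &rnorm, &snorm, work);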
The function gsl_multifit_linear_lcurve computes the L-curve for a least squares system using the right hand side vector y and the SVD of the least squares matrix X, which must be computed with gsl_multifit_linear_svd prior to calling this function. The output vectors reg_param, rho, and eta must all be the same size, and will contain the regularization parameters \lambda_i, residual norms ||y - X c_i||, and solution norms ||L c_i|| which compose the L-curve, where c_i is the regularized solution vector corresponding to \lambda_i. The user may determine the number of points on the L-curve by adjusting the size of these input arrays. The regularization parameters \lambda_i are estimated from the singular values of X, and chosen to represent the most relevant portion of the L-curve.
The function gsl_multifit_linear_lcorner attempts to locate the corner of the L-curve (||y - X c||, ||L c||) defined by the rho and eta input arrays respectively. The corner is defined as the point of maximum curvature of the L-curve in log-log scale. The rho and eta arrays can be outputs of gsl_multifit_linear_lcurve. The algorithm simply fits a circle to 3 consecutive points on the L-curve and uses the circle’s radius to determine the curvature at the middle point. Therefore, the input array sizes must be \ge 3. With more points provided for the L-curve, a better estimate of the curvature can be obtained. The array index corresponding to maximum curvature (i.e., the corner) is returned in idx. If the input arrays contain collinear points, this function could fail and return GSL_EINVAL.
The function gsl_multifit_linear_lcorner2 attempts to locate the corner of an alternate L-curve (\lambda^2, ||L c||^2) studied by Rezghi and Hosseini, 2009. This alternate L-curve can provide better estimates of the regularization parameter for smooth solution vectors. The regularization parameters \lambda and solution norms ||L c|| are provided in the reg_param and eta input arrays respectively. The corner is defined as the point of maximum curvature of this alternate L-curve in linear scale. The reg_param and eta arrays can be outputs of gsl_multifit_linear_lcurve. The algorithm simply fits a circle to 3 consecutive points on the L-curve and uses the circle’s radius to determine the curvature at the middle point. Therefore, the input array sizes must be \ge 3. With more points provided for the L-curve, a better estimate of the curvature can be obtained. The array index corresponding to maximum curvature (i.e., the corner) is returned in idx. If the input arrays contain collinear points, this function could fail and return GSL_EINVAL.
The function gsl_multifit_linear_gcv_init performs some initialization in preparation for computing the GCV curve and its minimum. The right hand side vector is provided in y. On output, reg_param is set to a vector of regularization parameters in decreasing order and may be of any size. The vector UTy of size p is set to U^T y. The parameter delta0 is needed for subsequent steps of the GCV calculation.
The function gsl_multifit_linear_gcv_curve calculates the GCV curve G(\lambda) and stores it in G on output, which must be the same size as reg_param. The inputs reg_param, UTy and delta0 are computed by gsl_multifit_linear_gcv_init.
The function gsl_multifit_linear_gcv_min computes the value of the regularization parameter which minimizes the GCV curve G(\lambda) and stores it in lambda. The input G is calculated by gsl_multifit_linear_gcv_curve and the inputs reg_param, UTy and delta0 are computed by gsl_multifit_linear_gcv_init.
The function gsl_multifit_linear_gcv_calc returns the value of the GCV curve G(\lambda) corresponding to the input lambda.
The function gsl_multifit_linear_gcv combines the steps gsl_multifit_linear_gcv_init, gsl_multifit_linear_gcv_curve, and gsl_multifit_linear_gcv_min defined above into a single function. The input y is the right hand side vector. On output, reg_param and G, which must be the same size, are set to vectors of \lambda and G(\lambda) values respectively. The output lambda is set to the optimal value of \lambda which minimizes the GCV curve. The minimum value of the GCV curve is returned in G_lambda.
The function gsl_multifit_linear_Lk computes the discrete approximation to the derivative operator L_k of order k on a regular grid of p points and stores it in L. The dimensions of L are (p-k)-by-p.
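For example, a second difference operator for p = 10 model parameters might be constructed as in this sketch:

  const size_t p = 10;                           /* number of model parameters */
  const size_t k = 2;                            /* derivative order */
  gsl_matrix * L = gsl_matrix_alloc (p - k, p);  /* L is (p-k)-by-p */

  gsl_multifit_linear_Lk (p, k, L);  /* discrete second derivative operator */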
The function gsl_multifit_linear_Lsobolev computes the regularization matrix L corresponding to the weighted Sobolev norm ||L c||^2 = \sum_k \alpha_k^2 ||L_k c||^2, where L_k approximates the derivative operator of order k. This regularization norm can be useful in applications where it is necessary to smooth several derivatives of the solution. p is the number of model parameters, kmax is the highest derivative to include in the summation above, and alpha is the vector of weights of size kmax + 1, where alpha[k] = \alpha_k is the weight assigned to the derivative of order k. The output matrix L is of size p-by-p and upper triangular.
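As a sketch, a Sobolev norm weighting the solution and its first two derivatives equally might be set up as follows (work is assumed to be an allocated gsl_multifit_linear_workspace):

  const size_t p = 10;    /* number of model parameters */
  const size_t kmax = 2;  /* include derivatives up to order 2 */
  gsl_vector * alpha = gsl_vector_alloc (kmax + 1);
  gsl_matrix * L = gsl_matrix_alloc (p, p);  /* p-by-p upper triangular */

  gsl_vector_set_all (alpha, 1.0);  /* equal weights alpha_k = 1 */
  gsl_multifit_linear_Lsobolev (p, kmax, alpha, L, work);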
The function gsl_multifit_linear_rcond returns the reciprocal condition number of the least squares matrix X, defined as the ratio of the smallest and largest singular values, rcond = \sigma_{min}/\sigma_{max}. The routine gsl_multifit_linear_svd must first be called to compute the SVD of X.
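For example, one might warn about ill-conditioning after computing the SVD (the fragment assumes <stdio.h> is included and X and work are set up by the caller; the threshold is an arbitrary choice):

  gsl_multifit_linear_svd (X, work);
  double rcond = gsl_multifit_linear_rcond (work);

  if (rcond < 1.0e-12)  /* arbitrary threshold */
    fprintf (stderr, "warning: X is ill-conditioned, rcond = %g\n", rcond);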