
34.9 Root Finding Algorithms using Derivatives

The root-polishing algorithms described in this section require an initial guess for the location of the root. There is no absolute guarantee of convergence: the function must be suitable for this technique and the initial guess must be sufficiently close to the root for it to work. When these conditions are satisfied, convergence is quadratic.

These algorithms make use of both the function and its derivative.
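Both quantities are supplied to a solver through a single gsl_function_fdf struct, which holds pointers to the function, its derivative, and a combined routine that evaluates both at once. As a minimal sketch, using the illustrative function f(x) = x^2 - 5 (an example chosen here, not taken from the library):

     #include <gsl/gsl_math.h>

     /* Illustrative callbacks for f(x) = x^2 - 5, whose positive
        root is sqrt(5).  The params pointer is unused here. */

     double quadratic (double x, void *params)
     {
       return x * x - 5.0;
     }

     double quadratic_deriv (double x, void *params)
     {
       return 2.0 * x;
     }

     void quadratic_fdf (double x, void *params, double *y, double *dy)
     {
       *y = x * x - 5.0;
       *dy = 2.0 * x;
     }

The combined fdf entry allows a solver to obtain both values in a single call when that is cheaper than two separate evaluations.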

Derivative Solver: gsl_root_fdfsolver_newton

Newton’s Method is the standard root-polishing algorithm. The algorithm begins with an initial guess for the location of the root. On each iteration, a line tangent to the function f is drawn at that position. The point where this line crosses the x-axis becomes the new guess. The iteration is defined by the following sequence,

x_{i+1} = x_i - f(x_i)/f'(x_i)

Newton’s method converges quadratically for single roots, and linearly for multiple roots.
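As a minimal sketch of how a derivative solver is driven (reusing the illustrative callbacks above; the tolerance 1e-6 and the iteration cap are arbitrary choices for this example):

     #include <stdio.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_roots.h>

     int main (void)
     {
       const gsl_root_fdfsolver_type *T = gsl_root_fdfsolver_newton;
       gsl_root_fdfsolver *s = gsl_root_fdfsolver_alloc (T);

       gsl_function_fdf FDF = { &quadratic, &quadratic_deriv,
                                &quadratic_fdf, NULL };

       double x0, x = 5.0;        /* initial guess near the root */
       int status, iter = 0;

       gsl_root_fdfsolver_set (s, &FDF, x);

       do
         {
           iter++;
           status = gsl_root_fdfsolver_iterate (s);
           x0 = x;
           x = gsl_root_fdfsolver_root (s);
           status = gsl_root_test_delta (x, x0, 0.0, 1e-6);
           printf ("iter %d: x = %.10f\n", iter, x);
         }
       while (status == GSL_CONTINUE && iter < 100);

       gsl_root_fdfsolver_free (s);
       return status;
     }

The same loop drives the other two solvers in this section; only the type T changes.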

Derivative Solver: gsl_root_fdfsolver_secant

The secant method is a simplified version of Newton’s method which does not require the computation of the derivative on every step.

On its first iteration the algorithm begins with Newton’s method, using the derivative to compute a first step,

x_1 = x_0 - f(x_0)/f'(x_0)

Subsequent iterations avoid the evaluation of the derivative by replacing it with a numerical estimate, the slope of the line through the previous two points,

x_{i+1} = x_i - f(x_i)/f'_{est} where
 f'_{est} = (f(x_i) - f(x_{i-1}))/(x_i - x_{i-1})

When the derivative does not change significantly in the vicinity of the root, the secant method gives a useful saving. Asymptotically the secant method is faster than Newton’s method whenever the cost of evaluating the derivative is more than 0.44 times the cost of evaluating the function itself. As with all methods of computing a numerical derivative the estimate can suffer from cancellation errors if the separation of the points becomes too small.
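The figure of 0.44 follows from comparing error reduction per unit cost (a standard efficiency-index argument, sketched here for completeness). If evaluating the derivative costs c times a function evaluation, each Newton step has order 2 at cost 1 + c, while each secant step has order \phi = (1 + \sqrt 5)/2 at cost 1. The secant method is asymptotically cheaper when

\phi > 2^{1/(1+c)},  i.e.  c > \ln 2 / \ln \phi - 1 \approx 0.44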

On single roots, the method has a convergence of order (1 + \sqrt 5)/2 (approximately 1.62). It converges linearly for multiple roots.
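To make the update rule concrete, here is a minimal standalone sketch in C (a hypothetical helper written for this example, not part of the library interface):

     /* One secant update: replaces f'(x_i) in the Newton step with
        the slope of the line through the previous two points. */
     double secant_step (double (*f) (double), double x_prev, double x_cur)
     {
       double f_cur  = f (x_cur);
       double f_prev = f (x_prev);
       double slope  = (f_cur - f_prev) / (x_cur - x_prev); /* f'_{est} */
       return x_cur - f_cur / slope;
     }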

Derivative Solver: gsl_root_fdfsolver_steffenson

The Steffensen method(14) provides the fastest convergence of all the routines. It combines the basic Newton algorithm with an Aitken “delta-squared” acceleration. If the Newton iterates are x_i then the acceleration procedure generates a new sequence R_i,

R_i = x_i - (x_{i+1} - x_i)^2 / (x_{i+2} - 2 x_{i+1} + x_{i})

which converges faster than the original sequence under reasonable conditions. The new sequence requires three terms before it can produce its first value, so the method returns accelerated values on the second and subsequent iterations. On the first iteration it returns the ordinary Newton estimate. The Newton iterate is also returned if the denominator of the acceleration term ever becomes zero.
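The acceleration step described above can be sketched directly (illustrative C, not the library’s internal code):

     /* Aitken delta-squared acceleration of three successive Newton
        iterates x0, x1, x2.  Returns the accelerated estimate R, or
        falls back to the latest Newton iterate when the denominator
        of the acceleration term is zero. */
     double aitken_accelerate (double x0, double x1, double x2)
     {
       double num = (x1 - x0) * (x1 - x0);
       double den = x2 - 2.0 * x1 + x0;

       if (den == 0.0)
         return x2;    /* ordinary Newton iterate */

       return x0 - num / den;
     }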

As with all acceleration procedures, this method can become unstable if the function is not well-behaved.


Footnotes

(14)

J.F. Steffensen (1873–1961). The spelling used in the name of the function is slightly incorrect, but has been preserved to avoid incompatibility.

