@cindex solving a nonlinear equation
@cindex nonlinear equation, solutions of
This chapter describes routines for finding roots of arbitrary
one-dimensional functions.  The library provides low level components
for a variety of iterative solvers and convergence tests.  These can be
combined by the user to achieve the desired solution, with full access
to the intermediate steps of the iteration.  Each class of methods uses
the same framework, so that you can switch between solvers at runtime
without needing to recompile your program.  Each instance of a solver
keeps track of its own state, allowing the solvers to be used in
multi-threaded programs.
The header file @file{gsl_roots.h} contains prototypes for the root
finding functions and related declarations.
@menu
* Root Finding Overview::
* Root Finding Caveats::
* Initializing the Solver::
* Providing the function to solve::
* Search Bounds and Guesses::
* Root Finding Iteration::
* Search Stopping Parameters::
* Root Bracketing Algorithms::
* Root Finding Algorithms using Derivatives::
* Root Finding Examples::
* Root Finding References and Further Reading::
@end menu
@node Root Finding Overview
@section Overview
@cindex root finding, overview
One-dimensional root finding algorithms can be divided into two classes,
@dfn{root bracketing} and @dfn{root polishing}.  Algorithms which proceed
by bracketing a root are guaranteed to converge.  Bracketing algorithms
begin with a bounded region known to contain a root.  The size of this
bounded region is reduced, iteratively, until it encloses the root to a
desired tolerance.  This provides a rigorous error estimate for the
location of the root.
The technique of @dfn{root polishing} attempts to improve an initial
guess to the root.  These algorithms converge only if started ``close
enough'' to a root, and sacrifice a rigorous error bound for speed.  By
approximating the behavior of a function in the vicinity of a root they
attempt to find a higher order improvement of an initial guess.  When the
behavior of the function is compatible with the algorithm and a good
initial guess is available a polishing algorithm can provide rapid
convergence.
In GSL both types of algorithm are available in similar frameworks.  The
user provides a high-level driver for the algorithms, and the library
provides the individual functions necessary for each of the steps.
There are three main phases of the iteration.  The steps are,

@itemize @bullet
@item
initialize solver state, @var{s}, for algorithm @var{T}

@item
update @var{s} using the iteration @var{T}

@item
test @var{s} for convergence, and repeat iteration if necessary
@end itemize
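
These three phases map directly onto library calls described in the rest
of this chapter.  The following fragment is a minimal sketch of a
complete driver loop for a bracketing solver; the solver type, search
interval and tolerances are chosen purely for illustration, and the
@code{gsl_function} @code{F} is assumed to be defined as described in
@ref{Providing the function to solve}.  A full worked example is given
in @ref{Root Finding Examples}.

@example
const gsl_root_fsolver_type * T = gsl_root_fsolver_brent;
gsl_root_fsolver * s = gsl_root_fsolver_alloc (T);   /* initialize */

int status;
double x_lo, x_hi;

gsl_root_fsolver_set (s, &F, 0.0, 5.0);

do
  @{
    status = gsl_root_fsolver_iterate (s);    /* update */
    x_lo = gsl_root_fsolver_x_lower (s);
    x_hi = gsl_root_fsolver_x_upper (s);
    status = gsl_root_test_interval (x_lo, x_hi,
                                     0, 0.001);   /* test */
  @}
while (status == GSL_CONTINUE);

gsl_root_fsolver_free (s);
@end example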
The state for bracketing solvers is held in a @code{gsl_root_fsolver}
struct.  The updating procedure uses only function evaluations (not
derivatives).  The state for root polishing solvers is held in a
@code{gsl_root_fdfsolver} struct.  The updates require both the function
and its derivative (hence the name @code{fdf}) to be supplied by the
user.
@node Root Finding Caveats
@section Caveats
@cindex root finding, caveats
Note that root finding functions can only search for one root at a time.
When there are several roots in the search area, the first root to be
found will be returned; however, it is difficult to predict which of the
roots this will be.  @emph{In most cases, no error will be reported if
you try to find a root in an area where there is more than one.}
Care must be taken when a function may have a multiple root (such as
@c{$f(x) = (x-x_0)^2$}
@math{f(x) = (x-x_0)^2} or
@c{$f(x) = (x-x_0)^3$}
@math{f(x) = (x-x_0)^3}).
It is not possible to use root-bracketing algorithms on
even-multiplicity roots.  For these algorithms the initial interval must
contain a zero-crossing, where the function is negative at one end of
the interval and positive at the other end.  Roots with even-multiplicity
do not cross zero, but only touch it instantaneously.  Algorithms based
on root bracketing will still work for odd-multiplicity roots
(e.g. cubic, quintic, @dots{}).
Root polishing algorithms generally work with higher multiplicity roots,
but at a reduced rate of convergence.  In these cases the @dfn{Steffenson
algorithm} can be used to accelerate the convergence of multiple roots.
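
Before handing an interval to a bracketing solver it can therefore be
worth checking the sign condition explicitly.  The following fragment is
a minimal sketch, assuming a @code{gsl_function} @code{F} (defined as in
@ref{Providing the function to solve}) and caller-chosen endpoints
@code{a} and @code{b},

@example
double fa = GSL_FN_EVAL (&F, a);
double fb = GSL_FN_EVAL (&F, b);

if (fa * fb > 0.0)
  @{
    /* no sign change: [a,b] is not a valid bracket, and
       gsl_root_fsolver_set will report an error for it */
  @}
@end example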
While it is not absolutely required that @math{f} have a root within the
search region, numerical root finding functions should not be used
haphazardly to check for the @emph{existence} of roots.  There are better
ways to do this.  Because it is easy to create situations where numerical
root finders can fail, it is a bad idea to throw a root finder at a
function you do not know much about.  In general it is best to examine
the function visually by plotting before searching for a root.
@node Initializing the Solver
@section Initializing the Solver

@deftypefun {gsl_root_fsolver *} gsl_root_fsolver_alloc (const gsl_root_fsolver_type * @var{T})
This function returns a pointer to a newly allocated instance of a
solver of type @var{T}.  For example, the following code creates an
instance of a bisection solver,

@example
const gsl_root_fsolver_type * T
  = gsl_root_fsolver_bisection;
gsl_root_fsolver * s
  = gsl_root_fsolver_alloc (T);
@end example

If there is insufficient memory to create the solver then the function
returns a null pointer and the error handler is invoked with an error
code of @code{GSL_ENOMEM}.
@end deftypefun
@deftypefun {gsl_root_fdfsolver *} gsl_root_fdfsolver_alloc (const gsl_root_fdfsolver_type * @var{T})
This function returns a pointer to a newly allocated instance of a
derivative-based solver of type @var{T}.  For example, the following
code creates an instance of a Newton-Raphson solver,

@example
const gsl_root_fdfsolver_type * T
  = gsl_root_fdfsolver_newton;
gsl_root_fdfsolver * s
  = gsl_root_fdfsolver_alloc (T);
@end example

If there is insufficient memory to create the solver then the function
returns a null pointer and the error handler is invoked with an error
code of @code{GSL_ENOMEM}.
@end deftypefun
@deftypefun int gsl_root_fsolver_set (gsl_root_fsolver * @var{s}, gsl_function * @var{f}, double @var{x_lower}, double @var{x_upper})
This function initializes, or reinitializes, an existing solver @var{s}
to use the function @var{f} and the initial search interval
[@var{x_lower}, @var{x_upper}].
@end deftypefun
@deftypefun int gsl_root_fdfsolver_set (gsl_root_fdfsolver * @var{s}, gsl_function_fdf * @var{fdf}, double @var{root})
This function initializes, or reinitializes, an existing solver @var{s}
to use the function and derivative @var{fdf} and the initial guess
@var{root}.
@end deftypefun
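
For example, both kinds of solver are attached to a problem with a
single call.  In the fragment below the function objects @code{F} and
@code{FDF} are assumed to be set up as described in @ref{Providing the
function to solve}, the solvers @code{s} and @code{sdf} to have been
allocated as above, and the interval and guess are purely illustrative,

@example
gsl_root_fsolver_set (s, &F, 0.0, 5.0);    /* bracketing: an interval */
gsl_root_fdfsolver_set (sdf, &FDF, 5.0);   /* polishing: a guess */
@end example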
@deftypefun void gsl_root_fsolver_free (gsl_root_fsolver * @var{s})
@deftypefunx void gsl_root_fdfsolver_free (gsl_root_fdfsolver * @var{s})
These functions free all the memory associated with the solver @var{s}.
@end deftypefun
@deftypefun {const char *} gsl_root_fsolver_name (const gsl_root_fsolver * @var{s})
@deftypefunx {const char *} gsl_root_fdfsolver_name (const gsl_root_fdfsolver * @var{s})
These functions return a pointer to the name of the solver.  For example,

@example
printf ("s is a '%s' solver\n",
        gsl_root_fsolver_name (s));
@end example

would print something like @code{s is a 'bisection' solver}.
@end deftypefun
@node Providing the function to solve
@section Providing the function to solve
@cindex root finding, providing a function to solve

You must provide a continuous function of one variable for the root
finders to operate on, and, sometimes, its first derivative.  In order
to allow for general parameters the functions are defined by the
following data types:
@deftp {Data Type} gsl_function
This data type defines a general function with parameters.

@table @code
@item double (* function) (double @var{x}, void * @var{params})
this function should return the value
@c{$f(x,\hbox{\it params})$}
@math{f(x,params)} for argument @var{x} and parameters @var{params}

@item void * params
a pointer to the parameters of the function
@end table
@end deftp
Here is an example for the general quadratic function,
@tex
\beforedisplay
$$
f(x) = a x^2 + b x + c
$$
\afterdisplay
@end tex
@ifinfo
@example
f(x) = a x^2 + b x + c
@end example
@end ifinfo
@noindent
with @math{a = 3}, @math{b = 2}, @math{c = 1}.  The following code
defines a @code{gsl_function} @code{F} which you could pass to a root
finder as follows,
@example
struct my_f_params @{ double a; double b; double c; @};

double
my_f (double x, void * p)
@{
  struct my_f_params * params
    = (struct my_f_params *) p;
  double a = (params->a);
  double b = (params->b);
  double c = (params->c);

  return (a * x + b) * x + c;
@}

gsl_function F;
struct my_f_params params = @{ 3.0, 2.0, 1.0 @};

F.function = &my_f;
F.params = &params;
@end example
The function @math{f(x)} can be evaluated using the following macro,

@example
#define GSL_FN_EVAL(F,x)  (*((F)->function))(x,(F)->params)
@end example
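
For example, with the quadratic @code{F} defined above (@math{a = 3},
@math{b = 2}, @math{c = 1}) the value at @math{x = 2}, namely
@math{3(4) + 2(2) + 1 = 17}, can be computed as follows,

@example
double y = GSL_FN_EVAL (&F, 2.0);
printf ("f(2) = %g\n", y);   /* prints f(2) = 17 */
@end example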
@deftp {Data Type} gsl_function_fdf
This data type defines a general function with parameters and its first
derivative.

@table @code
@item double (* f) (double @var{x}, void * @var{params})
this function should return the value of
@c{$f(x,\hbox{\it params})$}
@math{f(x,params)} for argument @var{x} and parameters @var{params}

@item double (* df) (double @var{x}, void * @var{params})
this function should return the value of the derivative of @var{f} with
respect to @var{x},
@c{$f'(x,\hbox{\it params})$}
@math{f'(x,params)}, for argument @var{x} and parameters @var{params}

@item void (* fdf) (double @var{x}, void * @var{params}, double * @var{f}, double * @var{df})
this function should set the values of the function @var{f} to
@c{$f(x,\hbox{\it params})$}
@math{f(x,params)}
and its derivative @var{df} to
@c{$f'(x,\hbox{\it params})$}
@math{f'(x,params)}
for argument @var{x} and parameters @var{params}.  This function
provides an optimization of the separate functions for @math{f(x)} and
@math{f'(x)}---it is always faster to compute the function and its
derivative at the same time.

@item void * params
a pointer to the parameters of the function
@end table
@end deftp
Here is an example where
@c{$f(x) = \exp(2x)$}
@math{f(x) = \exp(2x)}:

@example
double
my_f (double x, void * params)
@{
  return exp (2 * x);
@}

double
my_df (double x, void * params)
@{
  return 2 * exp (2 * x);
@}

void
my_fdf (double x, void * params,
        double * f, double * df)
@{
  double t = exp (2 * x);

  *f = t;
  *df = 2 * t;   /* uses existing value */
@}

gsl_function_fdf FDF;

FDF.f = &my_f;
FDF.df = &my_df;
FDF.fdf = &my_fdf;
FDF.params = 0;
@end example
The function @math{f(x)} can be evaluated using the following macro,

@example
#define GSL_FN_FDF_EVAL_F(FDF,x)  (*((FDF)->f))(x,(FDF)->params)
@end example

The derivative @math{f'(x)} can be evaluated using the following macro,

@example
#define GSL_FN_FDF_EVAL_DF(FDF,x)  (*((FDF)->df))(x,(FDF)->params)
@end example

and both the function @math{y = f(x)} and its derivative @math{dy = f'(x)}
can be evaluated at the same time using the following macro,

@example
#define GSL_FN_FDF_EVAL_F_DF(FDF,x,y,dy) \
        (*((FDF)->fdf))(x,(FDF)->params,(y),(dy))
@end example

The macro stores @math{f(x)} in its @var{y} argument and @math{f'(x)} in
its @var{dy} argument---both of these should be pointers to
@code{double}.
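
Continuing the @math{f(x) = \exp(2x)} example above, the combined macro
fills both output variables through the pointers passed as its final two
arguments,

@example
double y, dy;
GSL_FN_FDF_EVAL_F_DF (&FDF, 1.0, &y, &dy);
/* y = exp(2) = 7.389..., dy = 2 exp(2) = 14.778... */
@end example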
@node Search Bounds and Guesses
@section Search Bounds and Guesses
@cindex root finding, search bounds
@cindex root finding, initial guess

You provide either search bounds or an initial guess; this section
explains how search bounds and guesses work and how function arguments
control them.

A guess is simply an @math{x} value which is iterated until it is within
the desired precision of a root.  It takes the form of a @code{double}.

Search bounds are the endpoints of an interval which is iterated until
the length of the interval is smaller than the requested precision.  The
interval is defined by two values, the lower limit and the upper limit.
Whether the endpoints are intended to be included in the interval or not
depends on the context in which the interval is used.
@node Root Finding Iteration
@section Iteration

The following functions drive the iteration of each algorithm.  Each
function performs one iteration to update the state of any solver of the
corresponding type.  The same functions work for all solvers so that
different methods can be substituted at runtime without modifications to
the code.
@deftypefun int gsl_root_fsolver_iterate (gsl_root_fsolver * @var{s})
@deftypefunx int gsl_root_fdfsolver_iterate (gsl_root_fdfsolver * @var{s})
These functions perform a single iteration of the solver @var{s}.  If the
iteration encounters an unexpected problem then an error code will be
returned,

@table @code
@item GSL_EBADFUNC
the iteration encountered a singular point where the function or its
derivative evaluated to @code{Inf} or @code{NaN}.

@item GSL_EZERODIV
the derivative of the function vanished at the iteration point,
preventing the algorithm from continuing without a division by zero.
@end table
@end deftypefun
The solver maintains a current best estimate of the root at all
times.  The bracketing solvers also keep track of the current best
interval bounding the root.  This information can be accessed with the
following auxiliary functions,
@deftypefun double gsl_root_fsolver_root (const gsl_root_fsolver * @var{s})
@deftypefunx double gsl_root_fdfsolver_root (const gsl_root_fdfsolver * @var{s})
These functions return the current estimate of the root for the solver @var{s}.
@end deftypefun
@deftypefun double gsl_root_fsolver_x_lower (const gsl_root_fsolver * @var{s})
@deftypefunx double gsl_root_fsolver_x_upper (const gsl_root_fsolver * @var{s})
These functions return the current bracketing interval for the solver @var{s}.
@end deftypefun
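
For example, a driver loop can report its progress after each iteration
using these accessors; the width of the bracketing interval serves as a
conservative error estimate (a sketch, assuming the solver @code{s} has
been set up as shown earlier),

@example
gsl_root_fsolver_iterate (s);

double r    = gsl_root_fsolver_root (s);
double x_lo = gsl_root_fsolver_x_lower (s);
double x_hi = gsl_root_fsolver_x_upper (s);

printf ("root = %.7f in [%.7f, %.7f], err(est) = %.7f\n",
        r, x_lo, x_hi, x_hi - x_lo);
@end example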
@node Search Stopping Parameters
@section Search Stopping Parameters
@cindex root finding, stopping parameters

A root finding procedure should stop when one of the following conditions is
true:

@itemize @bullet
@item
A root has been found to within the user-specified precision.

@item
A user-specified maximum number of iterations has been reached.

@item
An error has occurred.
@end itemize

The handling of these conditions is under user control.  The functions
below allow the user to test the precision of the current result in
several standard ways.
@deftypefun int gsl_root_test_interval (double @var{x_lower}, double @var{x_upper}, double @var{epsabs}, double @var{epsrel})
This function tests for the convergence of the interval [@var{x_lower},
@var{x_upper}] with absolute error @var{epsabs} and relative error
@var{epsrel}.  The test returns @code{GSL_SUCCESS} if the following
condition is achieved,
@tex
\beforedisplay
$$
|a - b| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, \min(|a|,|b|)
$$
\afterdisplay
@end tex
@ifinfo
@example
|a - b| < epsabs + epsrel min(|a|,|b|)
@end example
@end ifinfo
@noindent
when the interval @math{x = [a,b]} does not include the origin.  If the
interval includes the origin then @math{\min(|a|,|b|)} is replaced by
zero (which is the minimum value of @math{|x|} over the interval).  This
ensures that the relative error is accurately estimated for roots close
to the origin.

This condition on the interval also implies that any estimate of the
root @math{r} in the interval satisfies the same condition with respect
to the true root @math{r^*},
@tex
\beforedisplay
$$
|r - r^*| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, r^*
$$
\afterdisplay
@end tex
@ifinfo
@example
|r - r^*| < epsabs + epsrel r^*
@end example
@end ifinfo
@noindent
assuming that the true root @math{r^*} is contained within the interval.
@end deftypefun
@deftypefun int gsl_root_test_delta (double @var{x1}, double @var{x0}, double @var{epsabs}, double @var{epsrel})
This function tests for the convergence of the sequence @dots{}, @var{x0},
@var{x1} with absolute error @var{epsabs} and relative error
@var{epsrel}.  The test returns @code{GSL_SUCCESS} if the following
condition is achieved,
@tex
\beforedisplay
$$
|x_1 - x_0| < \hbox{\it epsabs} + \hbox{\it epsrel\/}\, |x_1|
$$
\afterdisplay
@end tex
@ifinfo
@example
|x_1 - x_0| < epsabs + epsrel |x_1|
@end example
@end ifinfo
@noindent
and returns @code{GSL_CONTINUE} otherwise.
@end deftypefun
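
This test is the natural stopping criterion for the root polishing
solvers, which produce a sequence of iterates rather than a shrinking
interval.  A minimal sketch, assuming a derivative solver @code{sdf}
initialized with the illustrative guess @math{x = 5} and an illustrative
tolerance,

@example
int status;
double x0, x = 5.0;

do
  @{
    status = gsl_root_fdfsolver_iterate (sdf);
    x0 = x;
    x = gsl_root_fdfsolver_root (sdf);
    status = gsl_root_test_delta (x, x0, 0, 1e-3);
  @}
while (status == GSL_CONTINUE);
@end example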
@deftypefun int gsl_root_test_residual (double @var{f}, double @var{epsabs})
This function tests the residual value @var{f} against the absolute
error bound @var{epsabs}.  The test returns @code{GSL_SUCCESS} if the
following condition is achieved,
@tex
\beforedisplay
$$
|f| < \hbox{\it epsabs}
$$
\afterdisplay
@end tex
@ifinfo
@example
|f| < epsabs
@end example
@end ifinfo
@noindent
and returns @code{GSL_CONTINUE} otherwise.  This criterion is suitable
for situations where the precise location of the root, @math{x}, is
unimportant provided a value can be found where the residual,
@math{|f(x)|}, is small enough.
@end deftypefun
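
For example, the residual at the current estimate can be obtained with
the evaluation macro described earlier and passed directly to the test
(a sketch, with an illustrative bound),

@example
double r = gsl_root_fsolver_root (s);
int status = gsl_root_test_residual (GSL_FN_EVAL (&F, r), 1e-6);
@end example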
@comment ============================================================

@node Root Bracketing Algorithms
@section Root Bracketing Algorithms

The root bracketing algorithms described in this section require an
initial interval which is guaranteed to contain a root---if @math{a}
and @math{b} are the endpoints of the interval then @math{f(a)} must
differ in sign from @math{f(b)}.  This ensures that the function crosses
zero at least once in the interval.  If a valid initial interval is used
then these algorithms cannot fail, provided the function is well-behaved.

Note that a bracketing algorithm cannot find roots of even degree, since
these do not cross the @math{x}-axis.
@deffn {Solver} gsl_root_fsolver_bisection
@cindex bisection algorithm for finding roots
@cindex root finding, bisection algorithm

The @dfn{bisection algorithm} is the simplest method of bracketing the
roots of a function.  It is the slowest algorithm provided by
the library, with linear convergence.

On each iteration, the interval is bisected and the value of the
function at the midpoint is calculated.  The sign of this value is used
to determine which half of the interval does not contain a root.  That
half is discarded to give a new, smaller interval containing the
root.  This procedure can be continued indefinitely until the interval is
sufficiently small.

At any time the current estimate of the root is taken as the midpoint of
the interval.
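
Since each step halves the interval, the number of iterations needed to
reach a given tolerance is easy to predict: reducing an initial width
@math{|b-a|} to a width @math{\epsilon} takes about
@math{\log_2 (|b-a|/\epsilon)} bisections, independent of the function.
For example, narrowing an interval of width 5 to @math{10^@{-6@}} takes
23 iterations (@math{\log_2 (5 \times 10^6) \approx 22.3}).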
@comment eps file "roots-bisection.eps"
@comment @quotation
@comment @center @image{roots-bisection,3.4in}
@comment Four iterations of bisection, where @math{a_n} is the @math{n}th position
@comment of the beginning of the interval and @math{b_n} is the @math{n}th position
@comment of the end.  The midpoint of each interval is also indicated.
@comment @end quotation
@end deffn

@comment ============================================================
@deffn {Solver} gsl_root_fsolver_falsepos
@cindex false position algorithm for finding roots
@cindex root finding, false position algorithm

The @dfn{false position algorithm} is a method of finding roots based on
linear interpolation.  Its convergence is linear, but it is usually
faster than bisection.

On each iteration a line is drawn between the endpoints @math{(a,f(a))}
and @math{(b,f(b))} and the point where this line crosses the
@math{x}-axis is taken as a ``midpoint''.  The value of the function at
this point is calculated and its sign is used to determine which side of
the interval does not contain a root.  That side is discarded to give a
new, smaller interval containing the root.  This procedure can be
continued indefinitely until the interval is sufficiently small.

The best estimate of the root is taken from the linear interpolation of
the interval on the current iteration.
@comment eps file "roots-false-position.eps"
@comment @quotation
@comment @image{roots-false-position,4in}
@comment Several iterations of false position, where @math{a_n} is the @math{n}th
@comment position of the beginning of the interval and @math{b_n} is the
@comment @math{n}th position of the end.
@comment @end quotation
@end deffn

@comment ============================================================
@deffn {Solver} gsl_root_fsolver_brent
@cindex Brent's method for finding roots
@cindex root finding, Brent's method

The @dfn{Brent-Dekker method} (referred to here as @dfn{Brent's method})
combines an interpolation strategy with the bisection algorithm.  This
produces a fast algorithm which is still robust.

On each iteration Brent's method approximates the function using an
interpolating curve.  On the first iteration this is a linear
interpolation of the two endpoints.  For subsequent iterations the
algorithm uses an inverse quadratic fit to the last three points, for
higher accuracy.  The intercept of the interpolating curve with the
@math{x}-axis is taken as a guess for the root.  If it lies within the
bounds of the current interval then the interpolating point is accepted,
and used to generate a smaller interval.  If the interpolating point is
not accepted then the algorithm falls back to an ordinary bisection
step.

The best estimate of the root is taken from the most recent
interpolation or bisection.
@end deffn

@comment ============================================================
@node Root Finding Algorithms using Derivatives
@section Root Finding Algorithms using Derivatives

The root polishing algorithms described in this section require an
initial guess for the location of the root.  There is no absolute
guarantee of convergence---the function must be suitable for this
technique and the initial guess must be sufficiently close to the root
for it to work.  When these conditions are satisfied then convergence is
quadratic.

These algorithms make use of both the function and its derivative.
@deffn {Derivative Solver} gsl_root_fdfsolver_newton
@cindex Newton's method for finding roots
@cindex root finding, Newton's method

Newton's method is the standard root-polishing algorithm.  The algorithm
begins with an initial guess for the location of the root.  On each
iteration, a line tangent to the function @math{f} is drawn at that
position.  The point where this line crosses the @math{x}-axis becomes
the new guess.  The iteration is defined by the following sequence,
@tex
\beforedisplay
$$
x_{i+1} = x_i - {f(x_i) \over f'(x_i)}
$$
\afterdisplay
@end tex
@ifinfo
@example
x_@{i+1@} = x_i - f(x_i)/f'(x_i)
@end example
@end ifinfo
@noindent
Newton's method converges quadratically for single roots, and linearly
for multiple roots.
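
As a concrete check of the formula, take @math{f(x) = x^2 - 5} with
initial guess @math{x_0 = 5} (the problem solved in @ref{Root Finding
Examples}): the first step gives @math{x_1 = 5 - 20/10 = 3} and the
second @math{x_2 = 3 - 4/6 = 2.3333@dots{}}, matching the first two rows
of the Newton output shown there.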
@comment eps file "roots-newtons-method.eps"
@comment @quotation
@comment @center @image{roots-newtons-method,3.4in}
@comment Several iterations of Newton's method, where @math{g_n} is the
@comment @math{n}th guess.
@comment @end quotation
@end deffn

@comment ============================================================
@deffn {Derivative Solver} gsl_root_fdfsolver_secant
@cindex secant method for finding roots
@cindex root finding, secant method

The @dfn{secant method} is a simplified version of Newton's method which does
not require the computation of the derivative on every step.

On its first iteration the algorithm begins with Newton's method, using
the derivative to compute a first step,
@tex
\beforedisplay
$$
x_1 = x_0 - {f(x_0) \over f'(x_0)}
$$
\afterdisplay
@end tex
@ifinfo
@example
x_1 = x_0 - f(x_0)/f'(x_0)
@end example
@end ifinfo
@noindent
Subsequent iterations avoid the evaluation of the derivative by
replacing it with a numerical estimate, the slope of the line through
the previous two points,
@tex
\beforedisplay
$$
x_{i+1} = x_i - {f(x_i) \over f'_{est}}
\quad\hbox{where}\quad
f'_{est} = {f(x_{i}) - f(x_{i-1}) \over x_i - x_{i-1}}
$$
\afterdisplay
@end tex
@ifinfo
@example
x_@{i+1@} = x_i - f(x_i) / f'_@{est@}  where
f'_@{est@} = (f(x_i) - f(x_@{i-1@})) / (x_i - x_@{i-1@})
@end example
@end ifinfo
@noindent
When the derivative does not change significantly in the vicinity of the
root the secant method gives a useful saving.  Asymptotically the secant
method is faster than Newton's method whenever the cost of evaluating
the derivative is more than 0.44 times the cost of evaluating the
function itself.  As with all methods of computing a numerical
derivative the estimate can suffer from cancellation errors if the
separation of the points becomes too small.

On single roots, the method has a convergence of order @math{(1 + \sqrt 5)/2}
(approximately @math{1.62}).  It converges linearly for multiple roots.
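
The figure of 0.44 quoted above follows from comparing efficiency
indices, @math{p^@{1/w@}} for order @math{p} at a cost of @math{w}
evaluations per step: the secant method attains order 1.62 for one
function evaluation, while Newton's method attains order 2 for
@math{1+c} evaluations, where @math{c} is the cost of the derivative
relative to the function.  The two indices coincide when
@math{2^@{1/(1+c)@} = 1.62}, i.e.@: @math{1+c = \ln 2/\ln 1.62 \approx 1.44},
giving @math{c \approx 0.44}.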
@comment eps file "roots-secant-method.eps"
@comment \centerline{\epsfxsize=5in\epsfbox{roots-secant-method.eps}}
@comment @quotation
@comment Several iterations of the secant method, where @math{g_n} is the
@comment @math{n}th guess.
@comment @end quotation
@end deffn

@comment ============================================================
@deffn {Derivative Solver} gsl_root_fdfsolver_steffenson
@cindex Steffenson's method for finding roots
@cindex root finding, Steffenson's method

The @dfn{Steffenson method} provides the fastest convergence of all the
routines.  It combines the basic Newton algorithm with an Aitken
``delta-squared'' acceleration.  If the Newton iterates are @math{x_i}
then the acceleration procedure generates a new sequence @math{R_i},
@tex
\beforedisplay
$$
R_i = x_i - {(x_{i+1} - x_i)^2 \over (x_{i+2} - 2 x_{i+1} + x_i)}
$$
\afterdisplay
@end tex
@ifinfo
@example
R_i = x_i - (x_@{i+1@} - x_i)^2 / (x_@{i+2@} - 2 x_@{i+1@} + x_i)
@end example
@end ifinfo
@noindent
which converges faster than the original sequence under reasonable
conditions.  The new sequence requires three terms before it can produce
its first value so the method returns accelerated values on the second
and subsequent iterations.  On the first iteration it returns the
ordinary Newton estimate.  The Newton iterate is also returned if the
denominator of the acceleration term ever becomes zero.
As with all acceleration procedures this method can become unstable if
the function is not well-behaved.
@end deffn
@node Root Finding Examples
@section Examples

For any root finding algorithm we need to prepare the function to be
solved.  For this example we will use the general quadratic equation
described earlier.  We first need a header file (@file{demo_fn.h}) to
define the function parameters,

@example
@verbatiminclude examples/demo_fn.h
@end example

We place the function definitions in a separate file (@file{demo_fn.c}),

@example
@verbatiminclude examples/demo_fn.c
@end example
The first program uses the function solver @code{gsl_root_fsolver_brent}
for Brent's method and the general quadratic defined above to solve the
following equation,
@tex
\beforedisplay
$$
x^2 - 5 = 0
$$
\afterdisplay
@end tex
@ifinfo
@example
x^2 - 5 = 0
@end example
@end ifinfo
@noindent
with solution @math{x = \sqrt 5 = 2.236068@dots{}}

@example
@verbatiminclude examples/roots.c
@end example
Here are the results of the iterations,

@smallexample
using brent method
 iter [    lower,     upper]      root        err  err(est)
    1 [1.0000000, 5.0000000] 1.0000000 -1.2360680 4.0000000
    2 [1.0000000, 3.0000000] 3.0000000 +0.7639320 2.0000000
    3 [2.0000000, 3.0000000] 2.0000000 -0.2360680 1.0000000
    4 [2.2000000, 3.0000000] 2.2000000 -0.0360680 0.8000000
    5 [2.2000000, 2.2366300] 2.2366300 +0.0005621 0.0366300
Converged:
    6 [2.2360634, 2.2366300] 2.2360634 -0.0000046 0.0005666
@end smallexample
If the program is modified to use the bisection solver instead of
Brent's method, by changing @code{gsl_root_fsolver_brent} to
@code{gsl_root_fsolver_bisection}, the slower convergence of the
bisection method can be observed,

@smallexample
using bisection method
 iter [    lower,     upper]      root        err  err(est)
    1 [0.0000000, 2.5000000] 1.2500000 -0.9860680 2.5000000
    2 [1.2500000, 2.5000000] 1.8750000 -0.3610680 1.2500000
    3 [1.8750000, 2.5000000] 2.1875000 -0.0485680 0.6250000
    4 [2.1875000, 2.5000000] 2.3437500 +0.1076820 0.3125000
    5 [2.1875000, 2.3437500] 2.2656250 +0.0295570 0.1562500
    6 [2.1875000, 2.2656250] 2.2265625 -0.0095055 0.0781250
    7 [2.2265625, 2.2656250] 2.2460938 +0.0100258 0.0390625
    8 [2.2265625, 2.2460938] 2.2363281 +0.0002601 0.0195312
    9 [2.2265625, 2.2363281] 2.2314453 -0.0046227 0.0097656
   10 [2.2314453, 2.2363281] 2.2338867 -0.0021813 0.0048828
   11 [2.2338867, 2.2363281] 2.2351074 -0.0009606 0.0024414
Converged:
   12 [2.2351074, 2.2363281] 2.2357178 -0.0003502 0.0012207
@end smallexample
The next program solves the same function using a derivative solver
instead,

@example
@verbatiminclude examples/rootnewt.c
@end example
Here are the results for Newton's method,

@smallexample
using newton method
 iter        root        err   err(est)
    1   3.0000000 +0.7639320 -2.0000000
    2   2.3333333 +0.0972654 -0.6666667
    3   2.2380952 +0.0020273 -0.0952381
Converged:
    4   2.2360689 +0.0000009 -0.0020263
@end smallexample
Note that the error can be estimated more accurately by taking the
difference between the current iterate and next iterate rather than the
previous iterate.  The other derivative solvers can be investigated by
changing @code{gsl_root_fdfsolver_newton} to
@code{gsl_root_fdfsolver_secant} or
@code{gsl_root_fdfsolver_steffenson}.
@node Root Finding References and Further Reading
@section References and Further Reading

For information on the Brent-Dekker algorithm see the following two
papers,

@itemize @w{}
@item
R. P. Brent, ``An algorithm with guaranteed convergence for finding a
zero of a function'', @cite{Computer Journal}, 14 (1971) 422--425

@item
J. C. P. Bus and T. J. Dekker, ``Two Efficient Algorithms with Guaranteed
Convergence for Finding a Zero of a Function'', @cite{ACM Transactions on
Mathematical Software}, Vol.@: 1 No.@: 4 (1975) 330--345
@end itemize