@cindex differential equations, initial value problems
@cindex initial value problems, differential equations
@cindex ordinary differential equations, initial value problem
@cindex ODEs, initial value problems

This chapter describes functions for solving ordinary differential
equation (ODE) initial value problems. The library provides a variety
of low-level methods, such as Runge-Kutta and Bulirsch-Stoer routines,
and higher-level components for adaptive step-size control. The
components can be combined by the user to achieve the desired solution,
with full access to any intermediate steps.

These functions are declared in the header file @file{gsl_odeiv.h}.
@menu
* Defining the ODE System::
* Stepping Functions::
* Adaptive Step-size Control::
* Evolution::
* ODE Example programs::
* ODE References and Further Reading::
@end menu
@node Defining the ODE System
@section Defining the ODE System

The routines solve the general @math{n}-dimensional first-order system,

@tex
$$
{dy_i(t) \over dt} = f_i (t, y_1(t), \dots y_n(t))
$$
@end tex
@ifinfo
@example
dy_i(t)/dt = f_i(t, y_1(t), ..., y_n(t))
@end example
@end ifinfo

@noindent
for @math{i = 1, \dots, n}. The stepping functions rely on the vector
of derivatives @math{f_i} and the Jacobian matrix,
@c{$J_{ij} = \partial f_i(t, y(t)) / \partial y_j$}
@math{J_@{ij@} = df_i(t,y(t)) / dy_j}.
A system of equations is defined using the @code{gsl_odeiv_system}
datatype.
@deftp {Data Type} gsl_odeiv_system
This data type defines a general ODE system with arbitrary parameters.

@table @code
@item int (* function) (double t, const double y[], double dydt[], void * params)
This function should store the vector elements
@c{$f_i(t,y,\hbox{\it params})$}
@math{f_i(t,y,params)} in the array @var{dydt},
for arguments (@var{t},@var{y}) and parameters @var{params}.
The function should return @code{GSL_SUCCESS} if the calculation
was completed successfully. Any other return value indicates
an error.
@item int (* jacobian) (double t, const double y[], double * dfdy, double dfdt[], void * params)
This function should store the vector of derivative elements
@c{$\partial f_i(t,y,params) / \partial t$}
@math{df_i(t,y,params)/dt} in the array @var{dfdt} and the
Jacobian matrix @c{$J_{ij}$}
@math{J_@{ij@}} in the array
@var{dfdy}, regarded as a row-ordered matrix @code{J(i,j) = dfdy[i * dimension + j]}
where @code{dimension} is the dimension of the system.
The function should return @code{GSL_SUCCESS} if the calculation
was completed successfully. Any other return value indicates
an error.
Some of the simpler solver algorithms do not make use of the Jacobian
matrix, so it is not always strictly necessary to provide it (the
@code{jacobian} element of the struct can be replaced by a null pointer
for those algorithms). However, it is useful to provide the Jacobian to allow
the solver algorithms to be interchanged---the best algorithms make use
of the Jacobian.

@item size_t dimension;
This is the dimension of the system of equations.

@item void * params;
This is a pointer to the arbitrary parameters of the system.
@end table
@end deftp
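As an illustration (a sketch, not taken from the GSL sources), the Van
der Pol oscillator used later in this chapter can be written in this
form. In GSL, @code{GSL_SUCCESS} is defined as 0 in
@file{gsl_errno.h}; it is redefined below only to keep the sketch
self-contained.

```c
/* GSL_SUCCESS is 0 in <gsl/gsl_errno.h>; defined here only to keep
   this sketch self-contained. */
#define GSL_SUCCESS 0

/* Van der Pol oscillator x'' + mu x' (x^2 - 1) + x = 0, written as the
   first-order system y0' = y1, y1' = -y0 + mu y1 (1 - y0^2). */
int
func (double t, const double y[], double dydt[], void *params)
{
  double mu = *(double *) params;
  dydt[0] = y[1];
  dydt[1] = -y[0] + mu * y[1] * (1.0 - y[0] * y[0]);
  return GSL_SUCCESS;
}

/* The Jacobian is stored row-ordered, J(i,j) = dfdy[i * dimension + j],
   with dimension = 2 for this system. */
int
jac (double t, const double y[], double *dfdy, double dfdt[], void *params)
{
  double mu = *(double *) params;
  dfdy[0] = 0.0;                              /* df0/dy0 */
  dfdy[1] = 1.0;                              /* df0/dy1 */
  dfdy[2] = -2.0 * mu * y[0] * y[1] - 1.0;    /* df1/dy0 */
  dfdy[3] = mu * (1.0 - y[0] * y[0]);         /* df1/dy1 */
  dfdt[0] = 0.0;                              /* the system is autonomous */
  dfdt[1] = 0.0;
  return GSL_SUCCESS;
}
```

The corresponding system would then be initialized as
@code{gsl_odeiv_system sys = @{func, jac, 2, &mu@};}, where @code{mu}
is a @code{double} holding the parameter.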
@node Stepping Functions
@section Stepping Functions

The lowest level components are the @dfn{stepping functions} which
advance a solution from time @math{t} to @math{t+h} for a fixed
step-size @math{h} and estimate the resulting local error.
@deftypefun {gsl_odeiv_step *} gsl_odeiv_step_alloc (const gsl_odeiv_step_type * @var{T}, size_t @var{dim})
This function returns a pointer to a newly allocated instance of a
stepping function of type @var{T} for a system of @var{dim} dimensions.
@end deftypefun
@deftypefun int gsl_odeiv_step_reset (gsl_odeiv_step * @var{s})
This function resets the stepping function @var{s}. It should be used
whenever the next use of @var{s} will not be a continuation of a
previous step.
@end deftypefun
@deftypefun void gsl_odeiv_step_free (gsl_odeiv_step * @var{s})
This function frees all the memory associated with the stepping
function @var{s}.
@end deftypefun
@deftypefun {const char *} gsl_odeiv_step_name (const gsl_odeiv_step * @var{s})
This function returns a pointer to the name of the stepping function.
For example,

@example
printf ("step method is '%s'\n",
        gsl_odeiv_step_name (s));
@end example

@noindent
would print something like @code{step method is 'rkf45'}.
@end deftypefun
@deftypefun {unsigned int} gsl_odeiv_step_order (const gsl_odeiv_step * @var{s})
This function returns the order of the stepping function on the previous
step. This order can vary if the stepping function itself is adaptive.
@end deftypefun
@deftypefun int gsl_odeiv_step_apply (gsl_odeiv_step * @var{s}, double @var{t}, double @var{h}, double @var{y}[], double @var{yerr}[], const double @var{dydt_in}[], double @var{dydt_out}[], const gsl_odeiv_system * @var{dydt})
This function applies the stepping function @var{s} to the system of
equations defined by @var{dydt}, using the step size @var{h} to advance
the system from time @var{t} and state @var{y} to time @var{t}+@var{h}.
The new state of the system is stored in @var{y} on output, with an
estimate of the absolute error in each component stored in @var{yerr}.
If the argument @var{dydt_in} is not null it should point to an array
containing the derivatives for the system at time @var{t} on input. This
is optional as the derivatives will be computed internally if they are
not provided, but allows the reuse of existing derivative information.
On output the new derivatives of the system at time @var{t}+@var{h} will
be stored in @var{dydt_out} if it is not null.

If the user-supplied functions defined in the system @var{dydt} return a
status other than @code{GSL_SUCCESS} the step will be aborted. In this
case, the elements of @var{y} will be restored to their pre-step values
and the error code from the user-supplied function will be returned.
The step-size @var{h} will be set to the step-size which caused the error.
If the function is called again with a smaller step-size, e.g.@: @math{@var{h}/10},
it should be possible to get closer to any singularity.
To distinguish between error codes from the user-supplied functions and
those from @code{gsl_odeiv_step_apply} itself, any user-defined return
values should be distinct from the standard GSL error codes.
@end deftypefun
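To make the role of a stepping function concrete, here is a
deliberately simple stand-in (an illustrative sketch, not GSL code):
an explicit Euler step whose local error is estimated by comparing one
full step against two half steps, following the same
@var{dydt_in}/@var{dydt_out} reuse convention as
@code{gsl_odeiv_step_apply}. The right-hand side @code{rhs} and the
system @math{y' = -y} are assumptions made for the illustration.

```c
#include <stddef.h>

/* Toy right-hand side y' = -y, used only for this illustration. */
static void
rhs (double t, const double y[], double dydt[], size_t dim)
{
  for (size_t i = 0; i < dim; i++)
    dydt[i] = -y[i];
}

/* Advance y from t to t+h with an Euler step and estimate the local
   error per component by step doubling, in the spirit of
   gsl_odeiv_step_apply (this is NOT the GSL implementation). */
static void
euler_step_apply (double t, double h, double y[], double yerr[],
                  const double dydt_in[], double dydt_out[], size_t dim)
{
  double y_full[8], y_half[8], dydt_mid[8];   /* assumes dim <= 8 */

  for (size_t i = 0; i < dim; i++)
    y_full[i] = y[i] + h * dydt_in[i];        /* one full step */

  for (size_t i = 0; i < dim; i++)
    y_half[i] = y[i] + 0.5 * h * dydt_in[i];  /* first half step */
  rhs (t + 0.5 * h, y_half, dydt_mid, dim);
  for (size_t i = 0; i < dim; i++)
    y_half[i] += 0.5 * h * dydt_mid[i];       /* second half step */

  for (size_t i = 0; i < dim; i++)
    {
      yerr[i] = y_half[i] - y_full[i];        /* local error estimate */
      y[i] = y_half[i];                       /* keep the finer result */
    }

  if (dydt_out != NULL)                       /* derivatives at t+h,  */
    rhs (t + h, y, dydt_out, dim);            /* reusable as dydt_in  */
}
```

A real stepping function such as @code{rkf45} obtains its error
estimate from an embedded formula rather than step doubling, but the
inputs and outputs have the same shape.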
The following algorithms are available,
@deffn {Step Type} gsl_odeiv_step_rk2
@cindex RK2, Runge-Kutta method
@cindex Runge-Kutta methods, ordinary differential equations
Embedded Runge-Kutta (2, 3) method.
@end deffn
@deffn {Step Type} gsl_odeiv_step_rk4
@cindex RK4, Runge-Kutta method
4th order (classical) Runge-Kutta. The error estimate is obtained by
halving the step-size. For a more efficient estimate of the error, use
the Runge-Kutta-Fehlberg method described below.
@end deffn
@deffn {Step Type} gsl_odeiv_step_rkf45
@cindex Fehlberg method, differential equations
@cindex RKF45, Runge-Kutta-Fehlberg method
Embedded Runge-Kutta-Fehlberg (4, 5) method. This method is a good
general-purpose integrator.
@end deffn
@deffn {Step Type} gsl_odeiv_step_rkck
@cindex Runge-Kutta Cash-Karp method
@cindex Cash-Karp, Runge-Kutta method
Embedded Runge-Kutta Cash-Karp (4, 5) method.
@end deffn
@deffn {Step Type} gsl_odeiv_step_rk8pd
@cindex Runge-Kutta Prince-Dormand method
@cindex Prince-Dormand, Runge-Kutta method
Embedded Runge-Kutta Prince-Dormand (8, 9) method.
@end deffn
@deffn {Step Type} gsl_odeiv_step_rk2imp
Implicit 2nd order Runge-Kutta at Gaussian points.
@end deffn
@deffn {Step Type} gsl_odeiv_step_rk4imp
Implicit 4th order Runge-Kutta at Gaussian points.
@end deffn
@deffn {Step Type} gsl_odeiv_step_bsimp
@cindex Bulirsch-Stoer method
@cindex Bader and Deuflhard, Bulirsch-Stoer method
@cindex Deuflhard and Bader, Bulirsch-Stoer method
Implicit Bulirsch-Stoer method of Bader and Deuflhard. This algorithm
requires the Jacobian.
@end deffn
@deffn {Step Type} gsl_odeiv_step_gear1
@cindex Gear method, differential equations
M=1 implicit Gear method.
@end deffn
@deffn {Step Type} gsl_odeiv_step_gear2
M=2 implicit Gear method.
@end deffn
@node Adaptive Step-size Control
@section Adaptive Step-size Control
@cindex Adaptive step-size control, differential equations

The control function examines the proposed change to the solution
produced by a stepping function and attempts to determine the optimal
step-size for a user-specified level of error.
@deftypefun {gsl_odeiv_control *} gsl_odeiv_control_standard_new (double @var{eps_abs}, double @var{eps_rel}, double @var{a_y}, double @var{a_dydt})
The standard control object is a four parameter heuristic based on
absolute and relative errors @var{eps_abs} and @var{eps_rel}, and
scaling factors @var{a_y} and @var{a_dydt} for the system state
@math{y(t)} and derivatives @math{y'(t)} respectively.

The step-size adjustment procedure for this method begins by computing
the desired error level @math{D_i} for each component,

@tex
$$
D_i = \epsilon_{abs} + \epsilon_{rel} * (a_{y} |y_i| + a_{dydt} h |y'_i|)
$$
@end tex
@ifinfo
@example
D_i = eps_abs + eps_rel * (a_y |y_i| + a_dydt h |y'_i|)
@end example
@end ifinfo

@noindent
and comparing it with the observed error @math{E_i = |yerr_i|}. If the
observed error @math{E} exceeds the desired error level @math{D} by more
than 10% for any component then the method reduces the step-size by an
appropriate factor,

@tex
$$
h_{new} = h_{old} * S * (E/D)^{-1/q}
$$
@end tex
@ifinfo
@example
h_new = h_old * S * (E/D)^(-1/q)
@end example
@end ifinfo

@noindent
where @math{q} is the consistency order of the method (e.g.@: @math{q=4} for
a 4(5) embedded RK), and @math{S} is a safety factor of 0.9. The ratio
@math{E/D} is taken to be the maximum of the ratios @math{E_i/D_i}.

If the observed error @math{E} is less than 50% of the desired error
level @math{D} for the maximum ratio @math{E_i/D_i} then the algorithm
takes the opportunity to increase the step-size to bring the error in
line with the desired level,

@tex
$$
h_{new} = h_{old} * S * (E/D)^{-1/(q+1)}
$$
@end tex
@ifinfo
@example
h_new = h_old * S * (E/D)^(-1/(q+1))
@end example
@end ifinfo

@noindent
This encompasses all the standard error scaling methods. To avoid
uncontrolled changes in the step-size, the overall scaling factor is
limited to the range @math{1/5} to 5.
@end deftypefun
@deftypefun {gsl_odeiv_control *} gsl_odeiv_control_y_new (double @var{eps_abs}, double @var{eps_rel})
This function creates a new control object which will keep the local
error on each step within an absolute error of @var{eps_abs} and
relative error of @var{eps_rel} with respect to the solution @math{y_i(t)}.
This is equivalent to the standard control object with @var{a_y}=1 and
@var{a_dydt}=0.
@end deftypefun
@deftypefun {gsl_odeiv_control *} gsl_odeiv_control_yp_new (double @var{eps_abs}, double @var{eps_rel})
This function creates a new control object which will keep the local
error on each step within an absolute error of @var{eps_abs} and
relative error of @var{eps_rel} with respect to the derivatives of the
solution @math{y'_i(t)}. This is equivalent to the standard control
object with @var{a_y}=0 and @var{a_dydt}=1.
@end deftypefun
@deftypefun {gsl_odeiv_control *} gsl_odeiv_control_scaled_new (double @var{eps_abs}, double @var{eps_rel}, double @var{a_y}, double @var{a_dydt}, const double @var{scale_abs}[], size_t @var{dim})
This function creates a new control object which uses the same algorithm
as @code{gsl_odeiv_control_standard_new} but with an absolute error
which is scaled for each component by the array @var{scale_abs}.
The formula for @math{D_i} for this control object is,

@tex
$$
D_i = \epsilon_{abs} s_i + \epsilon_{rel} * (a_{y} |y_i| + a_{dydt} h |y'_i|)
$$
@end tex
@ifinfo
@example
D_i = eps_abs * s_i + eps_rel * (a_y |y_i| + a_dydt h |y'_i|)
@end example
@end ifinfo

@noindent
where @math{s_i} is the @math{i}-th component of the array @var{scale_abs}.
The same error control heuristic is used by the Matlab @sc{ode} suite.
@end deftypefun
@deftypefun {gsl_odeiv_control *} gsl_odeiv_control_alloc (const gsl_odeiv_control_type * @var{T})
This function returns a pointer to a newly allocated instance of a
control function of type @var{T}. This function is only needed for
defining new types of control functions. For most purposes the standard
control functions described above should be sufficient.
@end deftypefun
@deftypefun int gsl_odeiv_control_init (gsl_odeiv_control * @var{c}, double @var{eps_abs}, double @var{eps_rel}, double @var{a_y}, double @var{a_dydt})
This function initializes the control function @var{c} with the
parameters @var{eps_abs} (absolute error), @var{eps_rel} (relative
error), @var{a_y} (scaling factor for y) and @var{a_dydt} (scaling
factor for derivatives).
@end deftypefun
@deftypefun void gsl_odeiv_control_free (gsl_odeiv_control * @var{c})
This function frees all the memory associated with the control
function @var{c}.
@end deftypefun
@deftypefun int gsl_odeiv_control_hadjust (gsl_odeiv_control * @var{c}, gsl_odeiv_step * @var{s}, const double @var{y}[], const double @var{yerr}[], const double @var{dydt}[], double * @var{h})
This function adjusts the step-size @var{h} using the control function
@var{c}, and the current values of @var{y}, @var{yerr} and @var{dydt}.
The stepping function @var{s} is also needed to determine the order
of the method. If the error in the y-values @var{yerr} is found to be
too large then the step-size @var{h} is reduced and the function returns
@code{GSL_ODEIV_HADJ_DEC}. If the error is sufficiently small then
@var{h} may be increased and @code{GSL_ODEIV_HADJ_INC} is returned. The
function returns @code{GSL_ODEIV_HADJ_NIL} if the step-size is
unchanged. The goal of the function is to estimate the largest
step-size which satisfies the user-specified accuracy requirements for
the current point.
@end deftypefun
@deftypefun {const char *} gsl_odeiv_control_name (const gsl_odeiv_control * @var{c})
This function returns a pointer to the name of the control function.
For example,

@example
printf ("control method is '%s'\n",
        gsl_odeiv_control_name (c));
@end example

@noindent
would print something like @code{control method is 'standard'}.
@end deftypefun
@node Evolution
@section Evolution
The highest level of the system is the evolution function which combines
the results of a stepping function and control function to reliably
advance the solution forward over an interval @math{(t_0, t_1)}. If the
control function signals that the step-size should be decreased the
evolution function backs out of the current step and tries the proposed
smaller step-size. This process is continued until an acceptable
step-size is found.
@deftypefun {gsl_odeiv_evolve *} gsl_odeiv_evolve_alloc (size_t @var{dim})
This function returns a pointer to a newly allocated instance of an
evolution function for a system of @var{dim} dimensions.
@end deftypefun
@deftypefun int gsl_odeiv_evolve_apply (gsl_odeiv_evolve * @var{e}, gsl_odeiv_control * @var{con}, gsl_odeiv_step * @var{step}, const gsl_odeiv_system * @var{dydt}, double * @var{t}, double @var{t1}, double * @var{h}, double @var{y}[])
This function advances the system (@var{e}, @var{dydt}) from time
@var{t} and position @var{y} using the stepping function @var{step}.
The new time and position are stored in @var{t} and @var{y} on output.
The initial step-size is taken as @var{h}, but this will be modified
using the control function @var{con} to achieve the appropriate error
bound if necessary. The routine may make several calls to @var{step} in
order to determine the optimum step-size. An estimate of
the local error for the step can be obtained from the components of the array @code{@var{e}->yerr[]}.
If the step-size has been changed the value of @var{h} will be modified on output.
The maximum time @var{t1} is guaranteed not to be exceeded by the time-step. On the
final time-step the value of @var{t} will be set to @var{t1} exactly.

If the user-supplied functions defined in the system @var{dydt} return a
status other than @code{GSL_SUCCESS} the step will be aborted. In this
case, @var{t} and @var{y} will be restored to their pre-step values
and the error code from the user-supplied function will be returned. To
distinguish between error codes from the user-supplied functions and
those from @code{gsl_odeiv_evolve_apply} itself, any user-defined return
values should be distinct from the standard GSL error codes.
@end deftypefun
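Combined with a stepper and a control object, the typical driver loop
built from the functions described above looks like the following
sketch. The system @code{sys}, its dimension 2, the tolerances and the
initial step-size are assumptions made for the illustration.

```c
/* Assumes <gsl/gsl_errno.h> and <gsl/gsl_odeiv.h> are included and
   `sys' is an initialized gsl_odeiv_system of dimension 2. */
gsl_odeiv_step *s = gsl_odeiv_step_alloc (gsl_odeiv_step_rkf45, 2);
gsl_odeiv_control *c = gsl_odeiv_control_y_new (1e-6, 0.0);
gsl_odeiv_evolve *e = gsl_odeiv_evolve_alloc (2);

double t = 0.0, t1 = 100.0;
double h = 1e-6;
double y[2] = { 1.0, 0.0 };

while (t < t1)
  {
    int status = gsl_odeiv_evolve_apply (e, c, s, &sys,
                                         &t, t1, &h, y);
    if (status != GSL_SUCCESS)
      break;    /* propagate an error from a user-supplied function */
  }

gsl_odeiv_evolve_free (e);
gsl_odeiv_control_free (c);
gsl_odeiv_step_free (s);
```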
@deftypefun int gsl_odeiv_evolve_reset (gsl_odeiv_evolve * @var{e})
This function resets the evolution function @var{e}. It should be used
whenever the next use of @var{e} will not be a continuation of a
previous step.
@end deftypefun
@deftypefun void gsl_odeiv_evolve_free (gsl_odeiv_evolve * @var{e})
This function frees all the memory associated with the evolution
function @var{e}.
@end deftypefun
@cindex discontinuities, in ODE systems
Where a system has discontinuous changes in the derivatives at known
times it is advisable to evolve the system between each discontinuity
in sequence. For example, if a step-change in an external driving
force occurs at times @math{t_a, t_b, t_c, \dots} then evolving over
the ranges @math{(t_0,t_a)}, @math{(t_a,t_b)}, @dots{},
@math{(t_c,t_1)} is more efficient than using the single range
@math{(t_0,t_1)}.
Evolving the system directly through a discontinuity with a strict
tolerance may result in extremely small steps being taken at the edge
of the discontinuity (e.g.@: down to the limit of machine precision).
In this case it may be necessary to impose a minimum step size
@code{hmin} suitable for the problem:

@example
while (t < t1)
  @{
    gsl_odeiv_evolve_apply (e, c, s, &sys, &t, t1, &h, y);

    if (h < hmin) @{ h = hmin; @}
  @}
@end example
@noindent
The value of @var{h} returned by @code{gsl_odeiv_evolve_apply} is
always a suggested value and can be modified whenever needed.
@node ODE Example programs
@section Examples
@cindex Van der Pol oscillator, example

The following program solves the second-order nonlinear Van der Pol
oscillator equation,
@tex
$$
x''(t) + \mu x'(t) (x(t)^2 - 1) + x(t) = 0
$$
@end tex
@ifinfo
@example
x''(t) + \mu x'(t) (x(t)^2 - 1) + x(t) = 0
@end example
@end ifinfo
@noindent
This can be converted into a first order system suitable for use with
the routines described in this chapter by introducing a separate
variable for the velocity, @math{y = x'(t)},

@tex
$$
\eqalign{
x' &= y\cr
y' &= -x + \mu y (1-x^2)
}
$$
@end tex
@ifinfo
@example
x' = y
y' = -x + \mu y (1-x^2)
@end example
@end ifinfo

@noindent
The program begins by defining functions for these derivatives and
their Jacobian.
@verbatiminclude examples/ode-initval.c
For functions with multiple parameters, the appropriate information
can be passed in through the @var{params} argument using a pointer to
a struct.
The main loop of the program evolves the solution from @math{(y, y') =
(1, 0)} at @math{t=0} to @math{t=100}. The step-size @math{h} is
automatically adjusted by the controller to maintain an absolute
accuracy of @c{$10^{-6}$}
@math{10^@{-6@}} in the function values @var{y}.

@center @image{vdp,3.4in}
@center Numerical solution of the Van der Pol oscillator equation
@center using Prince-Dormand 8th order Runge-Kutta.
To obtain the values at user-specified positions, rather than those
chosen automatically by the control function, the main loop can be modified
to advance the solution from one chosen point to the next. For example, the
following main loop prints the solution at the points @math{t_i = 0, 1,
2, @dots{}, 100},
@example
for (i = 1; i <= 100; i++)
  @{
    double ti = i * t1 / 100.0;

    while (t < ti)
      @{
        gsl_odeiv_evolve_apply (e, c, s, &sys,
                                &t, ti, &h, y);
      @}

    printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
  @}
@end example
@noindent
Note that arbitrary values of @math{t_i} can be used for each stage of
the integration. The equally spaced points in this example are just
used as an illustration.
It is also possible to work with a non-adaptive integrator, using only
the stepping function itself. The following program uses the @code{rk4}
fourth-order Runge-Kutta stepping function with a fixed step-size.

@verbatiminclude examples/odefixed.c
@noindent
The derivatives must be initialized for the starting point @math{t=0}
before the first step is taken. Subsequent steps use the output
derivatives @var{dydt_out} as inputs to the next step by copying their
values into @var{dydt_in}.
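In outline, the fixed-step loop has the following shape. This is a
sketch, not the example program itself; it assumes an initialized
@code{gsl_odeiv_system} @code{sys} of dimension 2 and an illustrative
step-size, and evaluates the derivatives through the documented
@code{function} and @code{params} fields of the struct.

```c
/* Assumes <gsl/gsl_odeiv.h> is included and `sys' is an initialized
   gsl_odeiv_system of dimension 2. */
double t = 0.0, t1 = 100.0, h = 1e-2;
double y[2] = { 1.0, 0.0 }, yerr[2];
double dydt_in[2], dydt_out[2];

gsl_odeiv_step *s = gsl_odeiv_step_alloc (gsl_odeiv_step_rk4, 2);

/* initialize dydt_in at the starting point t = 0 */
sys.function (t, y, dydt_in, sys.params);

while (t < t1)
  {
    int status = gsl_odeiv_step_apply (s, t, h, y, yerr,
                                       dydt_in, dydt_out, &sys);
    if (status != GSL_SUCCESS)
      break;

    /* reuse the output derivatives as input for the next step */
    dydt_in[0] = dydt_out[0];
    dydt_in[1] = dydt_out[1];

    t += h;
  }

gsl_odeiv_step_free (s);
```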
@node ODE References and Further Reading
@section References and Further Reading

Many of the basic Runge-Kutta formulas can be found in the Handbook of
Mathematical Functions,

@itemize @asis
@item
Abramowitz & Stegun (eds.), @cite{Handbook of Mathematical Functions},
Section 25.5.
@end itemize
@noindent
The implicit Bulirsch-Stoer algorithm @code{bsimp} is described in the
following paper,

@itemize @asis
@item
G. Bader and P. Deuflhard, ``A Semi-Implicit Mid-Point Rule for Stiff
Systems of Ordinary Differential Equations'', Numer.@: Math.@: 41, 373--398,
1983.
@end itemize