Download odeint-v2.zip - 810.97 KB
## Introduction

This article introduces the second version of **odeint** - a C++ framework for solving ordinary differential equations (ODEs). It is based on template metaprogramming, is independent of a specific container type, and can even be used with modern graphics cards. Under the hood a lot has changed in the current version. It is not directly compatible with the first version, but only slight changes are needed to port code from version one to version two. A brief introduction to the first version of odeint has been given here at CodeProject.

The current version of odeint-v2 can be obtained from GitHub. The full documentation, which includes a lot of tutorials and examples, can also be found at GitHub. odeint has been supported by the Google Summer of Code program. It is planned to release odeint within the Boost libraries, although it is not yet clear whether odeint will pass the Boost review process.

## Background

Solving ordinary differential equations is a very important task in the mathematical modeling of physical, chemical, biological, and even social systems. There is a long tradition of analyzing methods for solving ODEs; these methods are summarized within the mathematical discipline of numerical analysis. An ordinary differential equation always has the form

dx(t) / dt = f( x(t) , t )

x is said to be the state of the ODE and can be a vector. t is the independent variable; we will always call t "time", although it can have a different (physical) meaning. The function f defines the ODE. Associated with every ODE is an initial value problem (IVP), given by the ODE plus an initial value x(t0) = x0. The solution of the initial value problem is the temporal evolution x(t), with the additional condition that x(t0) = x0. It can be shown that for most physically relevant ODEs every IVP has a unique solution. For more details, see the background section in the article on the first version, or have a look at classical textbooks on differential equations or numerical analysis [1,2].
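To make the idea of numerically solving an IVP concrete, here is a minimal from-scratch sketch (not odeint code) using the simplest possible scheme, the explicit Euler method, on the IVP dx/dt = -x with x(0) = 1, whose exact solution is x(t) = exp(-t):

```cpp
#include <cassert>
#include <cmath>

// Numerically solve the IVP dx/dt = -x, x(0) = x0, with the explicit
// Euler scheme x(t + dt) = x(t) + dt * f( x(t) , t ).
double euler_solve(double x0, double t_end, double dt)
{
    double x = x0;
    for (double t = 0.0; t < t_end - 1e-12; t += dt)
        x += dt * (-x);   // here f( x , t ) = -x
    return x;
}
```

With dt = 0.001 the result at t = 1 agrees with exp(-1) to about three decimal places; all methods in odeint are refinements of this basic idea with higher order and error control.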

## What's new?

- **Introduction of algebras and operations.** These let you change how the basic mathematical operations are performed. With these concepts it is possible to solve ordinary differential equations with CUDA or by using sophisticated numerical libraries like the Intel Math Kernel Library or SIMD libraries.
- **Dense output.** Methods with dense output are now supported by odeint. Dense output means that the solution is interpolated between intermediate points, usually with the same order as the numerical solution. A dedicated concept for dense output steppers has been introduced, and the integrate functions support methods with dense output.
- **Meta functions and classes for the classical Runge-Kutta steppers.** A new concept for generalizing the Runge-Kutta solvers has been introduced, based heavily on template metaprogramming. A specific solver is defined only by its coefficients. The performance of these methods is equal to that of the hand-written versions.
- **New methods.** New methods have been introduced, like the Dormand-Prince 5(4) method, solvers for stiff systems like Rosenbrock-4, and multi-step methods like Adams-Bashforth-Moulton. There is also an experimental implementation of the Taylor series method.
- **Support for fancy libraries.** odeint plays well with Boost.Range and Boost.Units. The first lets you solve only a part of a full state, while the second performs compile-time dimension checks of the mathematical equations.
- **Factory functions.** For easy creation of controlled steppers and dense output steppers.

## Examples

Before going into the details of odeint we will show some examples.

### Integration of the Lorenz system

Let's start with an example from the previous article -- the famous Lorenz system with its butterfly-like wing structure. Using odeint is very simple: just include the appropriate header files. odeint is completely header-only; you will not have to link against other libraries. This has the advantage that the compiler can apply powerful optimization techniques, resulting in very fast run-time code. To use odeint, just include

```cpp
#include <boost/numeric/odeint.hpp>
```

The whole library lives in the namespace `boost::numeric::odeint`:

```cpp
using namespace boost::numeric::odeint;
```

The definition of the Lorenz system is again either a function or a functor. But before declaring the ODE we need to define the state type, i.e. the type which odeint uses inside its steppers to store intermediate values and which is used in the system function.

```cpp
typedef std::vector< double > state_type;

const double sigma = 10.0;
const double R = 28.0;
const double b = 8.0 / 3.0;

void lorenz( state_type &x , state_type &dxdt , double t )
{
    dxdt[0] = sigma * ( x[1] - x[0] );
    dxdt[1] = R * x[0] - x[1] - x[0] * x[2];
    dxdt[2] = x[0]*x[1] - b * x[2];
}

class lorenz_class
{
public:
    void operator()( state_type &x , state_type &dxdt , double t )
    {
        dxdt[0] = sigma * ( x[1] - x[0] );
        dxdt[1] = R * x[0] - x[1] - x[0] * x[2];
        dxdt[2] = x[0]*x[1] - b * x[2];
    }
};
```

Now, we can in principle start to solve the Lorenz system. First, we have to choose a stepper. Let's start with the classical Runge-Kutta stepper of fourth order. We integrate the system for 1000 steps with a step size of 0.01.

```cpp
state_type x( 3 );
x[0] = x[1] = x[2] = 10.0;
const double dt = 0.01;

integrate_const( runge_kutta4< state_type >() , lorenz , x , 0.0 , 10.0 , dt );
```

Besides the `integrate_const` function you can also call the stepper directly. This might have some advantages if you want to execute non-trivial code between consecutive steps, but we recommend using the integrate functions and observers whenever possible. In the case of controlled and dense output steppers, the integrate functions implement some complicated logic needed for a proper integration. Nevertheless, the above code is exactly equivalent to

```cpp
state_type x( 3 );
x[0] = x[1] = x[2] = 10.0;
const double dt = 0.01;

runge_kutta4< state_type > rk4;
double t = 0.0;
for( size_t i=0 ; i<1000 ; ++i,t+=dt )
{
    rk4.do_step( lorenz , x , t , dt );
}
```

In most situations you will want to do something with the state during the integration, for example write it into a file or apply some statistical analysis. You can simply create an observer and call the integrate function with it.

```cpp
struct streaming_observer
{
    std::ostream &m_out;

    streaming_observer( std::ostream &out ) : m_out( out ) { }

    void operator()( const state_type &x , double t ) const
    {
        m_out << t;
        for( size_t i=0 ; i < x.size() ; ++i )
            m_out << "\t" << x[i];
        m_out << "\n";
    }
};

integrate_const( runge_kutta4< state_type >() , lorenz , x , 0.0 , 10.0 , dt , streaming_observer( std::cout ) );
```

In the previous examples the stepper was always a fourth-order Runge-Kutta stepper. This stepper is really nice: it is well known and widely used. But it has neither step size control nor dense output. If you want an adaptive integration of your ODE, you need a controlled stepper. For example, you can use the Runge-Kutta-Cash-Karp method:

```cpp
typedef controlled_runge_kutta< runge_kutta_cash_karp54< state_type > > stepper_type;
integrate_adaptive( stepper_type() , lorenz , x , 0.0 , 10.0 , dt , streaming_observer( std::cout ) );
```

The `runge_kutta_cash_karp54< state_type >` is a simple error stepper which estimates the error made during one step. This stepper is used within the `controlled_runge_kutta` wrapper, which implements step size control on the basis of the error estimated by the Runge-Kutta-Cash-Karp stepper. `integrate_adaptive` performs the adaptive integration; the step size changes during the time evolution.

Another error stepper is the so-called Dormand-Prince 5(4) stepper, which also possesses dense output functionality. To use it you need to nest the controlled stepper in a general dense output stepper:

```cpp
typedef controlled_runge_kutta< runge_kutta_dopri5< state_type > > dopri_stepper_type;
typedef dense_output_runge_kutta< dopri_stepper_type > dense_stepper_type;

integrate_const( dense_stepper_type() , lorenz , x , 0.0 , 10.0 , dt , streaming_observer( std::cout ) );
```

Now, we have again used `integrate_const`, which uses all features of the dense output stepper. In principle, one could also use a controlled stepper like the one from the previous example in `integrate_const`. But in that case the adaptive integration is performed exactly to the time points 0, dt, 2dt, 3dt, ..., and for small dt one would lose a lot of performance.

Of course, one can also tune the precision of the step size control, i.e. the measure which determines whether a step is too large or too small. For the Runge-Kutta-Cash-Karp stepper the precision is tuned via

```cpp
double abs_error = 1.0e-10 , rel_error = 1.0e-10;
integrate_adaptive( stepper_type( default_error_checker< double >( abs_error , rel_error ) ) , lorenz , x , 0.0 , 10.0 , dt , streaming_observer( std::cout ) );
```

`abs_error` determines the tolerance of the absolute error, while `rel_error` is the tolerance of the relative error.
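How the two tolerances interact can be sketched in a few lines. The following is a simplified stand-in for an error checker (not odeint's exact `default_error_checker` formula, which also weights the derivative): each error component is divided by `abs_tol + rel_tol * |x|`, and a step counts as acceptable if the largest weighted value is at most 1.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Simplified tolerance-weighted error measure: a step is acceptable
// if max_i |xerr[i]| / ( abs_tol + rel_tol * |x[i]| ) <= 1.
bool step_accepted(const std::vector<double> &x,
                   const std::vector<double> &xerr,
                   double abs_tol, double rel_tol)
{
    double max_ratio = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        max_ratio = std::max(max_ratio,
            std::abs(xerr[i]) / (abs_tol + rel_tol * std::abs(x[i])));
    return max_ratio <= 1.0;
}
```

For small states the absolute tolerance dominates; for large states the relative tolerance takes over.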

### Factory functions

To ease the creation of controlled steppers and dense output steppers, factory functions have been introduced. For example, a dense output stepper for the Dormand-Prince 5(4) stepper can easily be created using:

```cpp
integrate_const( make_dense_output< runge_kutta_dopri5< vector_type > >( 1.0e-6 , 1.0e-6 ) , system , x , t1 , t2 , dt );
```

The two parameters of `make_dense_output` are the absolute and the relative error tolerances. For the controlled steppers the factory function is called `make_controlled` and can be used in exactly the same way as `make_dense_output`.

### Solving ordinary differential equations on the GPU

The second example shows how odeint can be used to solve ordinary differential equations on the GPU. Of course, not every problem is well suited to be ported to the GPU. For example, the Lorenz system from the last section is not a good candidate: it is simply too small. Using the GPU only makes sense for very large systems. One call to the GPU is slow, but this single call can perform a huge number of operations in parallel. Examples of problems which might be suitable for the GPU are large lattices, like discretizations of partial differential equations (PDEs), or ensembles of ODEs.

In this example we will show how one can integrate a whole ensemble of uncoupled Lorenz systems in which one parameter is varied. This setup can be used for parameter studies. In the tutorial of the official documentation the same example is explained in more detail; here we only show how the GPU can be used.

At the moment odeint supports CUDA via Thrust. In principle it is also possible to use OpenCL, but this is not yet implemented. Thrust is an STL-like interface to the native CUDA API. It follows a functional programming paradigm: the standard way of using it is to call a general algorithm like `thrust::for_each` with a specific functor performing the basic operation. An example is

```cpp
struct add_two
{
    template< class T >
    __host__ __device__
    void operator()( T &t ) const
    {
        t += T( 2 );
    }
};

thrust::device_vector< int > x( N );
thrust::for_each( x.begin() , x.end() , add_two() );
```
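If you have never used Thrust, the pattern may look more familiar in its standard-library form. The following CPU-only sketch does exactly the same thing with `std::for_each` and a `std::vector`; Thrust simply runs this pattern on the device:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// The same algorithm/functor pattern with the C++ standard library.
struct add_two
{
    template< class T >
    void operator()( T &t ) const { t += T( 2 ); }
};

std::vector<int> add_two_to_all(std::vector<int> v)
{
    std::for_each(v.begin(), v.end(), add_two());
    return v;
}
```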

which generically adds 2 to every element of the container `x`. Another very nice feature of Thrust is that the code can be used without modification with OpenMP, hence you can distribute your computations over the cores of your CPU. All you have to do to use this feature is to compile your code with

```
-Xcompiler -fopenmp -DTHRUST_DEVICE_BACKEND=THRUST_DEVICE_BACKEND_OMP
```

To use Thrust with odeint you have to:

- use `thrust::device_vector< double >` as the state type,
- use a specialized algebra and specialized operations, namely `thrust_algebra` and `thrust_operations` (wait a second to see how this works), and
- make your system function Thrust-ready.

The first two points can easily be handled by defining the right state type and the right stepper:

```cpp
typedef double value_type;
typedef thrust::device_vector< double > state_type;

typedef runge_kutta4< state_type , value_type , state_type , value_type ,
                      thrust_algebra , thrust_operations > stepper_type;
```

The third point is the most difficult one, although Thrust makes this task fairly easy. We want to integrate an ensemble of N Lorenz systems, where every subsystem has a different parameter R. The dimension of a single Lorenz system is three, hence the dimension of the state type is 3*N. We will use a memory layout where the first N entries of the state vector are the x components, the second N entries are the y components, and the third N entries are the z components. Since the parameter R is varied, we also need a vector with N entries holding the parameter values; in the code below this vector is called `beta`. The layout of our Lorenz class is

```cpp
struct lorenz_system
{
    lorenz_system( size_t N , const state_type &beta )
    : m_N( N ) , m_beta( beta ) { }

    void operator()( const state_type &x , state_type &dxdt , double t ) const
    {
    }

    size_t m_N;
    const state_type &m_beta;
};
```

Now we are ready to compute `dxdt`. To do so we use a very nice feature of Thrust: the zip iterator. Zip iterators allow us to iterate over a whole bunch of vectors at once and perform one operation on the complete bunch.

```cpp
struct lorenz_system
{
    // ...

    void operator()( const state_type &x , state_type &dxdt , double t ) const
    {
        thrust::for_each(
            thrust::make_zip_iterator( thrust::make_tuple(
                x.begin() , x.begin() + m_N , x.begin() + 2 * m_N ,
                m_beta.begin() ,
                dxdt.begin() , dxdt.begin() + m_N , dxdt.begin() + 2 * m_N ) ) ,
            thrust::make_zip_iterator( thrust::make_tuple(
                x.begin() + m_N , x.begin() + 2 * m_N , x.begin() + 3 * m_N ,
                m_beta.begin() + m_N ,
                dxdt.begin() + m_N , dxdt.begin() + 2 * m_N , dxdt.begin() + 3 * m_N ) ) ,
            lorenz_functor() );
    }
};
```
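What the zip iterator does can also be written as a plain index loop on the CPU. The following stand-alone sketch (no Thrust, no odeint) computes the same right-hand side for the flat memory layout described above, which may help to see what the tuple-based version is doing:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// CPU sketch of the zip iteration: for each subsystem i, read
// x[i], y[i], z[i] and the parameter R[i] from the flat arrays
// [x..., y..., z...] and write the three derivative components.
void lorenz_rhs_cpu(const std::vector<double> &x, const std::vector<double> &R,
                    std::vector<double> &dxdt, std::size_t N)
{
    const double sigma = 10.0, b = 8.0 / 3.0;
    for (std::size_t i = 0; i < N; ++i)
    {
        const double xi = x[i], yi = x[N + i], zi = x[2 * N + i];
        dxdt[i]         = sigma * (yi - xi);
        dxdt[N + i]     = R[i] * xi - yi - xi * zi;
        dxdt[2 * N + i] = xi * yi - b * zi;
    }
}
```

The GPU version replaces the loop body by a functor and lets Thrust decide how to parallelize the iteration.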

So we perform an iteration over x, y, z, R, dxdt, dydt, dzdt. The values are put inside a `thrust::tuple`. During the iteration `lorenz_functor()` is called with the tuple (x, y, z, R, dxdt, dydt, dzdt). Everything we need to know about one single subsystem is given in this tuple, and we can calculate the r.h.s. of the Lorenz system:

```cpp
struct lorenz_system
{
    // ...

    struct lorenz_functor
    {
        template< class T >
        __host__ __device__
        void operator()( T t ) const
        {
            value_type R = thrust::get< 3 >( t );
            value_type x = thrust::get< 0 >( t );
            value_type y = thrust::get< 1 >( t );
            value_type z = thrust::get< 2 >( t );
            thrust::get< 4 >( t ) = sigma * ( y - x );
            thrust::get< 5 >( t ) = R * x - y - x * z;
            thrust::get< 6 >( t ) = -b * z + x * y;
        }
    };
};
```

This is all. Now, we initialize the state and beta and perform the integration:

```cpp
std::vector< value_type > beta_host( N );
const value_type beta_min = 0.0 , beta_max = 56.0;
for( size_t i=0 ; i < N ; ++i )
    beta_host[i] = beta_min + value_type( i ) * ( beta_max - beta_min ) / value_type( N - 1 );

state_type beta = beta_host;

state_type x( 3 * N );
thrust::fill( x.begin() , x.begin() + 3 * N , 10.0 );

lorenz_system lorenz( N , beta );
integrate_const( stepper_type() , lorenz , x , 0.0 , 10.0 , dt );
```

Of course, for a real-world application this example is not very interesting, since nothing is done with the ensemble of Lorenz systems. In the tutorial section of odeint the same example is used to calculate the largest Lyapunov exponent of every single system. With the Lyapunov exponent it is possible to identify different regions in parameter space: for the Lorenz system, regions with chaotic, regular (oscillating), and vanishing dynamics can be identified. Using Thrust and CUDA for parameter studies is very efficient, since many systems are solved in parallel; one can easily gain a factor of 20 or more in performance compared with the CPU.

## Design and structure

In the previous sections we have seen two examples of how odeint can be used to solve ordinary differential equations. This section is devoted to the technical details of odeint. Furthermore, it gives an overview of the existing methods and steppers in odeint.

odeint consists in principle of four parts:

- Integrate functions
- Steppers
- Algebras and operations
- Utilities

The integrate functions are responsible for performing the iteration of the ODE. They do the step size control and they might make use of dense output. The integrate functions come along with an observer concept which lets you observe your solution during the iteration. The steppers are the main building blocks of odeint; here the numerical methods are implemented. Several steppers for different purposes exist, like

- the classical Runge-Kutta steppers,
- symplectic steppers for Hamiltonian systems,
- implicit methods, and
- stiff solvers.
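One of the categories above, the symplectic steppers, can be illustrated without any library code. The following hand-written sketch (not an odeint class) applies the symplectic Euler scheme to the harmonic oscillator with Hamiltonian H(q, p) = p²/2 + q²/2. Because the momentum is updated first and the position then uses the new momentum, the energy error stays bounded for arbitrarily long times, which is the defining virtue of symplectic methods:

```cpp
#include <cassert>
#include <cmath>

// Symplectic Euler for dq/dt = p, dp/dt = -q.  Returns how far the
// energy p^2/2 + q^2/2 has drifted from its initial value 0.5.
double energy_drift(double dt, int steps)
{
    double q = 1.0, p = 0.0;    // initial energy 0.5
    for (int i = 0; i < steps; ++i)
    {
        p -= dt * q;            // "kick":  dp/dt = -dH/dq = -q
        q += dt * p;            // "drift": dq/dt =  dH/dp = p, with the new p
    }
    return std::abs(0.5 * (p * p + q * q) - 0.5);
}
```

Even after 100000 steps with dt = 0.01 the energy deviation remains of order dt, whereas an ordinary explicit Euler scheme would drift away exponentially.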

The algebras and operations build an abstraction layer which lets you change the way odeint performs basic numerical manipulations like addition, multiplication, etc. In the utility part you will find functionality like resizing and copying of state types. In the following, we introduce these parts in more detail.

### Stepper concepts

A stepper in odeint performs one single step. Several different stepper types exist in odeint. The simplest one is described by the Stepper concept. It has only one method, `do_step`, which performs the step. An example is the classical Runge-Kutta stepper of fourth order:

```cpp
runge_kutta4< state_type > rk4;
rk4.do_step( system , x , t , dt );
```
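The Stepper concept is small enough that a toy model of it fits in a few lines. The following hand-written explicit Euler stepper (deliberately not odeint's `euler` class) exposes the same `do_step( system , x , t , dt )` interface, so anything written against the concept would accept it:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

typedef std::vector<double> state_type;

// A minimal type modeling the Stepper concept: one do_step method
// that advances the state in place by one explicit Euler step.
struct euler_stepper
{
    template <class System>
    void do_step(System system, state_type &x, double t, double dt)
    {
        state_type dxdt(x.size());
        system(x, dxdt, t);                 // evaluate f( x , t )
        for (std::size_t i = 0; i < x.size(); ++i)
            x[i] += dt * dxdt[i];           // x <- x + dt * f( x , t )
    }
};

// Usage sketch: integrate dx/dt = -x from x(0) = 1 for steps * dt time.
double integrate_decay(double dt, int steps)
{
    euler_stepper stepper;
    state_type x(1, 1.0);
    double t = 0.0;
    for (int i = 0; i < steps; ++i, t += dt)
        stepper.do_step([](const state_type &x, state_type &dxdt, double)
                        { dxdt[0] = -x[0]; }, x, t, dt);
    return x[0];
}
```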

Most of the steppers have a set of template parameters which tune their behavior. In the Runge-Kutta 4 example above you explicitly have to state the state type, which is also used to store intermediate values. For the same stepper a lot of other parameters exist, which come with default values: for example, you can explicitly state the value type (the numerical type used during the computations), the derivative type, the time type, the algebra, and the operations. Furthermore, a policy parameter for resizing exists. In most cases the default parameters are a good choice; only for exotic applications like Boost.Units or custom algebras do you need to specify them explicitly. The documentation of odeint shows some examples.

The first parameter of the `do_step` method specifies the ODE. The system is expected to be either a function or a functor, i.e. a callable object with the signature

```cpp
system( x , dxdt , t )
```

Another concept is the ErrorStepper. Its basic usage is very similar to the Stepper concept:

```cpp
runge_kutta54_ck< state_type > rk54;
rk54.do_step( system , x , t , dt , xerr );
```

The main difference is that it estimates the error made during one step and stores this error in `xerr`. Another concept is the ControlledStepper concept, which tries to perform one step. An example of a ControlledStepper is `controlled_runge_kutta`, which has one template argument specifying the underlying Runge-Kutta error stepper. For example, the Runge-Kutta-Cash-Karp stepper can be used via:

```cpp
controlled_runge_kutta< runge_kutta54_ck< state_type > > stepper;
controlled_step_result res = stepper.try_step( system , x , t , dt );
```

A stepper which models the ControlledStepper concept usually has some error bounds which must be fulfilled. If `try_step` is called, the stepper performs one step with the given step size `dt` and checks the error made during this step. If the error is within the desired tolerances, it accepts the step and possibly increases the step size for the next evaluation. If the error bounds are exceeded, the step is rejected and the step size is decreased.
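The step size update behind this accept/reject cycle follows a standard rule: for an error stepper of order p, the new step size scales like (tolerance/error)^(1/p). The sketch below shows this rule with a conventional safety factor and growth/shrink limits; the exact constants inside odeint may differ:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Standard step size update for an order-p error stepper, where err is
// the error already normalized by the tolerance (err > 1 means the
// step was too large).  0.9 is a conventional safety factor; the
// factor is limited so dt never changes too violently.
double new_step_size(double dt, double err, int order)
{
    double factor = 0.9 * std::pow(err, -1.0 / order);
    factor = std::min(5.0, std::max(0.2, factor));
    return dt * factor;
}
```

A rejected step (err > 1) yields a smaller dt and is retried; an accepted step with a very small error lets dt grow for the next call.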

The fourth stepper concept is the DenseOutputStepper. A dense output stepper has two basic methods, `do_step` and `calc_state`. The first one performs one step; the state of the ODE is managed internally, and usually this step is made adaptive, hence it is performed by a controlled stepper. The second method calculates intermediate values of the solution. Usually, the precision of the intermediate values is of the same order as that of the solution. An example is the Dormand-Prince stepper, which can be used like

```cpp
typedef controlled_runge_kutta< runge_kutta_dopri5< state_type > > stepper_type;
dense_output_runge_kutta< stepper_type > stepper;

stepper.initialize( x0 , t0 , dt0 );
std::pair< double , double > time_interval = stepper.do_step( system );
stepper.calc_state( ( time_interval.first + time_interval.second ) / 2.0 , x );
```
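The idea behind `calc_state` is interpolation between the last two step points. A simple way to see this is cubic Hermite interpolation, which uses the states and derivatives at both interval ends (dopri5 uses a higher-order interpolant, so this is only the basic principle, not odeint's formula):

```cpp
#include <cassert>
#include <cmath>

// Cubic Hermite interpolation on [t0, t1] from the end states x0, x1
// and end derivatives f0, f1 - a sketch of what dense output does
// between two accepted steps.
double hermite(double t, double t0, double t1,
               double x0, double f0, double x1, double f1)
{
    const double h = t1 - t0, s = (t - t0) / h;
    const double h00 = (1 + 2 * s) * (1 - s) * (1 - s);
    const double h10 = s * (1 - s) * (1 - s);
    const double h01 = s * s * (3 - 2 * s);
    const double h11 = s * s * (s - 1);
    return h00 * x0 + h10 * h * f0 + h01 * x1 + h11 * h * f1;
}
```

The interpolant reproduces the states and derivatives at the interval ends exactly, so consecutive intervals join smoothly.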

### Integrate functions

The integrate functions are responsible for integrating the ODE with a specified stepper. Hence, they do the work of calling the `do_step` or `try_step` methods for you. This is especially useful if you are using a controlled stepper or a dense output stepper, since in these cases some logic is needed for calling the stepping method. Of course, you can always call the stepping methods by hand; if you are using a classical stepper without step size control and dense output functionality, this might even be simpler than calling an integrate function.

odeint defines four different integrate functions:

- `integrate_const`
- `integrate_adaptive`
- `integrate_times`
- `integrate_n_steps`

`integrate_const` performs the integration with a constant step size. At each step an observer can be called. An example is

```cpp
struct streaming_observer
{
    std::ostream &m_out;

    streaming_observer( std::ostream &out ) : m_out( out ) { }

    void operator()( const state_type &x , double t ) const
    {
        m_out << t;
        for( size_t i=0 ; i < x.size() ; ++i )
            m_out << "\t" << x[i];
        m_out << "\n";
    }
};

integrate_const( stepper , system , x , t_start , t_end , dt , streaming_observer( std::cout ) );
```

`integrate_const` is implemented to use all features of the stepper. If you call it with a dense output stepper, it performs the stepping and calculates the observed states with the help of the dense output functionality. The same holds for controlled steppers: here the step size is adapted at each step.

The other integrate functions are similar to `integrate_const`. For example, `integrate_adaptive` performs an adaptive integration of the ODE, `integrate_times` expects a range of times at which you want to obtain the solution, and `integrate_n_steps` integrates exactly n steps. The last function only works with the Stepper concept.

### Algebras and operations

The algebras and operations introduce a degree of freedom which allows you to change and adjust the way odeint performs its basic numerical computations. In general, an algebra defines how to iterate over a set of state types, while the operations perform the numerical operations like addition, subtraction, multiplication, etc. on the entries of the state type. Of course, which algebras and operations can be used depends strongly on the state type.

Not all steppers provide the possibility to change algebras and operations. For example, the Rosenbrock stepper and the implicit Euler stepper don't: they need to do matrix computations and thus depend on `boost::numeric::ublas::vector` and `boost::numeric::ublas::matrix`, which provide basic vector and matrix algorithms. But the Runge-Kutta steppers as well as the symplectic and multi-step methods use the algebras and operations.

Each algebra consists of a set of member functions `for_each1( c1 , op )`, `for_each2( c1 , c2 , op )`, `for_each3( c1 , c2 , c3 , op )`, ..., and `accumulate( c1 , op )`. The variables `c1`, `c2`, `c3`, ... represent the state of the ODE or its first derivative. They are usually of the same type as the `state_type` of the stepper, but they need not be. `op` is a function or functor which is applied to the elements of the state; in most cases this functor is taken from the operations. An operations class is a class with appropriate member structs `scale_sum1`, `scale_sum2`, `scale_sum3`, ..., each of which possesses an `operator()` that performs the actual computation. For more details about algebras and operations, consult the documentation of odeint or read the source code.
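The division of labor between algebra and operations can be shown in a self-contained sketch. The names below mirror the concepts just described, but this is illustration code, not odeint's actual classes: the algebra only knows how to walk several containers in lockstep, while the operation only knows the arithmetic on single elements.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of an algebra: iterate three containers in lockstep and
// apply an operation to each triple of elements.
struct container_algebra
{
    template <class S1, class S2, class S3, class Op>
    static void for_each3(S1 &s1, S2 &s2, S3 &s3, Op op)
    {
        for (std::size_t i = 0; i < s1.size(); ++i)
            op(s1[i], s2[i], s3[i]);
    }
};

// Sketch of an operations class: scale_sum2 computes t1 = a1*t2 + a2*t3.
struct default_operations
{
    struct scale_sum2
    {
        double a1, a2;
        scale_sum2(double a1_, double a2_) : a1(a1_), a2(a2_) {}

        template <class T1, class T2, class T3>
        void operator()(T1 &t1, const T2 &t2, const T3 &t3) const
        { t1 = a1 * t2 + a2 * t3; }
    };
};
```

With these two pieces, an Euler update x_new = x + dt*dxdt is simply `container_algebra::for_each3( x_new , x , dxdt , default_operations::scale_sum2( 1.0 , dt ) )`; swapping the algebra (e.g. for Thrust) changes where the loop runs without touching the arithmetic.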

The algebras and operations enter the stepper as additional template parameters. For example, all of the Runge-Kutta steppers have the following parameters:

```cpp
runge_kutta_stepper< state_type , value_type , deriv_type , time_type , algebra , operations >
```

Here, `state_type` represents the state of the ODE and `value_type` is the basic numerical type like `double` or `complex<double>`. `deriv_type` represents the derivative of the state, hence the left-hand side of the main equation, while `time_type` is the type of the independent variable. In most cases `deriv_type` is the same type as `state_type`, and `time_type` the same as `value_type`. The next two parameters are the algebra and the operations. All types except the `state_type` have default values chosen in a way suitable for most cases.

An example where all of these types are different is Boost.Units, which can be used to implement compile-time validation of the dimensions of an equation; see the tutorial in the odeint docs. Another example of algebras and operations different from the default ones is the Thrust example above, which requires the special `thrust_algebra` as well as the special `thrust_operations`. There are other algebras, for example for the Intel Math Kernel Library, or you can easily implement your own algebra working with your own container.

### The Runge-Kutta meta algorithm

For the implementation of the explicit Runge-Kutta steppers a new method based on template metaprogramming is used. With this method a Runge-Kutta algorithm is specified only by its coefficients. Of course, it is also possible to use classical run-time programming techniques for this task, but one would then certainly lose performance. The template metaprogramming approach avoids this drawback: only compile-time containers and algorithms are used, such that the compiler can perform powerful optimizations. The performance of these algorithms is comparable to that of the hand-written versions; see also the next section. The complete algorithm is described in detail here (in German only) [5], and here [7].
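To see what "specified only by its coefficients" means, here is the classical run-time counterpart of the idea: a generic explicit Runge-Kutta step driven entirely by a Butcher tableau (a, b, c), shown for a scalar ODE. This is the version that loses performance; odeint's metaprogramming variant expresses the same loops with compile-time containers so they can be fully unrolled.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One explicit Runge-Kutta step for dx/dt = f( x , t ), defined only
// by the Butcher tableau: a (stage coefficients), b (weights),
// c (nodes).  Run-time version for illustration.
double generic_rk_scalar(double (*f)(double, double),
                         double x, double t, double dt,
                         const std::vector<std::vector<double> > &a,
                         const std::vector<double> &b,
                         const std::vector<double> &c)
{
    const std::size_t s = b.size();
    std::vector<double> k(s);
    for (std::size_t i = 0; i < s; ++i)
    {
        double xi = x;
        for (std::size_t j = 0; j < i; ++j)
            xi += dt * a[i][j] * k[j];   // build the intermediate state
        k[i] = f(xi, t + c[i] * dt);     // stage derivative
    }
    for (std::size_t i = 0; i < s; ++i)
        x += dt * b[i] * k[i];           // combine the stages
    return x;
}
```

Plugging in the classical RK4 tableau reproduces `runge_kutta4` behavior; plugging in other tableaus yields other explicit methods with no further code changes, which is exactly the point of the coefficient-driven design.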

## Performance

The performance of odeint has been carefully checked against existing solutions like the methods of the GNU Scientific Library (GSL) and the Numerical Recipes (NR). It turned out that the abstractions introduced in odeint do not affect its performance, assuming that a modern C++ compiler with optimization is used. Performance results for the Lorenz system and `boost::array` as state type are shown in the figure below. The percentage values are relative to the first version of odeint. "odeint" stands for the first version of odeint, "generic" for the current implementation in odeint v2, "NR" for the methods from the Numerical Recipes, "gsl" for the GNU Scientific Library, and "rt_gen" for a generic run-time implementation of the method under consideration.

## Summary and outlook

In this article odeint v2 has been introduced - a high-level C++ library for solving ordinary differential equations. Its main enhancements over the first version are the algebras and operations, two concepts which allow one to change the way the basic operations in odeint are performed. Using these techniques, one can solve ODEs on modern graphics cards, or use compile-time containers and dimension-analysis libraries like Boost.Fusion and Boost.Units. Furthermore, more numerical methods have been implemented and dense output functionality has been introduced.

The following table shows all steppers which are currently implemented in odeint:

| **Method** | **Class name** | **Comment** |
| --- | --- | --- |
| Explicit Euler | `euler` | Very simple, only for demonstration purposes |
| Runge-Kutta 4 | `runge_kutta4` | The classical Runge-Kutta scheme, a good general scheme without error control |
| Cash-Karp | `runge_kutta_cash_karp54` | Good general scheme with error estimation |
| Dormand-Prince 5 | `runge_kutta_dopri5` | Standard method with error control and dense output |
| Fehlberg 78 | `runge_kutta_fehlberg78` | Good high-order method with error estimation |
| Adams-Bashforth-Moulton | `adams_bashforth_moulton` | Multi-step method with high performance |
| Controlled error stepper | `controlled_runge_kutta` | Error control for the Runge-Kutta steppers |
| Dense output stepper | `dense_output_runge_kutta` | Dense output for the Runge-Kutta steppers |
| Bulirsch-Stoer | `bulirsch_stoer` | Stepper with step size and order control as well as dense output; very good if high precision is required |
| Implicit Euler | `implicit_euler` | Basic implicit routine |
| Rosenbrock 4 | `rosenbrock4` | Solver for stiff systems with error control and dense output |
| Symplectic Euler | `symplectic_euler` | Basic symplectic solver for separable Hamiltonian systems |
| Symplectic RKN McLachlan | `symplectic_rkn_sb3a_mclachlan` | Symplectic solver of order 6 for separable Hamiltonian systems |

odeint is currently under heavy development. Therefore, some class names and include files might still change, and new features and methods might be implemented. The source code provided with this article is a snapshot of the current development.

If you would like to contribute to this library, or have interesting points, criticism, or suggestions, you are cordially invited to do so.

## References

- [1] - Ernst Hairer, Syvert P. Nørsett, and Gerhard Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd ed., 2009.
- [2] - Ernst Hairer and Gerhard Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, 2nd ed., 2010.
- [3] - W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C: The Art of Scientific Computing, 1992.
- [4] - The article on the first version of odeint at CodeProject (odeint.aspx).
- [5] - Numerics and Template Metaprogramming (in German only).
- [6] - Karsten Ahnert and Mario Mulansky, Odeint - Solving ordinary differential equations in C++, AIP Conf. Proc., Volume 1389, pp. 1586-1589, 2011 (Arxiv link).
- [7] - Mario Mulansky and Karsten Ahnert, Metaprogramming Applied to Numerical Problems, AIP Conf. Proc., Volume 1389, pp. 1582-1585, 2011 (Arxiv link).
- [8] - A. Alexandrescu, Modern C++ Design, 2001.

## History

- 19 October 2011 - Initial document.