This discussion is necessarily incomplete. Entire books exist on any subject mentioned below (e.g., floating point error). Our goals are modest: first, to introduce the basic notions of error analysis as they apply to ode; second, to steer you around the more obvious pitfalls. You should look through a numerical analysis text (e.g., Atkinson's Introduction to Numerical Analysis) before beginning this discussion.
We begin with some key definitions. The error of greatest concern is the difference between the actual solution and the numerical approximation to the solution; this is termed the accumulated error, since the error is built up during each numerical step. Unfortunately, an estimate of this error is usually not available without knowledge of the actual solution. There are, however, several more usable notions of error. The single-step error, in particular, is the difference between the actual solution and the numerical approximation to the solution after any single step, assuming the value at the beginning of the step is correct.
The relative single-step error is the single-step error, divided by the current value of the numerical approximation to the solution. Why not divided by the current value of the solution itself? The reason is that the solution is not exactly known. When free to choose a stepsize, ode will do so on the basis of the relative single-step error. By default, it will choose the stepsize so as to maintain an accuracy of nine significant digits in each step. That is, it will choose the stepsize so as not to violate an upper bound of 10^(-9) on the relative single-step error. This ceiling may be adjusted with the ‘-r’ option.
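For instance, consider the following input file, here called growth (the file and the equation are our own illustration, not one of the manual's examples):

# exponential growth: the exact solution is y(t) = e^t
y' = y
y = 1
print t, y
step 0, 1

Running ‘ode -r .001 < growth’ loosens the ceiling on the relative single-step error to 10^(-3); comparing the output with that of a default run shows how the bound affects both the accuracy and the stepsizes chosen.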
Where does numerical error come from? There are two sources. The first is the finite precision of machine computation. All computers work with floating point numbers, which are not real numbers, but only an approximation to real numbers. However, all computations performed by ode are done to double precision, so floating point error tends to be relatively small. You may nonetheless detect the difference between real numbers and floating point numbers by experimenting with the ‘-p 17’ option, which will print seventeen significant digits. On most machines, that is the precision of a double precision floating point number.
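For instance, the following input file (our own illustration) holds y constant at one third, which has no exact binary floating point representation:

# one third is not exactly representable in binary;
# run with `ode -p 17' to expose the roundoff
y' = 0
y = 0.33333333333333333333
print t, y
step 0, 1

When run with ‘ode -p 17’, the printed values of y will differ from one third in the trailing digits.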
The second source of numerical error is often called the theoretical truncation error. It is the difference between the actual solution and the approximate solution due solely to the numerical scheme. At the root of many numerical schemes is an infinite series; for ordinary differential equations, it is a Taylor expansion. Since the computer cannot compute all the terms in an infinite series, a numerical scheme necessarily uses a truncated series; hence the term. The single-step error is the sum of the theoretical truncation error and the floating point error, though in practice the floating point error is seldom included. The single-step error estimated by ode consists only of the theoretical truncation error.
We say that a numerical scheme is stable, when applied to a particular initial value problem, if the error accumulated during the solution of the problem over a fixed interval decreases as the stepsize decreases, at least over a wide range of stepsizes. With this definition, both the Runge–Kutta–Fehlberg (‘-R’) scheme and the Adams–Moulton (‘-A’) scheme are stable for a wide class of problems, a statement based more on experience than on theoretical results.
After these introductory remarks, we list some common sources of accumulated error and instability in any numerical scheme. Usually, problems with large accumulated error or with instability arise because the single-step error in the vicinity of a `bad' point is large.
ode should not be used to generate a numerical solution on any interval containing a singularity. That is, ode should not be asked to step over points at which the system of differential equations is singular or undefined.
You will find the definitions of singular point, regular singular point, and irregular singular point in any good differential equations text. If you have no favorite, try Birkhoff and Rota's Ordinary Differential Equations, Chapter 9. Always locate and classify the singularities of a system, if any, before applying ode.
For ode to yield an accurate numerical solution on an interval, the true solution must be defined and well-behaved on that interval. The solution must also be real. Whenever any of these conditions is violated, the problem is said to be ill-posed. Ill-posedness may occur even if the system of differential equations is well-behaved on the interval. Strange results, e.g., the stepsize suddenly shrinking to the machine limit or the solution suddenly blowing up, may indicate ill-posedness.
As an example of ill-posedness (in fact, an undefined solution), consider the innocent-looking problem:

y' = y^2        y(1) = -1
The solution on the domain t > 0 is
y(t) = -1/t.
With this problem you must not compute a numerical solution on any interval that includes t=0. To convince yourself of this, try to use the ‘step’ statement

step 1, -1

on this system. How does ode react?
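Concretely, the complete input file would be (a sketch, assembled from the statements above):

# y' = y^2 with y(1) = -1; the true solution -1/t
# is undefined at t = 0, which this step crosses
y' = y^2
y = -1
print t, y
step 1, -1

Feed this to ode and watch the stepsize and the printed values as t approaches zero.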
As another example of ill-posedness, consider the system
y' = 1/y
which is undefined at y=0. The general solution is
y = +/- (2(t-C))^(1/2),
so that if the condition y(2)=2 is imposed, the solution will be (2t)^(1/2). Clearly, if the domain specified in a ‘step’ statement includes negative values of t, the generated solution will be bogus.
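In input-file form (our sketch), imposing the condition y(2)=2 and stepping toward negative t would look like this:

# y' = 1/y with y(2) = 2; the true solution (2t)^(1/2)
# is not real for t < 0
y' = 1/y
y = 2
print t, y
step 2, -1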
In general, when using a constant stepsize you should be careful not to
`step over' bad points or bad regions. When allowed to choose a
stepsize adaptively, ode
will often spot bad points, but not
always.
An autonomous system is one that does not include the independent variable explicitly on the right-hand side of any differential equation. A critical point for such a system is a point at which all right-hand sides equal zero. For example, the system
y' = 2x
x' = 2y
has only one critical point, at (x,y) = (0,0).
A critical point is sometimes referred to as a stagnation point. That is because a system at a critical point will remain there forever, though a system near a critical point may undergo more violent motion. Under some circumstances, passing near a critical point may give rise to a large accumulated error.
As an exercise, solve the system above using ode, with the initial condition x(0) = y(0) = 0. The solution should be constant in time. Now do the same with points near the critical point. What happens?
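A sketch of the input for this exercise (the explicit multiplication signs and the step interval are ours):

# critical point at (x,y) = (0,0); start exactly there,
# then retry with nearby initial values, e.g. x = .01
y' = 2*x
x' = 2*y
x = 0; y = 0
print t, x, y
step 0, 1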
You should always locate the critical points of a system before attempting a solution with ode. Critical points may be classified (as equilibrium, vortex, unstable, stable, etc.), and this classification may be of use. To find out more about this, consult any book dealing with the qualitative theory of differential equations (e.g., Birkhoff and Rota's Ordinary Differential Equations, Chapter 6).
If the results produced by ode are bad in the sense that instability appears to be present, or an unusually small stepsize needs to be chosen in order to reduce the single-step error to manageable levels, it may simply be that the numerical scheme being used is not suited to the problem. For example, ode currently has no numerical scheme which handles so-called `stiff' problems very well.
As an example, you may wish to examine the stiff problem:
y' = -100y + 100t + 1        y(0) = 1
on the domain [0,1]. The exact solution is
y(t) = e^(-100t) + t.
It is a useful exercise to solve this problem with ode using various numerical schemes, stepsizes, and relative single-step error bounds, and to compare the generated solution curves with the actual solution.
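A sketch of a suitable input file (the equation and interval are from the problem statement; the layout is ours):

# stiff problem: the exact solution is y(t) = e^(-100t) + t
y' = -100*y + 100*t + 1
y = 1
print t, y
step 0, 1

Try it with each scheme (‘-R’, ‘-A’) and several ‘-r’ bounds, and compare the output with the exact solution.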
There are several rough and ready heuristic checks you may perform on the accuracy of any numerical solution produced by ode. We discuss them in turn.
First, check how changing the stepsize affects a solution curve. As the stepsize decreases, the curve should converge. If it does not, then the stepsize is not small enough, or the numerical scheme is not suited to the problem. In practice, you would proceed as follows.
The following example is one that you may wish to experiment with. Make a file named qcd containing:

# an equation arising in QCD (quantum chromodynamics)
f' = fp
fp' = -f*g^2
g' = gp
gp' = g*f^2
f = 0; fp = -1; g = 1; gp = -1
print t, f
step 0, 5
Next make a file named stability, containing the lines:
: sserr is the bound on the relative single-step error
for sserr
do
        ode -r $sserr < qcd
done | spline -n 500 | graph -T X -C
This is a `shell script', which when run will superimpose numerical solutions with specified bounds on the relative single-step error. To run it, type:
sh stability 1 .1 .01 .001
and a plot of the solutions with the specified error bounds will be drawn. The convergence, showing stability, should be quite illuminating.
Second, check any invariant quantities. Many systems have them: for example, if the system is a mathematical model of a `conservative' physical system, then the `energy' (a particular function of the dependent variables of the system) should be constant in time. In general, knowledge about the qualitative behavior of any dependent variable may be used to check the quality of the solution.
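As a sketch of such a check (the oscillator is our example, not the manual's): for the harmonic oscillator below, the quantity x^2 + v^2 is conserved, so computing it from the printed output reveals the accumulated error.

# harmonic oscillator: x^2 + v^2 should remain
# equal to 1 along the true solution
x' = v
v' = -x
x = 1; v = 0
print t, x, v
step 0, 10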
Third, vary the initial conditions. A rough idea of how error propagates may be obtained by viewing a family of solution curves about the numerical solution in question, generated by varying the initial conditions. If they diverge sharply, that is, if two solutions which start out very close nonetheless end up far apart, then the quality of the numerical solution is dubious. On the other hand, if the curves do not diverge sharply, then any error that is present will in all likelihood not increase by more than an order of magnitude or so over the interval. Problems exhibiting no sharp divergence of neighboring solution curves are sometimes called well-conditioned.
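A sketch of such a check, modeled on the stability script above (the use of sed to edit the initial condition in the qcd file is our own device):

: vary the initial value of f in the qcd system
for f0 in -.01 0 .01
do
        sed "s/^f = 0;/f = $f0;/" qcd | ode -r .001
done | spline -n 500 | graph -T X -C

If the resulting curves stay close together over the interval, the problem is well-conditioned there.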