
Avoiding Overflow, Underflow, and Loss of Precision

26 Jun 2014
Describes why the most obvious way of evaluating functions may be bad and how to do better


The most obvious way of evaluating mathematical functions might not work. For example, how would you calculate log(x+1)/x? The most obvious approach would be to add 1 to x, take the log, then divide by x. For some values of x, that will produce an accurate result. For other values, it will be off by 100%.

This article gives four examples where the most direct evaluation of functions can fail. Along the way, it gives four general rules for accurately evaluating functions.

The code included with the article applies the principles in the article to evaluating nine functions that often come up in statistics.

Example 1: Quadratic Equation

Suppose you want to find the roots of 3x² + 10⁹x + 5 = 0. This should be simple: just apply the quadratic equation. What could go wrong? Let's see.

#include <math.h>  /* for sqrt */

double a = 3.0, b = 1.0e9, c = 5.0;
double d = b*b - 4.0*a*c;
double r1 = (-b - sqrt(d))/(2.0*a);
double r2 = (-b + sqrt(d))/(2.0*a);

This code correctly computes r1 to all displayed digits but calculates r2 as 0 even though the correct value to machine precision is -5.0e-9. The absolute error in computing r2 is small, but the relative error is 100%. Why is one root computed accurately and the other not?

The cardinal rule in numerical analysis is to avoid subtracting nearly equal numbers: the more nearly equal two numbers are, the more precision is lost in the subtraction. Since b is so much larger than a and c, the value sqrt(d) is very nearly the same as b. In exact arithmetic the two agree to about 16 significant figures, essentially all the precision a double carries; in double precision, sqrt(d) rounds to exactly b, so the subtraction -b + sqrt(d) returns 0. How do we compute r2 more accurately?

Rule 1: Use algebraic trickery to avoid loss of precision.

There's an equivalent but much less familiar form of the quadratic equation. Multiply the numerator and denominator of the standard form by the numerator with the ± sign turned upside down and simplify. For the root computed with the + sign, this gives (-b + sqrt(d))/(2a) = (b² - d)/(2a·(-b - sqrt(d))) = 4ac/(2a·(-b - sqrt(d))) = -2c/(b + sqrt(d)). This new form adds b and sqrt(d) in the denominator, avoiding the problem of subtracting nearly equal numbers. The revised code below is accurate:

double a = 3.0, b = 1.0e9, c = 5.0;
double d = b*b - 4.0*a*c;
double r1 = (-b - sqrt(d))/(2.0*a);
double r2 = -2.0*c/(b + sqrt(d));

There's still some inaccuracy, since d rounds to b*b in machine arithmetic, but because 4ac is so tiny compared with b², the effect on r2 is negligible: the relative error drops from 100% to roughly machine precision. The quadratic equation runs into real trouble when b² is close to 4ac, because then computing d itself subtracts nearly equal numbers; in that case, abandon the quadratic equation and use a root finding algorithm.

Along these same lines, see Comparing three methods of computing standard deviation for numerical examples showing how three algebraically equivalent formulas can have very different behavior. A similar phenomenon happens when fitting a line to data.

Example 2: Ratios of Factorials

Probability calculations often involve taking the ratio of very big numbers to produce a moderate-sized number. The final result may fit inside a double with room to spare, but the intermediate results would overflow. For example, suppose you need to calculate the number of ways to select 10 objects from a set of 200. This is 200!/(190! 10!), about 2.2×10¹⁶. But 200! and 190! would overflow the range of a double.

There are two ways around this problem. Both use the following rule.

Rule 2: Use algebraic trickery to avoid overflow.

The first trick is to recognize that 200! = 200*199*198*...*191*190! and so 200!/(190! 10!) = 200*199*198*...*191/10!. This certainly works, but it's limited to factorials. A more general technique is to use logarithms to avoid overflow: take the logarithm of the expression you want to evaluate and then exponentiate the result. In this example, log( 200!/(190! 10!) ) = log(200!) - log(190!) - log(10!). If you have code that calculates the logarithm of factorials directly without calculating factorials first, you could use it to find the logarithm of the result you want, then apply the exp function. For more on this example, see Five tips for floating point computing.

Example 3: log(1 + x)

Now let's look at the example of computing log(x+1) mentioned in the introduction. Consider the following code:

double x = 1e-16;
double y = log(1 + x)/x;

This code sets y to 0, even though the correct value equals 1 to machine precision. What went wrong?

Double precision numbers are accurate to about 15 decimal places, so 1 + x equals 1 to machine precision. The log of 1 is zero, so y is set to zero. But for small values of x, log(1 + x) is approximately x, and so log(1+x)/x is approximately 1. That means the code above for computing log(1 + x)/x returns a result with 100% relative error. If x is not so small that 1 + x equals 1 in the machine, we can still have problems. If x is moderately small, the bits in x are not totally lost in computing 1 + x, but some of them are. The closer x is to 0, the more bits are lost.

This problem is similar to the problem with the quadratic equation, so we might try a similar solution. Unfortunately, algebraic trickery won't help here. So we turn to our third rule.

Rule 3: Use analytical approximations to avoid loss of precision.

The numerical analyst's favorite form of approximation is the power series. The power series for log(1 + x) is x - x²/2 + x³/3 - ... For small values of x, simply returning x for log(1 + x) is already an improvement. That works well for the smallest values of x, but for some not-so-small values it is not accurate enough, and neither is directly computing log(1 + x). The sample code included with this article uses a rational approximation for small values of x and direct calculation for larger values.

Example 4: Inverse Logit

Next let's look at computing f(x) = e^x/(1 + e^x). (Statisticians call this the "inverse logit" function because it is the inverse of the function they call the "logit" function.) The most direct approach would be to compute exp(x)/(1 + exp(x)). Let's see where that can break down.

double x = 1000;
double t = exp(x);
double y = t/(1.0 + t);

Printing out y gives -1.#IND, Visual C++'s notation for a NaN. This is because the calculation for t overflowed, producing a number larger than could fit in a double. But we can figure out the value of y easily. If t is so big that we can't store it, then 1 + t is essentially the same as t and the ratio is very nearly 1. This suggests we find out which values of x are so big that f(x) will equal 1 to machine precision, then just return 1 for those values of x to avoid the possibility of overflow. This is our fourth rule.

Rule 4: Don't calculate a result you can accurately predict.

The header file float.h defines the constant DBL_EPSILON, the gap between 1 and the next larger double; adding anything much smaller than DBL_EPSILON to 1 just gives 1 back. A little algebra shows that if x is bigger than -log(DBL_EPSILON), then f(x) will equal 1 to machine precision: 1 - f(x) = 1/(1 + e^x) is approximately e^-x, which drops below DBL_EPSILON once x exceeds -log(DBL_EPSILON), about 36 for doubles. So here's a code snippet for calculating f(x) for large values of x:

const double x_max = -log(DBL_EPSILON);
if (x > x_max) return 1.0;

Using the Code

The code provided with this article calculates nine functions that come up in statistics. Each avoids problems of overflow, underflow, or loss of precision that could occur for large negative arguments, large positive arguments, or arguments near zero.

  • LogOnePlusX computes log(1 + x) as in Example 3
  • ExpMinusOne computes e^x - 1
  • Logit computes log(x/(1 - x))
  • LogitInverse computes e^x/(1 + e^x) as discussed in Example 4
  • LogLogitInverse computes log(e^x/(1 + e^x))
  • LogitInverseDifference computes LogitInverse(x) - LogitInverse(y)
  • LogOnePlusExpX computes log(1 + exp(x))
  • ComplementaryLogLog computes log( -log(1 - x) )
  • ComplementaryLogLogInverse computes 1.0 - exp(-exp(x))

Points of Interest

The solutions presented here look unnecessary or even wrong at first. If this kind of code isn't well commented, someone will come through and "simplify" it incorrectly. They'll be proud of all the unnecessary clutter they removed. And if they do not test extreme values, their new code will appear to work correctly. The wrong answers and NaNs will only appear later.


  • 16th April, 2008: Initial post
  • 18th April, 2008: Revised article to clarify a couple of points
  • 9th October, 2008: Added links to further examples
  • 29th October, 2008: Added examples


This article, along with any associated source code and files, is licensed under A Public Domain dedication

About the Author

John D. Cook

United States United States
I am an independent consultant in software development and applied mathematics. I help companies learn from their data to make better decisions.
Check out my blog or send me a note.




Article Copyright 2008 by John D. Cook