Filling in the Gaps: Simple Interpolation

John D. Cook, 10 Dec 2015, BSD
An introduction to interpolation and inverse interpolation


Sometimes you don't have data where you need it. Maybe you have figures collected sporadically but you need to present them as if they were collected at regular intervals, say once a week. Or maybe you need to make your best guess at missing data points. This is a job for interpolation.

Sometimes instead of wanting to guess at output values for inputs that you choose, you want to do the opposite: You have an output in mind and you need to guess what input would give you the output that you want. This calls for inverse interpolation.

This article presents the simplest and most widely applicable forms of interpolation — linear interpolation and quadratic interpolation. The article concludes with some recommendations for what to try next when these forms of interpolation are not good enough.

Linear Interpolation

There are many ways to fill in missing data, ranging from very simple to very sophisticated. Linear interpolation is the simplest method. It is also one of the most robust methods, i.e. it is likely to give reasonable answers under a wide variety of circumstances, including when substantial noise is added to the inputs. Unfortunately linear interpolation is also the least accurate method. However, accuracy is over-rated. Or at least robustness is under-rated. Robustness is often more important than extra accuracy, especially if you're not exactly confident that you know what you're doing. I would give the same advice for interpolation that the Extreme Programming folks give for software development: first try the simplest thing that could possibly work.

If you have two inputs, x0 and x1, and two corresponding outputs y0 and y1, the equation of the line connecting (x0, y0) and (x1, y1) is the following:

y = y<sub>0</sub>(x - x<sub>1</sub>)/(x<sub>0</sub> - x<sub>1</sub>) + y<sub>1</sub>(x - x<sub>0</sub>)/(x<sub>1</sub> - x<sub>0</sub>)

With this formula, you can stick in any value of x you want and get out a new value of y. So if you have a value x2 and you want to guess its corresponding output y2, then you have this equation:

y<sub>2</sub> = y<sub>0</sub>(x<sub>2</sub> - x<sub>1</sub>)/(x<sub>0</sub> - x<sub>1</sub>) + y<sub>1</sub>(x<sub>2</sub> - x<sub>0</sub>)/(x<sub>1</sub> - x<sub>0</sub>)

Linear interpolation is most reliable if the x you stick in is between the values x0 and x1. If x is not between these two values, technically you are extrapolating rather than interpolating. This still works as long as the new value of x isn't too far from x0 and x1. The further the new x value is from the input values used to specify the line, the more suspicious you should be of your output.

Now suppose you have the data points (x0, y0) and (x1, y1), but instead of trying to predict a new y value you want to predict a new x value. That is, you have a y value you're trying to get out and you want to guess what input x would give you that output. Then you can reverse the roles of x and y in the equation above and get the following:

x<sub>2</sub> = x<sub>0</sub>(y<sub>2</sub> - y<sub>1</sub>)/(y<sub>0</sub> - y<sub>1</sub>) + x<sub>1</sub>(y<sub>2</sub> - y<sub>0</sub>)/(y<sub>1</sub> - y<sub>0</sub>)

As before, this works best if the new value y2 is between the previous y values y0 and y1. If it's not between these values but close to one of them, the result is likely to be useful. The further out y gets, the more suspicious you should be of the result.

The formulas above are mathematically correct for any values you stick in. When I say that you should be suspicious of the output in extreme circumstances, it's not because some approximation is going on. However, in practice, the assumption that the points (x0, y0) and (x1, y1) can be used to accurately predict the point (x2, y2) may not hold when x2 or y2 are far from their predecessors.

When you write code to do linear interpolation, the only thing to be careful about is input validation. For (ordinary) interpolation, it's important to verify that x0 does not equal x1 so that the interpolation function doesn't divide by zero. Given arrays of x and y values, the following code fits a straight line to (x[0], y[0]) and (x[1], y[1]) and uses this line to predict a y[2] value for x[2].

if (x[0] == x[1])
{
    // report error: the x values must be distinct
}
else
{
    y[2] = y[0]*(x[2] - x[1])/(x[0] - x[1]) + y[1]*(x[2] - x[0])/(x[1] - x[0]);
}

Similarly, for inverse interpolation you need to make sure y0 does not equal y1 so that you don't divide by zero. The following fits a straight line to (x[0], y[0]) and (x[1], y[1]) and uses it to predict the x[2] value for y[2].

if (y[0] == y[1])
{
    // report error: the y values must be distinct
}
else
{
    x[2] = x[0]*(y[2] - y[1])/(y[0] - y[1]) + x[1]*(y[2] - y[0])/(y[1] - y[0]);
}

If you just need to do a quick interpolation on a small amount of data rather than write a program that does interpolation, you may want to use an online linear interpolation calculator. This interpolator is implemented in hand-written client-side JavaScript and so you can read the source.

Quadratic Interpolation

As the Extreme Programming folks would recommend, when the simplest thing doesn't work, try the next simplest thing that could possibly work. For interpolation, that means quadratic interpolation. Instead of fitting a straight line to two points, quadratic interpolation fits a parabola to three points. To figure out how to generalize the formulas above to quadratics, look back at the equation for linear interpolation. The term (x - x1)/(x0 - x1) is 1 when x = x0 and 0 when x = x1. Therefore the term y0(x - x1)/(x0 - x1) is y0 at x0 and 0 at x1. Similarly, (x - x0)/(x1 - x0) is 1 when x = x1 and 0 when x = x0, and so y1(x - x0)/(x1 - x0) is y1 at x1 and 0 at x0.

For quadratic interpolation, we follow a similar pattern, constructing quadratic polynomials that are 1 at one of the given xs and 0 at the others. Then when we multiply by the corresponding y values and add up terms, the result has the necessary y value at each x. So the quadratic polynomial fitting the points (x0, y0), (x1, y1), and (x2, y2) is:

y<sub>0</sub>P<sub>0</sub>(x) + y<sub>1</sub>P<sub>1</sub>(x) + y<sub>2</sub>P<sub>2</sub>(x)


  • P<sub>0</sub>(x) = (x - x<sub>1</sub>)(x - x<sub>2</sub>)/((x<sub>0</sub> - x<sub>1</sub>)(x<sub>0</sub> - x<sub>2</sub>))
  • P<sub>1</sub>(x) = (x - x<sub>0</sub>)(x - x<sub>2</sub>)/((x<sub>1</sub> - x<sub>0</sub>)(x<sub>1</sub> - x<sub>2</sub>))
  • P<sub>2</sub>(x) = (x - x<sub>0</sub>)(x - x<sub>1</sub>)/((x<sub>2</sub> - x<sub>0</sub>)(x<sub>2</sub> - x<sub>1</sub>))

The polynomials Pi above are called "Lagrange" polynomials. For cubic and higher interpolation, the pattern is the same: first construct polynomials that are 1 at one of the xs and 0 at all other xs, then multiply each by the corresponding y values and sum.

If your inputs are free of noise, quadratic interpolation can give much better accuracy than linear interpolation. For example, in mathematical tables, the given values are precise to many decimal places, but you may be interested in a value not in the table. Say a function is tabulated at 0.1, 0.2 and 0.3 but you want to know its value at 0.17. You could probably get much better accuracy using a parabola to fit all three points rather than using a line to just fit the first two points. On the other hand, if your inputs have a substantial amount of error, some sort of random noise, then quadratic interpolation could magnify that noise by over-reacting to the noise.

To implement the above equations in software, we need to verify that the three x values used to fit the parabola are distinct in order to avoid dividing by zero.

if (x[0] == x[1] || x[0] == x[2] || x[1] == x[2])
{
    // report error: the x values must be distinct
}
else
{
    y[3]  = y[0]*(x[3] - x[1])*(x[3] - x[2])/((x[0] - x[1])*(x[0] - x[2]));
    y[3] += y[1]*(x[3] - x[0])*(x[3] - x[2])/((x[1] - x[0])*(x[1] - x[2]));
    y[3] += y[2]*(x[3] - x[0])*(x[3] - x[1])/((x[2] - x[0])*(x[2] - x[1]));
}

As with the linear interpolator, there is an online quadratic interpolation calculator implemented with client-side JavaScript.

Now what if we want to use interpolation to solve for an x value? Say we have three points (x0, y0), (x1, y1), and (x2, y2) and we want to find the x3 value corresponding to a given y3. We could fit a polynomial to the three points exactly as above and solve for x, but that would generally not be a good idea. Instead, we reverse the roles of x and y and treat y as the independent variable.

Note that this is where quadratic interpolation differs from linear interpolation. With linear interpolation, reversing the roles of x and y gives the same answer as fitting y as a function of x and then solving for the missing x. With quadratic interpolation, the analogous steps are not the same. Fitting the points as a function of x and then solving the resulting equation would amount to finding the roots of a quadratic equation. This might not be possible, or it might give two different answers. Even if it gives a single answer, that answer might amplify errors in the data. By simply reversing the roles of x and y, on the other hand, we get a simple, well-behaved solution.

What If Linear and Quadratic Interpolation Aren't Good Enough?

In theory, you could fit higher order polynomials: a third degree polynomial to four points, a fourth degree polynomial to five points, etc. This is generally not a good idea. Fitting higher degree polynomials amplifies errors in the data.

If quadratic interpolation isn't good enough, you may need some more sophisticated form of interpolation. For example, natural cubic splines are useful in many contexts. However, it's hard to say much in general. Beyond simple linear or quadratic interpolation, the best technique depends heavily on the problem context. You may need to abandon interpolation entirely and use a DSP algorithm, such as a low-pass filter, or do some sort of statistical regression. A good place to start looking for more information would be the Numerical Recipes book.


  • 11th September, 2008: Initial post
  • 8th October, 2008: Rewritten to include quadratic interpolation and code samples
  • 10th December, 2015: Typo corrected


This article, along with any associated source code and files, is licensed under The BSD License


About the Author

John D. Cook
Singular Value Consulting
United States United States
I am an independent consultant in software development and applied mathematics. I help companies learn from their data to make better decisions.

Check out my blog or send me a note.


Article Copyright 2008 by John D. Cook