
Calculating the Pi Number

13 May 2020 · CPOL · 5 min read
Calculating pi faster using a simple formula
Research on increasing the accuracy of the Leibniz formula for calculating pi, illustrated with graphs

Introduction

There are many interesting ways to calculate pi: geometric constructions, experiments with random numbers, and a huge number of formulas, from simple to complex.

These methods are well researched, and it is known how much time and computing resources each of them requires.

It is always interesting to improve an existing method so that it works faster and more simply.

Leibniz Formula

Basic Principles

The Madhava-Gregory-Leibniz formula is a simple sum of fractions: one divided by each odd number from 1 to infinity, with the sign alternating at every step. For example, the first term is 1, the second is 1/3 with a minus sign, the third is 1/5 with a plus sign, and so on. This gives the expression 1 - 1/3 + 1/5 - 1/7 + 1/9 - ...

Summed over an infinite number of terms, this series equals one quarter of pi. To get pi, we multiply the result by 4.
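Written compactly, the series is:

pi/4 = sum over k = 0, 1, 2, ... of (-1)^k / (2k + 1)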

Using the Code

A simple C program for calculating the value of pi:

C++
#include <stdio.h>

int main(void)
{
    double pi_4 = 0;   /* accumulates the partial sum of pi/4 */
    int n = 100;       /* even; one greater than the last odd denominator */
    int sign = 1;      /* 1: add the next term, 0: subtract it */
    int i;

    for (i = 1; i < n; i += 2)
    {
        if (sign)
        {
            pi_4 += 1.0 / i;
            sign = 0;
        }
        else
        {
            pi_4 -= 1.0 / i;
            sign = 1;
        }
    }

    printf("PI = %.12f\n", pi_4 * 4);
    return 0;
}

In the for loop, the variable i changes from 1 to 99 in steps of 2; that is, it takes the values 1, 3, 5, 7, 9, and so on. The value of n is one greater than the maximum value of i in the loop. The variable sign flips at each step, and by checking it in the if condition we decide whether to add or subtract the term 1/i. The resulting sum is multiplied by 4.
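As an aside, the same alternation can be written without the if branch by storing the sign as +1/-1 and flipping it each iteration; a minimal sketch, assuming the same n as above:

C++
double pi_4 = 0;
double term_sign = 1.0;        /* +1 or -1, flipped every iteration */
int i;

for (i = 1; i < n; i += 2)
{
    pi_4 += term_sign / i;     /* adds or subtracts 1/i */
    term_sign = -term_sign;
}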

For a visual representation of the calculation process, let's draw a graph of the Leibniz formula using the OpenGL library, putting the loop into the display function.

C++
double pi_4 = 0;
int sign = 1;
int i;

glBegin(GL_LINE_STRIP);
glColor3f(1, 1, 1);                  /* white: the running estimate */
for (i = 1; i < n; i += 2)
{
    if (sign)
    {
        pi_4 += 1.0 / i;
        sign = 0;
    }
    else
    {
        pi_4 -= 1.0 / i;
        sign = 1;
    }
    glVertex3f(i, pi_4 * 4, 0);      /* plot the current estimate of pi */
}
glColor3f(1, 0, 1);                  /* final point, with the 2/n correction applied */
glVertex3f(n, pi_4 * 4 + 2. / n, 0);
glEnd();

We get the following graph:

[Image 1: partial sums of the Leibniz series (white) oscillating around the pi line]

At each step, we get a value (white) that lies far from the pi line (blue). To reach the exact value, we would have to continue the endless calculation. So how do we get a usable result? It all depends on the goal.

Minimization Task

Our goal is to minimize the deviation (red) from the pi line (blue) within a limited number of steps. At each step, the white line crosses the pi line. The midpoint of each segment (red) is closer to the line than its endpoints (white). Thus, stopping at any step leaves us at the maximum deviation from the line (a white endpoint), which is the opposite of our goal of minimizing it.

[Image 2: the partial sums (white), the pi line (blue), and the segment midpoints (red)]

The Indian mathematician Madhava used correction terms to improve accuracy. We will find the correcting terms empirically. Compute the numerical result for i from 1 to 99 (n = 100) and compare it with the exact value of pi. For verification, use the value 4 * arctan(1).

C++
double pi = pi_4 * 4;
printf("PI = %.12f\n", pi);          /* our estimate */
printf("PI = %.12f\n", atan(1) * 4); /* reference: 4 * arctan(1), needs <math.h> */

PI = 3.121594652591

PI = 3.141592653590

Please note that the numbers differ in only a few digits. The first difference is 2/n. Recall that n is an even number, one greater than the maximum value of i in the loop. Since the last term entered with a minus sign, we add 2/n.

C++
pi = pi_4 * 4 + 2.0 / n;

Now the numbers will be like this:

PI = 3.141594652591

PI = 3.141592653590

The second difference is 2/n³, that is, 2 / (n * n * n). Now subtract 2/n³ from the sum.

C++
pi = pi_4 * 4 + 2.0 / n - 2.0 / (n * n * n);

Let's check the result:

PI = 3.141592652591

PI = 3.141592653590

The accuracy obtained is now better than 1/n³, in contrast to the initial accuracy, which was worse than 1/n.
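Putting it all together, here is a self-contained sketch of the program with both correction terms (for an even n):

C++
#include <math.h>
#include <stdio.h>

int main(void)
{
    int n = 100;          /* even; one greater than the last odd denominator */
    double pi_4 = 0;
    int sign = 1;
    int i;

    for (i = 1; i < n; i += 2)
    {
        if (sign) { pi_4 += 1.0 / i; sign = 0; }
        else      { pi_4 -= 1.0 / i; sign = 1; }
    }

    double nd = n;        /* double, so n * n * n cannot overflow for large n */
    double pi = pi_4 * 4 + 2.0 / nd - 2.0 / (nd * nd * nd);

    printf("PI = %.12f\n", pi);            /* corrected estimate */
    printf("PI = %.12f\n", atan(1.0) * 4); /* reference value    */
    return 0;
}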

Conclusion and Points of Interest

At first, I used very large values, n = 1,000,000,000, and waited several seconds for the result. Then I divided the calculation into ten parallel threads and reduced n. Each thread produced its own partial sum, and with a smaller n I noticed that most of the digits matched, except for a few. I looked at the results for n = 100, 1,000, and 10,000 and found the 2/n difference. On closer examination, a second difference of 2/n³ appeared.
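The threaded code itself is not shown in the article; below is a minimal sketch of such a split using C++11 threads (the thread count and chunking are illustrative, not the original code):

C++
#include <cstdio>
#include <thread>
#include <vector>

int main()
{
    const long long n = 1000000000LL;  /* total range: odd i in [1, n) */
    const int num_threads = 10;        /* assumes num_threads divides n evenly */
    std::vector<double> partial(num_threads, 0.0);
    std::vector<std::thread> pool;

    for (int t = 0; t < num_threads; ++t)
    {
        pool.emplace_back([t, n, num_threads, &partial] {
            const long long chunk = n / num_threads;  /* even, so every chunk starts on an odd i */
            const long long begin = t * chunk + 1;
            for (long long i = begin; i < begin + chunk; i += 2)
            {
                /* term number k = (i - 1) / 2 has sign (-1)^k */
                partial[t] += (((i - 1) / 2) % 2 == 0 ? 1.0 : -1.0) / i;
            }
        });
    }

    double pi_4 = 0;
    for (int t = 0; t < num_threads; ++t)
    {
        pool[t].join();
        pi_4 += partial[t];            /* combine the partial sums */
    }
    std::printf("PI = %.12f\n", pi_4 * 4);
    return 0;
}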

Of course, even with n = 100, further differences are noticeable, but they need to be verified with exact arithmetic. It seems to me that two terms are enough to save computing resources. Indeed, to obtain the same accuracy by plain summation, about n⁴ terms would have to be computed.

For comparison, the plain sum with n = 100,000,000 gives:

PI = 3.141592633590

PI = 3.141592653590

Here again, the accuracy can be enhanced by the 2/n term: millions of additional terms are replaced by a single correction. This is similar to obtaining a square wave from a sum of many sine-wave harmonics.

This study turned out to be useful, and I went looking for confirmation. I found the Wikipedia article "Leibniz formula for pi".

There I read that the Indian mathematician Madhava used corrective terms to increase accuracy, which convinced me that my research was correct. I invite everyone to check the corrective terms I have proposed; perhaps you will find further corrective terms with the help of exact calculations.

Corrective terms are useful to improve accuracy and save computing resources, especially in embedded systems and mobile devices.

It would also be interesting to apply corrective terms in digital signal processing tasks.

Thanks for reading!

History

  • 14th May, 2020: Initial version

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Systems Engineer
Estonia Estonia
Engineer of Automated Data Processing and Control Systems. Graduated from Novosibirsk State Technical University (NSTU), Department of Automation and Computers.
I worked in television and developed broadcast automation systems.
I write programs in C/C++, Java, PHP, JavaScript, HTML, XML, CSS, MySQL.
I am interested in software architecture. Architecture is an art. I use the style of modernism, in particular cubism, by which I mean the desire to decompose a problem into the simplest modules. This approach allows you to assemble complex systems that surpass the complexity of the initial design concepts. I am the author of this programming technique.

Comments and Discussions

 
Question: Why not just change the sign at each step?
Member 11812152, 17-May-20 9:22

Answer: Re: Why not just change the sign at each step?
Askar Azhibaev, 17-May-20 14:16

General: Re: Why not just change the sign at each step?
Member 11812152, 18-May-20 2:58

Question: Question
avisal, 15-May-20 6:21

Answer: Re: Question
Askar Azhibaev, 6-Sep-20 19:55

Question: Desperately slow
YvesDaoust, 15-May-20 2:33

Answer: Re: Desperately slow
Askar Azhibaev, 15-May-20 19:57

Answer: Re: Desperately slow
Philippe Verdy, 4-Jun-20 13:04

General: Re: Desperately slow
YDaoust, 4-Jun-20 20:27

General: Re: Desperately slow
Philippe Verdy, 10-Oct-21 11:33
The Taylor expansion of arctangent can be used with smaller rational arguments (with x = 1/10^k):

arctan(x) = x - x^3/3 + ... + (-1)^n x^(2n+1)/(2n+1) + o(x^(2n+1))

When you do that, the convergence speed depends only on the choice of k. There is a residual difference that can also be expressed as a linear combination of arctan(x) for x much smaller than 1/10^k (so with an even faster convergence: above some point, it no longer contributes at all to the precision of the result, and this residue is much faster to compute than the expansion of arctan(1/10^k)).

And for small arguments, arctan(x) is nearly equal to x, with only a small error.

1/10^k is very interesting because you can compute directly in base 10 without needing any final conversion from base 2 to base 10 on a very large number (which rapidly gets costly, even costlier than computing the base-2 number alone).

What is best is that the linear combination can be parallelized a lot (so you can easily distribute the computation across multiple computers on a network).

The limited Taylor expansions of elliptic integral functions also offer another alternative for computing the integer factors of the linear combination of these functions, because they have fewer terms.

Today the current record is about 50 trillion decimals, computed on a single server with local storage. But today's technology with networks of computers should extend this limit rapidly to about 50 quintillion decimals. The problem is not the speed but the amount of storage; yet since you can distribute the workload, you can also distribute the storage over a very vast network, not limited to terabytes or petabytes. You could even manage yottabytes if you can rent that storage somewhere, but the cost will be high: the electric energy spent, the network bandwidth needed to interconnect the computers, and the array of computers itself.

So do we really need so many decimals of pi? I think there are better uses for these supercomputers: real simulations with practical applications (fluid mechanics, nuclear research, cosmology, medicine and biochemistry, genetics, AI processing of "big data" collections to find correlations, behavioral analysis, analysis of human language, improving translations, inferring emotions in human faces, computer-aided animation, optimizing hardware designs for travelling-salesman-like problems, financial applications and risk evaluation, and anything that needs more accurate, longer-term predictions).

Note that today the most limiting factor for complex algorithms is no longer their time complexity but their space complexity: time can always be reduced using parallelism (and smart algorithms, including AI and "machine learning" using massive parallelism, or redistribution of data using transforms like the FFT), but that comes at the price of the storage space needed. And we know that this storage space is finite and will never expand fast: we have almost reached the limits of integration where quantum effects can no longer be ignored as insignificant, and where error correction is already needed. This is critical for machine learning and quantum computing, where error correction is not only needed but very costly in terms of space complexity; parallelism runs into the fundamental limits of physics, and our reachable, "visible" universe is finite.

So we need now to trade between parallelism (limited by the needed space which is very hard to reduce) and time (where there are still improvements, provided we develop new algorithms... and make use of serious maths!).

If we need to develop really secure algorithms for the web, we should select those that have the largest space complexity (this also includes applications like cryptocurrencies and blockchains and their "proof of work", currently based on time complexity, but that is a flaw!). I do think that the RSA algorithm (based on the time complexity of factoring large integers) is also flawed: we need better algorithms based on space complexity. So cryptocurrencies, or blockchains in general, should be based on "proof of owned space", which would finally bind them to their real cost of acquisition, making cryptocurrencies more viable as assets with a trustworthy economic value.

Which algorithms will be the most efficient? Those that are the most efficient in terms of storage space. Instead of focusing on the pure time performance of computing algorithms, we will focus on the design of data structures and the topology of their distribution. Then we will build a network architecture supporting these structures, using search trees with a sufficient width (degree) per node but an arbitrary number of nodes, connected in a smart topology that distributes the search path lengths correctly.

And this will be true for algorithms computing numerical values (e.g., Taylor expansions): it is time to reformulate these problems using spatial transforms. You will see that Fourier-like transforms will always be helpful and will win over all other strategies (e.g., a basic multiplication of two numbers with an arbitrarily large count n of digits can be performed in linear time O(n), but at the cost of an exponential amount of storage space).

The future of computing will not be centralized on a single host: the computer is now the whole web and everything that can connect to it and perform some computing. There is a huge future for distributed computing. A real AI that can surpass human intelligence will be the intelligence of the whole WWW seen as a single computer, harnessing the computing capability not just of the computers connected to it but also of the humans using these devices, even if they make some errors that error-correction algorithms will fix automatically and more reliably than any single human or single computer could today! There are also new computing capabilities we can find and use directly in nature (notably in biology, but also in other existing structures of our accessible universe at all scales): nature is not only intelligent, it is creative, resilient, and widely reproducible (with some variations we can take into account), and it is where we perceive a kind of "magic", created by its proliferation. A good way to measure this is "entropy" (which can be many things but is certainly not "random", as it is governed by many strong rules).
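To illustrate the arctan-series approach from this comment with a concrete, classical instance: Machin's identity pi/4 = 4 * arctan(1/5) - arctan(1/239) can be evaluated with exactly this expansion. A minimal sketch in double precision (not the decimal-base scheme described above):

C++
#include <stdio.h>

/* arctan(x) via its Taylor series, summed until terms vanish in double precision */
static double arctan_series(double x)
{
    double term = x, sum = 0;
    for (int k = 0; term != 0 && k < 1000; ++k)
    {
        sum += (k % 2 == 0 ? term : -term) / (2 * k + 1);
        term *= x * x;   /* x^(2k+1) -> x^(2k+3) */
    }
    return sum;
}

int main(void)
{
    double pi = 4 * (4 * arctan_series(1.0 / 5) - arctan_series(1.0 / 239));
    printf("PI = %.12f\n", pi);
    return 0;
}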
Question: Why not group two steps, instead of testing the sign in alternating branches?
Philippe Verdy, 4-Jun-20 19:36

Answer: Re: Why not group two steps, instead of testing the sign in alternating branches?
Askar Azhibaev, 15-May-20 18:34

General: Re: Why not group two steps, instead of testing the sign in alternating branches?
Philippe Verdy, 16-May-20 3:20

General: Re: Why not group two steps, instead of testing the sign in alternating branches?
Philippe Verdy, 16-May-20 3:42

General: Re: Why not group two steps, instead of testing the sign in alternating branches?
Philippe Verdy, 4-Jun-20 12:34
Question: Calculating Pi
Bob1000, 15-May-20 1:32

Answer: Re: Calculating Pi
Askar Azhibaev, 15-May-20 18:34
