That wouldn't make pi a rational number, though. It would be misleading to provide pi in a class named something like rational_number - unless you called it not pi, but pi_approximated_as_a_rational_number.
A (true) story about pi - I believe it is from around 1980; I was told it around 1983 by a guy who had participated in the hunt for The True pi:
At Bergen University, Norway, one professor teaching numerical methods and error propagation had his students estimate the error expected in some transcendental functions in difficult number ranges, and verify it on the University's shiny new IBM 3080 mainframe. The students came back and reported significantly larger errors than their estimates suggested. The surprised professor set out to find the cause of this.
It turned out that the IBM 3080 Fortran libraries were carbon copies of the 370 libraries. Which were carbon copies of the 360 libraries. The 360 got its libraries (in assembler format, of course, with floating point constants in hexadecimal format) as an adaptation of the old 7090 libraries - machines with a different instruction set and a 36 bit word length (rather than the 360's 32 bits). Calculating a binary representation of pi anew would have had to be set up as a separate job. They didn't do that; they just chopped 4 bits off the 7090 binary floating point mantissa for the pi value, ignoring rounding. So the least significant bit, which should have been rounded up to a 1, remained a 0. The professor's theoretical error estimates were based on a properly rounded, not a truncated, pi value.
This truncated 7090-binary pi value from the end of the 1950s was inherited all the way up to the 3080 series, more than 20 years later. When it was discovered and the least significant bit rounded up to a 1, the theoretical error estimates matched the observed errors more or less perfectly.
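The effect is easy to reproduce. A small Python sketch, using an arbitrary 24-bit mantissa width purely for illustration (not the actual 7090 or 360 format):

```python
import math

BITS = 24                                        # illustrative mantissa width only
scaled = math.pi * 2 ** (BITS - 2)               # mantissa value in [2^(BITS-1), 2^BITS)
truncated = math.floor(scaled) / 2 ** (BITS - 2) # chop the tail bits, ignoring rounding
rounded = round(scaled) / 2 ** (BITS - 2)        # round to nearest instead

err_trunc = abs(math.pi - truncated)
err_round = abs(math.pi - rounded)
# Truncation can be off by almost a full unit in the last place;
# rounding is off by at most half a unit.
assert err_round < err_trunc
```

The error estimate in the story assumed the half-ULP bound of the rounded constant; the truncated constant silently doubled the worst case.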
Still wouldn't help electrical engineers whose work is based on e and i. That said, there are a lot of good reasons to support some sort of base 10 fractional numbers, to avoid the rounding errors that result from converting base 10 fractions to base 2 and back.
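For what it's worth, Python's standard library already ships a base-10 type of exactly this kind; the classic 0.1 + 0.2 case shows the difference:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 exactly, so the sum drifts:
assert 0.1 + 0.2 != 0.3
# A base-10 type sidesteps the decimal-to-binary conversion error entirely:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```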
You may also consider a continued fraction representation, which you can do arithmetic on directly; and if you allow them to be "generators", they can even represent some otherwise annoying numbers such as e, phi and irrational square roots.
But this also has various problems. For example, when subtracting "accidentally equal" numbers (i.e. two generators that are not the same instance and not binary equivalent, but generate the same list), it takes infinitely many steps to find out that there is never a difference, and this delays the generation of the "head" of the result generator forever (i.e. there is no answer).
Working with finite CFs necessarily means that some numbers cannot be represented any more since they get truncated at some point.
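As a sketch of the generator idea: phi has the infinite continued fraction [1; 1, 1, 1, ...], and truncating it after n terms gives a convergent computed by the standard recurrence. In Python (function names are my own):

```python
import itertools
import math

def cf_phi():
    # phi = [1; 1, 1, 1, ...] as an infinite continued fraction generator
    while True:
        yield 1

def convergent(cf_terms):
    # Standard recurrence for convergents h_n/k_n:
    # h_n = a_n*h_{n-1} + h_{n-2},  k_n = a_n*k_{n-1} + k_{n-2}
    h_prev, h = 0, 1
    k_prev, k = 1, 0
    for a in cf_terms:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
    return h / k

# Truncating to 30 terms already matches (1 + sqrt(5))/2 very closely:
approx = convergent(itertools.islice(cf_phi(), 30))
assert abs(approx - (1 + math.sqrt(5)) / 2) < 1e-10
```

The truncation in the last step is exactly the problem mentioned above: any finite prefix of the generator is merely a rational approximation.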
By the way, floating point arithmetic is more exact than many people give it credit for. It doesn't always have to round: for example, subtracting two numbers of equal sign that are within a factor of 2 of each other yields the exact difference between the two (the Sterbenz lemma). A large part of that perception is the great Decimal Bias of the modern world (at other times in history it would have been a Sexagesimal Bias), but there is nothing inherently special about a base ten system.
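The lemma is easy to check empirically in Python, using exact rationals as the referee (1.9 and 1.1 are arbitrary same-sign values within a factor of 2 of each other):

```python
from fractions import Fraction

# Sterbenz lemma: if a/2 <= b <= 2a (same sign), a - b incurs no rounding.
a, b = 1.9, 1.1
diff = a - b

# Compare the float result against unlimited-precision rational arithmetic
# on the exact binary values actually stored in a and b:
assert Fraction(diff) == Fraction(a) - Fraction(b)
```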
When we were CompSci students around 1980, ready-made packages were not readily available, so a group of us created an arbitrary precision arithmetic Fortran package for a friend in theoretical physics:
This fellow was working on an analytical model to describe, at the micro level, what happens when two wave fronts collide head on. When applying this model in a simulation, even the Univac 1100's double precision 72 bit floating point caused discontinuities - more or less perfect square waveforms, which none of us really believed existed in the physical world, certainly not in liquids... So we gave him a library where we set up buffers for 200 decimal digits. (For character I/O, BCD was much better suited than a 600+ bit pure binary value - speed of calculation was of no importance.)
Even with 600+ bits of precision, he experienced too large discontinuities. Then we realized that he was adding up the elements of a series expansion from the head of the series, from the big towards the smaller elements. This led us into a huge argument with him: as I said, he was in theoretical physics, and he simply refused to accept that the order of adding together series elements would make a difference. We were several guys, over a period of many days, trying to explain, each in our own way, why "in theory, theory and practice are identical, but in practice they are not". Finally, he gave in and turned the addition around the other way. The waves came out so smooth that we silently suspected that the Univac 72 bit format would have been sufficient, if he had just, from the very beginning, added the smallest elements first.
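The effect is easy to demonstrate in IEEE double precision; the values below are chosen so the rounding behaviour is deterministic:

```python
values = [1e16] + [1.0] * 100   # one big element, then many small ones

big_first = 0.0
for v in values:                # head-first: each +1.0 is rounded away,
    big_first += v              # because the spacing at 1e16 is 2.0

small_first = 0.0
for v in reversed(values):      # tail-first: the 1.0s accumulate exactly
    small_first += v            # to 100.0 before meeting the big element

assert big_first == 1e16            # the 100 small elements vanished
assert small_first == 1e16 + 100    # small-to-large keeps them all
```

The physicist's series behaved the same way: each small element, added to the already-huge partial sum, was rounded into nothing.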
There was an intense discussion in academia around that time whether Computer Science is a science requiring its own scientists to do computer related jobs, or whether every engineer, mathematician or physicist should learn to handle a computer himself. I used this story several times to argue that scientists getting close to a computer must learn enough about them to avoid silly problems caused by a poor understanding of how computers operate.
First, in quite a few applications, that doesn't matter. Whether the calculations are done in 10 or 20 milliseconds doesn't matter if you prepare a value that is displayed in an interactive dialog.
Second, if arithmetic calculation is only a small part of the handling, formats other than traditional floating point may be faster for the other operations. Take an archetypical Cobol application: it reads in, moves around and displays values - counts, and dollars and cents - every now and then doing a multiplication, but mostly just addition and subtraction. The time saved in I/O by a BCD format compared to a float format greatly offsets the slower BCD addition/subtraction.
Third: If you use rationals to avoid rounding errors, your float alternative might be an arbitrary precision float package. Adding 1/3 + 1/6 as rationals is likely to be much faster than calling a library function that maintains a list of mantissa elements to add 0.333333... and 0.166666... at, say, 512 bits.
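In Python, for instance, the rational version is a one-liner with the standard library's Fraction type:

```python
from fractions import Fraction

# 1/3 + 1/6 is exact and cheap as a rational...
assert Fraction(1, 3) + Fraction(1, 6) == Fraction(1, 2)
# ...while no finite binary float holds 1/3 exactly, however wide:
assert float(Fraction(1, 3)) != Fraction(1, 3)
```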
With a modern language I think you can implement it yourself. All C# variable types derive from System.Object anyway, and then you can override the arithmetic operators for this new type. I can sort of picture it: storing each value as two integers, numerator and denominator.
I wouldn't hesitate for a second to give that as the exercise of the week to second year programming students: implementing the four basic arithmetic operations for rationals.
There is only a small problem: if the denominators are not identical in addition/subtraction, you can trivially find a common denominator by multiplying the two, but in the general case it is not the least common denominator. If you use 64 bit integers for the denominator you might delay handling this until the results are to be used (at least for small student level applications), but at some stage you will have to remove the common factors of the numerator and denominator. In principle this is not difficult as long as the problem is of "student size", but Eratosthenes' sieve is not very efficient (certainly not in space!) - you wouldn't want to employ that for every addition or subtraction!
Factors of 2 are easy to get rid of: while both (numerator AND 1) and (denominator AND 1) are zero - i.e. both values are even - shift both right by one bit. This you can do after every basic arithmetic operation (or complex expression). Other factors are worse. If you find it acceptable to do a whole series of division/remainder operations (trying all prime factors up to the square root of the smaller of numerator and denominator), then the space required is roughly limited to a single read-only table of known primes, but you'll certainly be keeping the division circuitry of the CPU hot.
My recommendation would be to delay the factorization until it is needed for presentation or for conversion to other formats. Viewing/interpreting the values through a debugger would be next to impossible, though, and you should implement an exception handler that removes common factors if an operation would cause the numerator/denominator to overflow the integer format - maybe doing a 2-factor removal as a fast first try, and going on to a series of division operations only if the values are still above a certain overflow risk threshold. If you use 64 bit ints, you might in the worst case have to do divisions for every prime less than 4G; that is quite some operation...
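As an aside, removing common factors does not actually require any factorization: Euclid's GCD algorithm finds the greatest common divisor in a handful of division/remainder steps, whatever the prime factors happen to be. A minimal sketch of the student exercise in Python (the class and its layout are my own invention, not anyone's library):

```python
from math import gcd

class Rational:
    """Toy rational number, kept reduced via Euclid's gcd - no sieve needed."""

    def __init__(self, num, den):
        if den == 0:
            raise ZeroDivisionError("zero denominator")
        if den < 0:                      # normalize the sign into the numerator
            num, den = -num, -den
        g = gcd(num, den)                # Euclid's algorithm, not factorization
        self.n, self.d = num // g, den // g

    def __add__(self, other):
        # common denominator by plain multiplication; the constructor reduces
        return Rational(self.n * other.d + other.n * self.d, self.d * other.d)

    def __mul__(self, other):
        return Rational(self.n * other.n, self.d * other.d)

    def __eq__(self, other):
        return (self.n, self.d) == (other.n, other.d)

assert Rational(1, 3) + Rational(1, 3) == Rational(2, 3)   # not 6/9
assert Rational(1, 3) + Rational(1, 6) == Rational(1, 2)
```

Reducing eagerly in the constructor, as here, trades a gcd per operation for never worrying about overflow; the delayed-reduction scheme described above is the other end of that trade-off.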
How did you handle the (least common) denominator problem? Or did you simply ignore it, accepting the risk that it might overflow? How did you present the results - as if they were floats (through a "(float)numerator/(float)denominator" division) using float output format? Did you do anything to remove common factors, or did 1/3 + 1/3 end up as 6/9?
I don't remember the details because it was such a long time ago but I distinctly remember that I did determine the least common denominator for fractions and ensured that all rational numbers were stored in their reduced form.
While a rational number type has both advantages and disadvantages, and we could discuss those forever, it is an undeniable truth that JSON should have had a rational number type, simply because some languages and libraries already support rational numbers, and there should exist a portable way for applications using rational numbers to serialize and exchange them with other applications. A rational number that is within the range of a floating point type can always be "downgraded" to that floating point type, but the converse is problematic. For example, is the JSON expression [0.67,0.33] an array with the numbers 2/3 and 1/3? If not, then what about [0.6666666666666667,0.3333333333333333]? It would have been clear what was meant if we were allowed to write [2/3,1/3].
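Pending such a type, one portable workaround - purely a convention, not part of any JSON standard - is to serialize rationals as "numerator/denominator" strings:

```python
import json
from fractions import Fraction

def dump_rationals(values):
    # Encode each rational as a "n/d" string inside an ordinary JSON array.
    return json.dumps([f"{v.numerator}/{v.denominator}" for v in values])

def load_rationals(text):
    # Fraction's constructor accepts the "n/d" string form directly.
    return [Fraction(s) for s in json.loads(text)]

data = [Fraction(2, 3), Fraction(1, 3)]
assert load_rationals(dump_rationals(data)) == data   # exact round trip
```

Both sides must agree on the convention, of course, which is exactly the portability problem a native rational type would have solved.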