Floating-point numbers on computers can exactly represent only values that are sums of powers of two. With enough binary bits you can get as close as you like to an arbitrary decimal number, but you may never hit it exactly.

So, if we're aiming to represent decimal 0.7 in binary:

0.1 (where the . is the binary point) => 1 / 2 decimal => 0.5

0.11 => 3 / 4 decimal => 0.75

0.101 => 5 / 8 decimal => 0.625

0.1011 => 11 / 16 decimal => 0.6875

0.10111 => 23 / 32 decimal => 0.71875

0.101101 => 45 / 64 decimal => 0.703125

...

0.1011001101 => 717 / 1024 decimal => 0.7001953125 (10 bit encoding)
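As a sketch of where rows like that last one come from, here's a little Python (my own helper, not from the post) that rounds 0.7 to the nearest k-bit binary fraction n / 2**k; the 10-bit case gives 717/1024, matching the row above:

```python
def nearest_fraction(x, bits):
    """Round x to the closest binary fraction num / 2**bits."""
    denom = 1 << bits            # 2**bits
    num = round(x * denom)       # nearest integer numerator
    return num, denom

# A few widths from the table above:
for bits in (1, 2, 4, 6, 10):
    num, denom = nearest_fraction(0.7, bits)
    print(f"{bits:2d} bits: {num}/{denom} = {num / denom}")
```

Each doubling of the denominator halves the worst-case gap between representable fractions, which is why the approximations creep steadily closer to 0.7 without ever landing on it.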

ad nauseam until you run out of bits. Each extra 3 to 4 bits buys you roughly one more decimal digit of accuracy, so by the time you reach the 24 significand bits of a single-precision float you've only got about 7 significant decimal digits.

(For anyone who wants the proof: n bits can distinguish 2^n values, so they carry n × log10(2) ≈ 0.3 × n decimal digits, i.e. roughly 3.3 bits per decimal digit. :-) )
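To put numbers on that "bits per digit" rule, here's a quick stdlib-only check of my own (not from the post); the struct round trip forces 0.7 through an IEEE 32-bit single, which carries a 24-bit significand:

```python
import math
import struct

# digits ~ bits * log10(2): roughly 3.3 bits per decimal digit.
for bits in (10, 20, 24, 53):
    print(f"{bits:2d} bits ~ {bits * math.log10(2):.1f} decimal digits")

# Round-trip 0.7 through a 32-bit float: the stored value differs
# from 0.7 once you look past about 7 significant digits.
as_single = struct.unpack('f', struct.pack('f', 0.7))[0]
print(as_single)
```

The 53-bit row is the significand of a double, which is why doubles give you about 15 to 16 decimal digits.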

Cheers,

Ash
