A float has only about 7 decimal digits of precision. So, for example, given
#include <stdio.h>

int main(void)
{
    /* Both constants round to the same 32-bit float value. */
    float f1 = 3445678336.0f;
    float f2 = 3445678337.0f;
    float result = f2 - f1;
    printf("%f\n", result);
    return 0;
}
The result is not 1, as you might expect, but 0 (the program prints 0.000000). At this magnitude, adjacent float values are 256 apart, so both 3445678336 and 3445678337 round to the same representable value, and their difference is exactly 0. For better precision, use a double, which gives you about 15 decimal digits of precision.
You should prefer double to float in most cases for any new development, using a float only when you need to interface with a library or a data source that expects or provides one.