I'm not sure I understood you correctly; however, it seems related to the classic problem people have with double numbers. Computer-stored double numbers are not (mathematical) real numbers: they are memory constrained and cannot achieve infinite precision (see, for instance, "Double-precision floating-point format" on Wikipedia). In your case, if you inspect (for instance with the debugger) the actual number in computer memory representing 10.37, then you may be surprised by the 'strange' 10.369999999999999.
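You can see the same effect without a debugger. Here is a minimal sketch (the question doesn't name a language, so C is an assumption here) that prints the very same double at two precisions:

```c
#include <stdio.h>

int main(void)
{
    /* 10.37 cannot be represented exactly in binary;
       d holds the nearest representable double instead */
    double d = 10.37;

    printf("%g\n", d);    /* default (6 digits) hides the error: 10.37 */
    printf("%.17g\n", d); /* full precision reveals it: 10.369999999999999 */

    return 0;
}
```

Printing with 17 significant digits is enough to show the exact value a double actually stores, which is why the debugger shows the longer 'strange' number while ordinary output looks like 10.37.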