Posted
3 Dec 1999

Comments and Discussions


The significant figures specified in the original SigFig routine get 'lost' when the value is converted to floating point. For example, if the result is 1.2 and we have specified 4 significant figures, we need extra work to display it correctly as 1.200. The FloatToText routine converts it to 1.2; %f in the format string will display something like 1.200000. Yes, we can specify a precision modifier in the printf format code, but this requires computing the order of magnitude of the number so that we can determine how many decimal places are needed to achieve the right number of significant figures. The following modified version of SigFig correctly produces the string version of the number. It is not pretty code, but it works!
CString SigFigStr(float X, int SigFigs)
{
    CString str;
    if(SigFigs < 1)
    {
        ASSERT(FALSE);
        return str;
    }
    int Sign;
    if(X < 0.0f)
        Sign = -1;
    else
        Sign = 1;
    X = fabsf(X);
    float Powers = powf(10.0f, floorf(log10f(X)) + 1.0f);
    float val = Sign * Round(X / Powers, SigFigs) * Powers;
    str.Format("%f", val);
    str.TrimLeft();
    str.TrimRight();
    int end = SigFigs;
    if(Sign < 0)
        end++;  // allow for the '-' sign
    if(str.Find('.') != -1)
        end++;  // allow for the decimal point
    str = str.Left(end);
    // Remove decimal point if nothing after it. "1234." becomes "1234"
    if(str.Right(1) == ".")
        str = str.Left(str.GetLength() - 1);
    return str;
}
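For comparison, the precision-modifier approach described above can be sketched without MFC. This is only an illustration assuming standard C++; the name sig_fig_str is ours, not from the article:

```cpp
#include <cstdio>
#include <cmath>
#include <string>
#include <algorithm>

// Format x with sig_figs significant figures by computing how many
// decimal places the magnitude of x requires, then letting %.*f pad
// with trailing zeros (so 1.2 at 4 figures prints as "1.200").
std::string sig_fig_str(double x, int sig_figs)
{
    if (sig_figs < 1) return "";
    int magnitude = (x == 0.0) ? 0
                  : (int)std::floor(std::log10(std::fabs(x)));
    int decimals = std::max(0, sig_figs - 1 - magnitude);
    char buf[64];
    std::snprintf(buf, sizeof buf, "%.*f", decimals, x);
    return buf;
}
```

For example, sig_fig_str(1.2, 4) yields "1.200" and sig_fig_str(123.456, 4) yields "123.5". Unlike the CString version above, this sketch does not trim extra integer digits when the value is wider than the requested number of figures.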





// I may post all this code as an update to the main topic.
// This is 3.4 times faster than using sqrtf(...)
#define FP_BITS(fp) (*(DWORD *)&(fp))
#define FP_ABS_BITS(fp) (FP_BITS(fp) & 0x7FFFFFFF)
#define FP_SIGN_BIT(fp) (FP_BITS(fp) & 0x80000000)
#define FP_ONE_BITS 0x3F800000

static unsigned int fast_sqrt_table[0x10000];  // declare table of square roots

typedef union FastSqrtUnion
{
    float f;
    unsigned int i;
} FastSqrtUnion;

void build_sqrt_table()
{
    unsigned int i;
    FastSqrtUnion s;
    for (i = 0; i <= 0x7FFF; i++)
    {
        // Build a float with the bit pattern i as mantissa
        // and an exponent of 0, stored as 127
        s.i = (i << 8) | (0x7F << 23);
        s.f = sqrtf(s.f);
        // Take the square root then strip the first 7 bits of
        // the mantissa into the table
        fast_sqrt_table[i + 0x8000] = (s.i & 0x7FFFFF);
        // Repeat the process, this time with an exponent of 1,
        // stored as 128
        s.i = (i << 8) | (0x80 << 23);
        s.f = sqrtf(s.f);
        fast_sqrt_table[i] = (s.i & 0x7FFFFF);
    }
}

inline float fastsqrt(float n)
{
    if (FP_BITS(n) == 0)
        return 0.0f;  // check for square root of 0
    FP_BITS(n) = fast_sqrt_table[(FP_BITS(n) >> 8) & 0xFFFF]
               | ((((FP_BITS(n) - FP_ONE_BITS) >> 1) + FP_ONE_BITS) & 0x7F800000);
    return n;
}

int main(void)
{
    build_sqrt_table();
    float a = fastsqrt(1.234f);
    return 0;
}
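The pointer-cast macros above break strict-aliasing rules on modern compilers. Purely as a sketch, the same table trick can be written with memcpy-based bit moves (the intent is identical behavior; the names are ours):

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>

// Same technique as fastsqrt above, but bits are moved with memcpy
// instead of pointer casts. Table layout is unchanged: index 0x8000+
// holds roots of mantissas with an even exponent, 0..0x7FFF with odd.
static std::uint32_t sqrt_table[0x10000];

void build_table()
{
    for (std::uint32_t i = 0; i <= 0x7FFF; ++i) {
        float f;
        std::uint32_t bits = (i << 8) | (0x7Fu << 23);  // exponent 0
        std::memcpy(&f, &bits, sizeof f);
        f = std::sqrt(f);
        std::memcpy(&bits, &f, sizeof bits);
        sqrt_table[i + 0x8000] = bits & 0x7FFFFF;

        bits = (i << 8) | (0x80u << 23);                // exponent 1
        std::memcpy(&f, &bits, sizeof f);
        f = std::sqrt(f);
        std::memcpy(&bits, &f, sizeof bits);
        sqrt_table[i] = bits & 0x7FFFFF;
    }
}

float fast_sqrt(float n)
{
    std::uint32_t bits;
    std::memcpy(&bits, &n, sizeof bits);
    if (bits == 0) return 0.0f;
    // Look up the mantissa's root, halve the biased exponent.
    bits = sqrt_table[(bits >> 8) & 0xFFFF]
         | ((((bits - 0x3F800000u) >> 1) + 0x3F800000u) & 0x7F800000u);
    std::memcpy(&n, &bits, sizeof n);
    return n;
}
```

Call build_table() once at startup, exactly as with build_sqrt_table() above.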





Another Square Root Algorithm:
/*******************************************************
** square_root - single precision square root
********************************************************
** input:   value to take the square root of
** output:  nothing
** calls:   frexp(), ldexp()
** returns: 0.0 if input value <= 0.0,
**          otherwise square root of input value
********************************************************
*/
float square_root(float xx)
{
    float f, x, y;
    int e;
    f = xx;
    if (f <= 0.0)
    {
        return 0.0;
    }
    /* split mantissa and exponent */
    x = frexp(f, &e); /* f = x * 2**e, 0.5 <= x < 1.0 */
    /* Q - is power of 2 odd ? */
    if (e & 1)
    {
        /* yes - double mantissa and decrement the power of 2 (exponent) */
        x = x + x;
        e -= 1;
    }
    /* compute exponent (power of 2) of the square root */
    e >>= 1;
    /* Q - is the mantissa between sqrt(2) and 2 ? */
    if (x > 1.41421356237)
    {
        /* yes - offset mantissa, compute series */
        x = x - 2.0;
        y =
        ((((( -9.8843065718E-4 * x
            + 7.9479950957E-4) * x
            - 3.5890535377E-3) * x
            + 1.1028809744E-2) * x
            - 4.4195203560E-2) * x
            + 3.5355338194E-1) * x
            + 1.41421356237E0;
    }
    /* no - Q - is the mantissa between sqrt(2)/2 and sqrt(2) ? */
    else if (x > 0.707106781187)
    {
        /* yes - offset mantissa, compute series */
        x = x - 1.0;
        y =
        ((((( 1.35199291026E-2 * x
            - 2.26657767832E-2) * x
            + 2.78720776889E-2) * x
            - 3.89582788321E-2) * x
            + 6.24811144548E-2) * x
            - 1.25001503933E-1) * x * x
            + 0.5 * x
            + 1.0;
    }
    else
    {
        /* no - mantissa is between 0.5 and sqrt(2)/2 */
        x = x - 0.5;
        y =
        ((((( -3.9495006054E-1 * x
            + 5.1743034569E-1) * x
            - 4.3214437330E-1) * x
            + 3.5310730460E-1) * x
            - 3.5354581892E-1) * x
            + 7.0710676017E-1) * x
            + 7.07106781187E-1;
    }
    /* calculate y = y * 2**e */
    y = ldexp(y, e);
    return y;
}
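To make the structure of the routine clearer, here is a stripped-down sketch of our own (not from the code above): it keeps the frexp()/ldexp() exponent halving, but replaces the fitted polynomial series with a few Newton iterations on the mantissa:

```cpp
#include <cmath>

// Same skeleton as square_root(): split with frexp, make the exponent
// even, halve it, take the root of the mantissa, rescale with ldexp.
// Newton's method (y <- (y + x/y)/2) stands in for the polynomials.
float sqrt_frexp(float xx)
{
    if (xx <= 0.0f)
        return 0.0f;
    int e;
    float x = std::frexp(xx, &e);  // xx = x * 2^e, 0.5 <= x < 1.0
    if (e & 1) {                   // make the exponent even
        x = x + x;
        e -= 1;
    }
    e >>= 1;                       // exponent of the square root
    float y = 0.7f + 0.5f * x;     // rough first guess on [0.5, 2)
    y = 0.5f * (y + x / y);        // three Newton refinements
    y = 0.5f * (y + x / y);
    y = 0.5f * (y + x / y);
    return std::ldexp(y, e);
}
```

The polynomial version above trades these divisions for multiply-adds, which is why it was attractive on hardware where division was slow.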





Your square_root() function is accurate, but is slower than sqrtf() itself (about 3.5 times slower):





Thanks for your response.
I'm using the square_root() function in an embedded x86 system written with the MSVC v1.52c compiler. The runtime didn't have a single precision sqrt() so this algorithm was faster for me than the runtime double precision version.
I like the table driven approach, and will investigate placing the table into ROM.
Steven J. Ackerman, Consultant
ACS, Sarasota, FL
http://www.acscontrol.com
sja@gte.net





How accurate is this? About how many bits off is it from the real answer?
I also wonder in real applications how much performance penalty one pays for having to load a 32k integer table (128k bytes?) into cache before this function can be called. Perhaps in very tight loops it's worth it.
Thanks for the bithacks. I'm always interested.





// This is about 2.12 times faster than using 1.0f / n
// r = 1/p
#define FP_INV(r,p)                             \
{                                               \
    int _i = 2 * 0x3F800000 - *(int *)&(p);     \
    r = *(float *)&_i;                          \
    r = r * (2.0f - (p) * r);                   \
}





Simon Hughes wrote:
int _i = 2 * 0x3F800000  *(int *)&(p); \
uhhh ?? what kind of magic is this ?





It's called speed voodoo
It's not as accurate, but it's damn fast.
Here are the results from a simple test:
float d = 22.0f, a = 1.0f / d, b;
FP_INV(b, d);
d = 5000.0f;
a = 1.0f / d;
FP_INV(b, d);
Regards,
Simon Hughes
Email: simon@hicrest.net
Web: www.hicrest.net





Simon, great job. You may wish to add a note to the source code comments that states it is an approximation function, and not necessarily a replacement for all uses of 1.0 / x.
Cheers,
Jason Doucette
http://www.jasondoucette.com/





// This is 15.5 times faster than RoundValue
__forceinline void FloatToInt(int *int_pointer, float f)
{
    __asm fld f
    __asm mov edx,int_pointer
    __asm FRNDINT
    __asm fistp dword ptr [edx];
}





Simon Hughes wrote:
// This is 15.5 times faster than RoundValue
Hmm, I didn't really get how to use it for round value,
could you give an example?
Ilia





inline int RoundValue(float param)
{
    int a;
    int *int_pointer = &a;
    __asm fld param
    __asm mov edx,int_pointer
    __asm FRNDINT
    __asm fistp dword ptr [edx];
    return a;
}

int nVal = RoundValue(123.456f);

float fVal = 123.456f;
int nVal = RoundValue(fVal);

inline void FloatToInt(int *int_pointer, const float &f)
{
    __asm fld f
    __asm mov edx,int_pointer
    __asm FRNDINT
    __asm fistp dword ptr [edx];
}

int nVal;
float fVal = 123.456f;
FloatToInt(&nVal, fVal);
Regards,
Simon Hughes
Email: simon@hicrest.net
Web: www.hicrest.net





Simon Hughes wrote:
__forceinline void FloatToInt(int *int_pointer, float f)
{
    __asm fld f
    __asm mov edx,int_pointer
    __asm FRNDINT
    __asm fistp dword ptr [edx];
}
Would it be safer to explicitly set the control word before the round and restoring the previous control word once we're done? Or is this redundant?
It seems that if I wanted "round to nearest" (or "round to even", since that is apparently the Intel implementation, judging by the results for half-integers) I should not assume the control word had not been modified by other code run previously.
It would seem the code is still vulnerable to rounding errors if some prior code had set the control word to something other than the default 037F (same as FINIT: round to nearest, all exceptions masked, 64-bit precision). This statement is based on MSDN Knowledge Base article Q126455 - specifically the line that says "Application programmers can avoid rounding errors in the second bug by not overriding the default rounding modes."
Since we are not sure if some prior code overrode the default, we need to explicitly set it to be sure.
By the way, thanks for the article. It has been a big help.
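The save/set/restore idea described above can be sketched portably with C99/C++11 &lt;cfenv&gt; (this is our illustration of the principle, not MSVC control-word code):

```cpp
#include <cfenv>
#include <cmath>

// Round to nearest regardless of what rounding mode the caller left
// active: save the mode, force round-to-nearest, round, restore.
inline double round_nearest_safe(double x)
{
    const int old_mode = std::fegetround();
    std::fesetround(FE_TONEAREST);
    const double r = std::nearbyint(x);  // honors the mode we just set
    std::fesetround(old_mode);
    return r;
}
```

On MSVC-era compilers the equivalent would be done with the _controlfp family, saving and restoring the rounding-control bits around the FRNDINT.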





Does anyone know how to round to an arbitrary number?
For example, rounding to the nearest 2.5, 0.5 or 1.33
Nearest 0.5 case: 2.2 rounds down to 2, 2.3 rounds up to 2.5, 2.7 rounds down to 2.5, 2.8 rounds up to 3
Thanks.





Has anybody experienced problems with the modf function? In most cases it works fine, but in some instances it does not work properly. For example, for 4570.0000 it would return 0.9999999 as the fraction and 4569.0000 as the integer part. Any ideas?
Thanks
Gene





The following function is a sample of how to round to the nearest .33 or .66
double GetNearest33( const double dRawNumber )
{
    double dBaseVal = floor(dRawNumber);
    double dBaseDiff = dRawNumber - dBaseVal;
    double dToNearest33 = (double)((int)((dBaseDiff + 0.165)*100.0)/33*33)/100.0;
    if (dToNearest33 == 0.99) dToNearest33 = 1.0;
    if (dToNearest33 == 0.66) dToNearest33 = 0.67;
    return dBaseVal + dToNearest33;
}





this rounds to the nearest x
the idea is to convert the number to fixed point, using the base that you want to round to, then convert back to floating point.
float GetNearest( float number, float fixedBase )
{
    if (fixedBase != 0 && number != 0)
    {
        float sign = number > 0 ? 1.0f : -1.0f;
        number *= sign;
        number /= fixedBase;
        int fixedPoint = (int)floor(number + .5f);
        number = fixedPoint * fixedBase;
        number *= sign;
    }
    return number;
}
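An equivalent, more compact sketch of the same fixed-point idea (ours, for illustration; note it rounds halves toward positive infinity rather than away from zero, so negative half-way values differ slightly from the sign-splitting version above):

```cpp
#include <cmath>

// Round x to the nearest multiple of base (e.g. base = 2.5, 0.5, 1.33):
// scale into "base units", round, scale back.
inline double round_to_nearest(double x, double base)
{
    if (base == 0.0) return x;
    return std::floor(x / base + 0.5) * base;
}
```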





Lately, I've been trying to come up with a good rounding function that will even handle double SNAFUs. If you want a good example of one, simply set a double var to 105, multiply it by 1.15, and then by 2. Like below:
//dddDDDdddDDDdddDDD
double dTmp = 105.0;
dTmp *= 1.15; // equals 120.75....or does it?!!!
dTmp *= 2; // equals 241.5....or does it?!!!
//dddDDDdddDDDdddDDD
Now, when you try to round dTmp with a rounding routine, it will round down to 241, instead of correctly to 242. I'm sure a lot of you know that this is caused by a limitation of representing fractions in a binary format *1.15 is the cause; I'm pretty sure that 0.15 cannot be represented exactly*.
My question is this: Has anyone seen a rounding alg that can handle one of these "off by .000000000000000001" doubles? I'm currently working on one right now, but my head is starting to get sore from banging it against the wall. I'm experimenting with binary arithmetic on this, but I'm not really sure if that is the way to go. Also, I think that IEEE 754-1984 may hold the answers, but I don't really want to spend $54 on it...
Dere
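One common workaround, sketched here under the assumption that the accumulated error is far smaller than 0.5, is to nudge the value by a tiny epsilon before rounding, so numbers sitting a hair below a .5 boundary still round up (the name round_tolerant is ours):

```cpp
#include <cmath>

// Half-up rounding that tolerates values which landed just below a
// .5 boundary due to binary representation error. eps must exceed
// the accumulated error yet stay far below 0.5.
inline double round_tolerant(double x, double eps = 1e-9)
{
    return std::floor(x + 0.5 + eps);
}
```

This is a heuristic, not a cure: it shifts the boundary slightly, so a value that genuinely lies within eps below x.5 will also round up.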





In Windows I don't know of any way with the current compilers. Under Unix you used to have quad precision as an option. The only bad thing about that was that all your numbers were quad, but none of the intrinsic functions (sin, cos, sqrt) were, so you had to write your own.





Actually, when it comes to rounding, it is common practice (and the IEEE default) that exact halves round to the nearest even number. This is so that there is not a statistical anomaly when rounding.
eg. 1.5 rounds to 2, 2.5 rounds to 2.
eg.
(1.5 + 2.5) / 2 = 2
rounded properly
(2 + 2) / 2 = 2
always rounding .5 up
(2 + 3) / 2 = 2.5
Of course, depending on the mix of values, you get closer to the real average, but you definitely don't get further away!





RoundValue and FloatToInt will not behave as you might expect for half-way values, because the frndint instruction does not round halves upward but to even for equidistant cases (ex. 0.5, 2.5, etc.).
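This tie-to-even behavior is easy to observe from C/C++: in the default rounding mode, std::rint() breaks ties the same way frndint does (a quick illustrative check, names ours):

```cpp
#include <cmath>

// In the default IEEE round-to-nearest mode, exact halves go to the
// nearest even integer, matching the frndint behavior noted above.
inline double round_ties_even(double x)
{
    return std::rint(x);
}
```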





We all know that floating point numbers aren't necessarily equal. Given the following code:
double x = 1.0;
double y = 1.0;
we cannot assume that this line will return TRUE:
if (x == y)...
I use this function for comparing raw doubles:
///
BOOL AlmostEqual(double n1, double n2, int decplaces)
{
    if (decplaces == 0)
    {
        return (floor(n1) == floor(n2));
    }
    double divider = 1.0;
    for (int i = 1; i <= decplaces; i++)
    {
        divider *= 10;
    }
    return (fabs(n1 - n2) < (1.0 / divider));
}
Simply pass in the two values you're comparing, and how many decimal places to compare, and you're all set.





I do something similar, but only take the difference between x and y and compare that to some minimal value (below which I consider it to be zero).
eg:
double nThreshold = 0.000001;
// Where nThreshold is the smallest difference I consider significant
bool IsEqual(double x, double y, double nThreshold)
{
    double nDifference = fabs(x - y);
    return ( nDifference < nThreshold ) ? true : false;
}





I agree, FloatsEqual will definitely not work appropriately.
I have this solution, based exactly on the definition of DBL_EPSILON. As I understand that definition, DBL_EPSILON is valid only for a window around 1.0.
/*static*/ bool Double::IsEqual(double a, double b, double epsilon /* = DBL_EPSILON */)
{
    const double& e = epsilon;
    if (b == 0.0) {
        // b has all bits zero
        if (a > 0.0 || a < 0.0)
            // a has significant bits, thus a and b are not equal
            return false;
        // a has no bit set, thus a and b are equal
        return true;
    }
    const double q = a/b;
    if (q < 0) {
        // q negative, thus a and b have different signs, thus a and b are not equal
        return false;
    }
    // q is positive, thus a and b must have the same sign
    // q must be 1.0 if a and b are equal
    // check whether q is in the epsilon window around 1.0
    if (q > 1. + e) return false;
    if (q < 1. - e) return false;
    return true;
}
Thomas Haase
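The approaches in this thread (an absolute threshold, and a relative window around q = 1.0) are often combined: an absolute tolerance handles values near zero, where a relative test degenerates, and a relative tolerance scales with the magnitudes elsewhere. A sketch of that combination (names and defaults are ours):

```cpp
#include <cmath>
#include <cfloat>

// Combined comparison: absolute tolerance near zero, relative
// tolerance (a scaled DBL_EPSILON window) for larger magnitudes.
inline bool almost_equal(double a, double b,
                         double rel = 8.0 * DBL_EPSILON,
                         double abs_tol = 1e-12)
{
    const double diff = std::fabs(a - b);
    if (diff <= abs_tol) return true;
    return diff <= rel * std::fmax(std::fabs(a), std::fabs(b));
}
```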







