All I found out was that the number will be rounded to the specified number of precision digits; there is no mention of which rounding method is used! I sent feedback saying I was dissatisfied with their answer.
Thank you for the excellent link; at least it references a spec, namely IEEE.
Personally, I do not think it really matters what the least significant digit is rounded to. But when you are converting a program and want to ensure that the conversion is accurate, and the program has many options, the easiest approach is a batch file that calls the two programs with each combination of input file and options, then uses a file-compare utility to compare the outputs for each test instance.
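A minimal sketch of that kind of harness in Python, in place of a batch file: run two programs on the same input with the same options, capture stdout to files, and compare byte-for-byte. The demo compares `cat` against itself just so the sketch is self-contained; in practice the two program names would be the original and converted executables.

```python
import filecmp
import os
import subprocess
import tempfile

def compare_outputs(prog_a, prog_b, opts, infile, workdir):
    """Run two programs on the same input/options and return True if
    their captured stdout is byte-for-byte identical."""
    paths = []
    for tag, exe in (("a", prog_a), ("b", prog_b)):
        out = os.path.join(workdir, f"out_{tag}.txt")
        with open(out, "w") as fh:
            subprocess.run([exe, *opts, infile], stdout=fh, check=True)
        paths.append(out)
    # shallow=False forces an actual content comparison, not just stat()
    return filecmp.cmp(paths[0], paths[1], shallow=False)

# Self-contained demo: a program compared against itself always matches.
with tempfile.TemporaryDirectory() as d:
    sample = os.path.join(d, "sample.bin")
    with open(sample, "wb") as fh:
        fh.write(bytes(range(256)))
    print(compare_outputs("cat", "cat", [], sample, d))
```

The driver loop over all option sets and input files would just call `compare_outputs` once per test instance and report OK/DIFF.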
In the case of ENT compiled under Visual Studio 2008, I can see truncation, round-half-to-even, and common rounding (half away from zero), all in one program. That makes it a bit difficult to "emulate" the original program.
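The difference between those three behaviors is easy to see with Python's `decimal` module, which lets you pick the rounding mode explicitly (this is just an illustration of the three modes, not ENT's code):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN, ROUND_HALF_UP

value = Decimal("2.345")
step = Decimal("0.01")  # round to two decimal places

# Truncation: digits past the requested precision are simply dropped.
print(value.quantize(step, rounding=ROUND_DOWN))       # 2.34
# Round half to even ("banker's rounding", the IEEE 754 default).
print(value.quantize(step, rounding=ROUND_HALF_EVEN))  # 2.34
# Common rounding: ties go away from zero.
print(value.quantize(step, rounding=ROUND_HALF_UP))    # 2.35
```

The same input can produce a different last digit depending on the mode, which is exactly why the converted program's output can differ from the original in that final digit.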
I have examined all of the test output (the diffs) for all of the test instances, and all of my results match the ENT results except possibly for that last digit. At this point I am calling this a valid conversion, and a huge learning experience. At least my version is about 5 times as fast as ENT on huge files: a 16 GB file (all DWORD values for a full 2^32 period) takes 6 minutes instead of 28.
It may well be a problem with the graphics library itself (not the header), which is rather old and was probably not designed with multi-threading in mind. I would suggest switching to one of the more current graphics libraries available in the Windows API.
One of these days I'm going to think of a really clever signature.