This short article shows how surprising the conversion of decimals to bytes can be.
Sometimes we just want an array of bytes instead of some other structure, for example to calculate an MD5 hash. If you are not careful, you can get different hashes for the same decimal numbers. Even worse: you can get different hashes for the same number, calculated in the same way, when one build is compiled with Visual Studio 2005 and the other with Visual Studio 2010.
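To make the hashing problem concrete, here is a small sketch. It serializes a decimal through `decimal.GetBits` (the `ToBytes` helper name is mine, not from the article) and shows that two equal decimals can produce different MD5 hashes:

```csharp
using System;
using System.Security.Cryptography;

class HashDemo
{
    // Hypothetical helper: serialize a decimal as the 16 bytes of its
    // four GetBits integers (low, mid, high mantissa words + flags).
    static byte[] ToBytes(decimal d)
    {
        int[] bits = decimal.GetBits(d);
        byte[] bytes = new byte[16];
        for (int i = 0; i < 4; i++)
            BitConverter.GetBytes(bits[i]).CopyTo(bytes, i * 4);
        return bytes;
    }

    static void Main()
    {
        using (MD5 md5 = MD5.Create())
        {
            // 1m and 1.00m compare as equal values...
            Console.WriteLine(1m == 1.00m); // True
            // ...but their bytes, and therefore their hashes, differ.
            string h1 = BitConverter.ToString(md5.ComputeHash(ToBytes(1m)));
            string h2 = BitConverter.ToString(md5.ComputeHash(ToBytes(1.00m)));
            Console.WriteLine(h1 == h2); // False
        }
    }
}
```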
Using the code
Here is a simple example showing that the byte representation of a decimal can differ for the same number.
using System;
using System.IO;

static void Main(string[] args)
{
    decimal one = 1m;
    PrintBytes(one + 0.0m);
    PrintBytes(1m + 0.0m);
}

public static void PrintBytes(decimal d)
{
    MemoryStream memoryStream = new MemoryStream();
    BinaryWriter binaryWriter = new BinaryWriter(memoryStream);
    // Write the decimal's 128-bit representation into the stream.
    binaryWriter.Write(d);
    byte[] decimalBytes = memoryStream.ToArray();
    Console.WriteLine(BitConverter.ToString(decimalBytes) + " (" + d + ")");
}
This code prints a different binary representation for the number one, and the output also differs between Visual Studio 2005 and Visual Studio 2010.
Visual Studio 2005:
Visual Studio 2010 (no matter which .NET Framework version):
As you can see, the decimal number one is represented in different ways depending on how the calculation was made and which compiler was used. You can probably find more inconsistencies, because the same decimal value can have several internal representations.
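One way to see the different representations directly is `decimal.GetBits`, which exposes the four 32-bit integers behind a decimal: the 96-bit integer mantissa plus a flags word holding the sign and the scale (the number of digits after the decimal point). A minimal sketch:

```csharp
using System;

class BitsDemo
{
    static void Main()
    {
        PrintBits(1m);           // mantissa 1, scale 0
        PrintBits(1.00m);        // mantissa 100, scale 2 - same value, different bits
        PrintBits(1.0m + 1.00m); // addition keeps the larger scale: 2.00m
    }

    static void PrintBits(decimal d)
    {
        // bits[0..2] are the low, mid, high words of the mantissa;
        // bits[3] holds the sign and, in bits 16-23, the scale.
        int[] bits = decimal.GetBits(d);
        int scale = (bits[3] >> 16) & 0xFF;
        Console.WriteLine("{0} -> mantissa low word {1}, scale {2}", d, bits[0], scale);
    }
}
```

So `1m` is stored as mantissa 1 with scale 0, while `1.00m` is stored as mantissa 100 with scale 2: equal values, different bytes.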
1m is not exactly the same as 1.00m, and so on. You can probably find some rules, for example 1.0m + 1.00m = 2.00m (one trailing zero + two trailing zeros = two trailing zeros). Maybe you can find some normalization of decimals. I found a very simple one: d = d / 1.0000000000000000000000000m. Can you find a better one? Just leave a comment.
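The division trick works because decimal division recomputes the result's scale, which strips trailing zeros. A small sketch (the `Normalize` helper name is mine):

```csharp
using System;

class NormalizeDemo
{
    // Hypothetical helper: dividing by 1 written with maximum scale
    // forces the runtime to recompute the scale, dropping trailing zeros.
    static decimal Normalize(decimal d)
    {
        return d / 1.0000000000000000000000000m;
    }

    static void Main()
    {
        decimal a = 1m;
        decimal b = 1.00m;
        Console.WriteLine(a == b);                      // True: equal values
        Console.WriteLine(a.ToString() == b.ToString()); // False: "1" vs "1.00"
        // After normalization both round-trip to the same string.
        Console.WriteLine(Normalize(a).ToString());     // 1
        Console.WriteLine(Normalize(b).ToString());     // 1
    }
}
```

After normalizing, equal decimals share one representation, so serializing them (for hashing, for example) gives consistent bytes.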