Recently I stumbled across an issue in a legacy VB.NET app which didn't appear to make any sense. The issue involved determining the precision of a `Decimal`, which was giving different results for exactly the same value.

First of all I wrote a quick test to attempt to replicate the problem, which appeared to happen for 0.01:

```csharp
private decimal expectedDecimalPlaces = 2;

[TestMethod]
public void Test2DecimalPoint_WithDecimal_ExpectSuccess()
{
    decimal i = 0.01m;
    int actual = Program.Precision(i);
    Assert.AreEqual(actual, expectedDecimalPlaces);
}
```

This passed. I then noticed that a particular method call's signature expected a `Decimal` but was instead being supplied a `Float` (yes, Option Strict was off [1]), meaning the `Float` was being implicitly converted. I quickly wrote a test incorporating the conversion:

```csharp
private decimal expectedDecimalPlaces = 2;

[TestMethod]
public void Test2DecimalPoint_CastFromFloat_ExpectSuccess()
{
    float i = 0.01f;
    int actual = Program.Precision((decimal)i);
    Assert.AreEqual(actual, expectedDecimalPlaces);
}
```

This causes the issue: the test fails because `Precision()` thinks 0.01 is to 3 decimal places!
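As a side note, the "same" number really can arrive in different clothes after passing through a binary float. A quick illustration using Python's `decimal.Decimal` (not the .NET type, just an analogous exact-decimal type): a binary float cannot store 0.01 exactly, so converting the float directly, rather than going through its rounded string form, exposes a slightly different number than the literal suggests.

```python
from decimal import Decimal

# Decimal(0.01) captures the exact binary value the float holds,
# while Decimal("0.01") is the exact decimal 1/100.
print(Decimal(0.01) == Decimal("0.01"))  # False: the float is only close to 0.01
print(Decimal(0.01))                     # a long string of digits near 0.01
```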

So what's going on here? How can a conversion affect the result of `Precision()`? Looking at the implementation I could see it was relying on the individual bits the `Decimal` is made up from, using `Decimal.GetBits()` to access them:

```csharp
public static int Precision(Decimal number)
{
    // Element 3 of GetBits() holds the flags word: the sign lives
    // in bit 31 and the scale (exponent) in bits 16-23.
    int flags = Decimal.GetBits(number)[3];
    // Byte 2 of the flags word is the scale byte.
    return BitConverter.GetBytes(flags)[2];
}
```
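The bit-twiddling above can be sketched in any language. Here is a small Python illustration of extracting the scale byte; the values 131072 and 196608 are the flags integers observed in the passing and failing tests respectively, and the layout (scale in bits 16-23) follows the documented `Decimal.GetBits` format:

```python
def scale_from_flags(flags: int) -> int:
    # The scale (the negative power of ten) sits in bits 16-23
    # of the flags element returned by Decimal.GetBits().
    return (flags >> 16) & 0xFF

print(scale_from_flags(131072))  # 2 - flags from 0.01m created directly
print(scale_from_flags(196608))  # 3 - flags from 0.01f cast to decimal
```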

The result of `Decimal.GetBits()` is a 4-element array, of which the first 3 elements represent the bits that make up the value of the `Decimal`. However, this method relies only on the fourth element, which holds the sign and the exponent. In the first (passing) test the decimal had a value of 1 with a flags element of 131072 (exponent 2); the failed test had a value of 10 and flags of 196608 (exponent 3).

When converting to binary we see the difference more clearly. I've named them bitsSingle for the failed test and bitsDecimal for the passing test:

```
bitsSingle    00000000 00000011 00000000 00000000
              |\-----/ \------/ \---------------/
              |   |       |             |
     sign <---+ unused exponent      unused
              |   |       |             |
              |/-----\ /------\ /---------------\
bitsDecimal   00000000 00000010 00000000 00000000
```

NOTE: the exponent represents multiplication by a negative power of 10.

As you can see the exponent for bitsSingle is 3 (00000011) whereas the exponent for bitsDecimal is 2 (00000010), which represent negative powers of 10.

Looking back at the original numbers we can see how these both accurately represent 0.01:

bitsSingle has a value of 10, with an exponent of -3: 10 × 10⁻³ = 0.01

bitsDecimal has a value of 1, with an exponent of -2: 1 × 10⁻² = 0.01
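To sanity-check that arithmetic, here is a short Python sketch (plain exact fractions, nothing .NET-specific) showing that the two coefficient/exponent pairs denote exactly the same number:

```python
from fractions import Fraction

# The two internal representations observed above.
bits_single = Fraction(10, 10**3)   # coefficient 10, exponent -3
bits_decimal = Fraction(1, 10**2)   # coefficient 1,  exponent -2

print(bits_single == bits_decimal)  # True: both are exactly 1/100
```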

As you can see, `Decimal` can represent the same value even though the underlying data differs. `Precision()` relies only on the exponent and **ignores** the value, meaning it's not taking the full picture into account.

But why does the conversion store this number differently than when it is instantiated directly? It just so happens that creating a new `Decimal` (which uses the `Decimal` constructor) uses slightly different logic than the cast does. So even though the number is correct, the underlying data is slightly different.
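One way to sidestep the trap, sketched here in Python with its `decimal.Decimal` rather than the .NET type, is to normalize the value before inspecting its exponent, so that equal values always give the same answer no matter how they were constructed:

```python
from decimal import Decimal

def precision(d: Decimal) -> int:
    # normalize() strips trailing zeros, collapsing every
    # representation of a value onto one canonical exponent.
    exponent = d.normalize().as_tuple().exponent
    return max(0, -exponent)

# Same value, two different internal representations:
print(precision(Decimal("0.01")))         # 2
print(precision(Decimal(10).scaleb(-3)))  # 2 - coefficient 10, exponent -3
```

The equivalent fix in .NET would be to normalize the `Decimal` (or compare values rather than bits) before reading the scale out of `GetBits()`.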

This brings us to the point of the article. The big picture here is to remember that you should never rely on implementation details; rely only on what can be accessed through defined interfaces, whether that's a web service, reflection on a class, or peeking into the individual bits of a datatype. Implementation details can not only change, but in the world of software are expected to.

If you want to play around with the examples above I've uploaded them to GitHub.

[1] I know it's not okay, and there isn't a single good reason for it, but as usual with a legacy app we simply don't have the time / money to explicitly convert every single type in a 20,000+ LOC project.