Reliance on Implementation Details

25 Jul 2014

Recently, I stumbled across an issue in a legacy VB.NET app that didn't appear to make any sense: determining the precision of a Decimal was giving different results for exactly the same value.

First of all, I wrote a quick test to attempt to replicate the problem, which appeared to happen for 0.01:

private int expectedDecimalPlaces = 2;

public void Test2DecimalPoint_WithDecimal_ExpectSuccess()
{
    decimal i = 0.01m;
    int actual = Program.Precision(i);
    Assert.AreEqual(expectedDecimalPlaces, actual);
}

This passed. Then I noticed that in a particular method call, the signature expected a Decimal but was instead being supplied a Float (yes, Option Strict was off [1]), meaning the Float was being implicitly converted. I quickly wrote a test incorporating the conversion:

private int expectedDecimalPlaces = 2;

public void Test2DecimalPoint_CastFromFloat_ExpectSuccess()
{
    float i = 0.01f;
    int actual = Program.Precision((decimal)i);
    Assert.AreEqual(expectedDecimalPlaces, actual);
}

This reproduces the issue: the test fails, because Precision() seems to think 0.01 is to 3 decimal places!

So what's going on here? How can a conversion affect the result of Precision()? Looking at the implementation, I could see it was relying on the individual bits that make up the Decimal, using Decimal.GetBits() to access them:

public static int Precision(Decimal number)
{
    int bits = Decimal.GetBits(number)[3];       // flags element: scale and sign
    byte scale = BitConverter.GetBytes(bits)[2]; // byte 2 holds the scale (exponent)
    return scale;
}

The result of Decimal.GetBits() is a 4-element array, of which the first 3 elements represent the 96-bit integer value of the Decimal. The fourth element holds the flags: the scale (the exponent) in bits 16-23 and the sign in bit 31. This method relies only on that fourth element and ignores the value entirely. In the passing test, the decimal's value was 1 with a flags element of 131072 (scale 2); in the failing test, the value was 10 with a flags element of 196608 (scale 3).
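To see this concretely, here is a small standalone sketch (my own, not part of the article's test project) that compares the flags element for the two ways of producing 0.01 described above:

```csharp
using System;

class GetBitsDemo
{
    static void Main()
    {
        decimal direct = 0.01m;              // created from a decimal literal
        decimal converted = (decimal)0.01f;  // converted from a float

        // The two decimals compare equal...
        Console.WriteLine(direct == converted);             // True

        // ...but their flags elements differ (131072 vs 196608);
        // bits 16-23 of the flags hold the scale.
        int directFlags = decimal.GetBits(direct)[3];
        int convertedFlags = decimal.GetBits(converted)[3];

        Console.WriteLine((directFlags >> 16) & 0xFF);      // 2
        Console.WriteLine((convertedFlags >> 16) & 0xFF);   // 3
    }
}
```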

Converting the flags to binary shows the difference more clearly. I've named them bitsSingle for the failed test and bitsDecimal for the passing test:

bitsSingle    00000000 00000011 00000000 00000000
              |\-----/ \------/ \---------------/
              |   |        |            |
       sign <-+ unused  exponent      unused
              |   |        |            |
              |/-----\ /------\ /---------------\
bitsDecimal   00000000 00000010 00000000 00000000

NOTE: the exponent represents multiplication by a negative power of 10

As you can see, the exponent for bitsSingle is 3 (00000011), whereas the exponent for bitsDecimal is 2 (00000010); each represents a negative power of 10.

Looking back at the original numbers, we can see how these both accurately represent 0.01:

bitsSingle has a value of 10, with an exponent of -3: 10 × 10<sup>-3</sup> = 0.01
bitsDecimal has a value of 1, with an exponent of -2: 1 × 10<sup>-2</sup> = 0.01
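You can verify this with the Decimal(Int32, Int32, Int32, Boolean, Byte) constructor, which builds a decimal directly from a 96-bit value and a scale (the variable names below are mine):

```csharp
using System;

class ScaleDemo
{
    static void Main()
    {
        // Value 10 with scale 3 (like bitsSingle),
        // value 1 with scale 2 (like bitsDecimal).
        decimal fromSingleBits  = new decimal(10, 0, 0, false, 3);
        decimal fromDecimalBits = new decimal(1, 0, 0, false, 2);

        Console.WriteLine(fromSingleBits);                    // 0.010
        Console.WriteLine(fromDecimalBits);                   // 0.01
        Console.WriteLine(fromSingleBits == fromDecimalBits); // True
    }
}
```

Note that ToString() preserves the trailing zero of the scale-3 value, while the == operator compares numeric values and so reports them as equal.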

As you can see, Decimal can represent the same value even though the underlying data differs. Precision() is only relying on the exponent and ignoring the value, meaning it's not taking into account the full picture.

But why does the conversion store this number differently than direct instantiation? It just so happens that creating a new Decimal from a literal (which uses the Decimal constructor) uses slightly different logic than the cast from a Float. So even though the number is correct, the underlying data is slightly different.
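One way to make the method robust (a sketch of my own, not the article's fix) is to derive the precision from the numeric value itself rather than from the raw scale bits:

```csharp
public static int Precision(decimal number)
{
    number = Math.Abs(number);
    int places = 0;

    // Multiply by 10 until no fractional part remains; the number of
    // multiplications is the number of decimal places.
    while (number != Math.Truncate(number))
    {
        number *= 10;
        places++;
    }

    return places;
}
```

This returns 2 for both 0.01m and (decimal)0.01f, because Math.Truncate and the != operator compare numeric values, not the underlying representation.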

This brings us to the point of the article. The big picture here is that you should never rely on implementation details, only on what can be accessed through defined interfaces, whether that means calling a web service, using reflection on a class, or peeking into the individual bits of a data type. Implementation details can not only change; in the world of software, they are expected to.

If you want to play around with the examples above, I've uploaded them to GitHub.

[1] I know it's not okay, and there's no single excuse for it; however, as usual with a legacy app, we simply don't have the time or money to explicitly convert every single type in a 20,000+ LOC project.


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


About the Author

Matthew Edmondson
Software Developer
United Kingdom United Kingdom
Selected articles are published on codeproject. For all of my content, including how to contact me please visit my blog.
