The Lounge is rated Safe For Work. If you're about to post something inappropriate for a shared office environment, then don't post it. No ads, no abuse, and no programming questions. Trolling (political, climate, religious, or whatever) will result in your account being removed.
If the divisor mathematically "should have been" zero but isn't, because floats are not 'real' numbers of unlimited precision, then the underlying algorithm should indeed be closely inspected and investigated.
If the avoidance of a divide-by-zero exception is merely an artifact of limited precision, then I am close to calling that a faulty implementation of the algorithm.
If, on the other hand, the divisor is - from a mathematical point of view - really non-zero, but has been zeroed by some software or hardware rounding rule, then that rule should be reconsidered. It should never be tolerated that a small, non-zero divisor causes a divide-by-zero error. That is simply incorrect! If the divisor is "valid", yet so small that it should be treated as equivalent to zero, then it must be handled as a value (like zero), not by throwing an exception. If it can be handled in float format, it should be handled similarly in decimal format!
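That checking can live in the application code. A minimal sketch in Python (the thread concerns C#'s System.Decimal; Python's decimal module stands in here, and the EPSILON threshold is an assumed, application-chosen value):

```python
from decimal import Decimal

# Assumed application-level threshold below which a divisor counts as zero.
EPSILON = Decimal("1e-20")

def safe_divide(num: Decimal, div: Decimal) -> Decimal:
    # Treat a tiny divisor as zero explicitly, instead of letting
    # representation limits decide when the division blows up.
    if abs(div) < EPSILON:
        raise ZeroDivisionError("divisor is zero, or below the application epsilon")
    return num / div
```

Whether to raise, clamp, or substitute a default for a tiny divisor is an application decision; the point is that the decision is made deliberately rather than left to the representation.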
If gravity can be zero, you must be prepared for it. If it is so small that it rounds off to zero, you must be prepared for it.
Note that IEEE 754 double precision has 52 mantissa bits - 53 if you count the hidden one. That is a precision of about 16 decimal digits. If you need even more, you have to go to 128 bit float, with about 33 decimal digits of precision.
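Python exposes these figures for its native binary64 float, if you want to check them:

```python
import sys

# IEEE 754 binary64: 53 significand bits (52 stored + 1 hidden).
print(sys.float_info.mant_dig)  # 53
# Decimal digits guaranteed to survive a round trip; ~15-16 in practice.
print(sys.float_info.dig)       # 15
# Machine epsilon: the gap between 1.0 and the next larger double.
print(sys.float_info.epsilon)   # 2.220446049250313e-16
```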
If it is so small that it doesn't round off to zero in 128-bit float representation, but does round to zero in decimal representation, that is rather accidental. The next weaker level of gravitation might round off to zero even in 128-bit FP, causing a similar divide by zero. I certainly think that any algorithm implementation where the divisor might be either "true" zero, or so low that it is rounded off to zero (whether in decimal digit #34 for 128-bit FP or in decimal digit #30 for decimal representation), but which nevertheless tries to do the division without checking, is faulty. The decimal format as such is not to blame.
In clinical chemistry it is common for the concentration of an analyte to be reported as "< 0.10 mg/dl" when that is the sensitivity of the test. Sensitivity of a test is defined by the random analytical variation of the test in the absence of ANY analyte, i.e. the "noise" in the measurement. cf. Gaussian standard error of estimate.
In this example, a "result" of 0.005 mg/dl is analytically indistinguishable from 0.00 mg/dl.
The same issues of analytical precision apply to the extremes of floating point calculation.
The problem in this case isn't avoidance of a divide by zero; it's that the value is so close to zero that System.Decimal can't represent it and truncates it to zero. The wider dynamic range of System.Double can handle it.
If your application really needs to handle (application) epsilons as non-zero, then you certainly have to choose a data format that allows you to handle epsilons. Maybe decimal isn't for you.
Note that not even double is guaranteed to handle your application's epsilons. The Double struct defines Double.Epsilon, which is the smallest positive value that can be represented as a double. If your application needs to handle smaller values, like Double.Epsilon/2 or smaller, the value will end up as zero. Using double does not avoid the problem entirely; it just moves the borderline somewhat.
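The same borderline exists for Python's binary64 float, where the smallest positive (subnormal) value, the analogue of Double.Epsilon, is 5e-324:

```python
import math

tiny = math.ulp(0.0)      # smallest positive subnormal double: 5e-324
print(tiny > 0.0)         # True
print(tiny / 2 == 0.0)    # True: halving it underflows to exactly zero
```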
If you make a cost calculation suggesting a unit sales price of USD 2.5000000001, and the customer pays two and a half dollars, then your accounting application probably does not want to handle a balance due of USD 0.0000000001, but to set it to zero. Now this certainly can be represented by a decimal, but your accounting system should round the sales price to the number of decimal places it finds relevant - the decimal struct provides a Round method for that.
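In Python's decimal module the equivalent of that Round call is quantize; a sketch assuming a two-decimal (cent) billing precision:

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("2.5000000001")
# Round to the precision the accounting system finds relevant: cents.
billed = price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(billed)                     # 2.50
print(billed - Decimal("2.50"))  # 0.00 - no residual balance due
```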
Your application should be conscious about epsilons: If they need to be treated as non-zero, you use float / double (crossing fingers that the application epsilon is no smaller than Double.Epsilon). If (application) epsilons have no relevance, you should zero them, e.g. using a Round function.
As the decimal representation does not provide any bit pattern for infinities, the divide operator has no way of returning any 'decimal.PositiveInfinity' - it doesn't exist. Similarly, there is no representation of Not a Number (NaN). Some values can be represented in float/double but not in decimal; others can be represented in decimal but not in float/double.
If you really want / need to handle infinities in decimal, you can define your own AugmentedDecimal struct that is a decimal plus the required flag bits for flagging the value as, say, positive / negative infinities or NaN. Then, a DivideByZeroException handler may set these flag bits appropriately. Of course you then have to define the operators for AugmentedDecimal to check the flag bits as appropriate before performing the operation.
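One possible shape for such a wrapper, sketched in Python (AugmentedDecimal is the poster's hypothetical type; the flag handling is deliberately simplified, and only division is shown):

```python
from dataclasses import dataclass
from decimal import Decimal
from enum import Enum, auto

class Special(Enum):
    NONE = auto()
    POS_INF = auto()
    NEG_INF = auto()
    NAN = auto()

@dataclass
class AugmentedDecimal:
    value: Decimal = Decimal(0)
    special: Special = Special.NONE

    def __truediv__(self, other: "AugmentedDecimal") -> "AugmentedDecimal":
        # Check the flag bits before performing the operation.
        if self.special is not Special.NONE or other.special is not Special.NONE:
            return AugmentedDecimal(special=Special.NAN)  # simplified rule
        if other.value == 0:
            if self.value == 0:
                return AugmentedDecimal(special=Special.NAN)  # 0/0
            sign = Special.POS_INF if self.value > 0 else Special.NEG_INF
            return AugmentedDecimal(special=sign)
        return AugmentedDecimal(self.value / other.value)
```

A real implementation would define the remaining operators (and comparison semantics) the same way, which is exactly the extra software cost being discussed.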
This is built into the float hardware. decimal is essentially done in software, even without these augmentations. So it is slow anyway. But if that is what you need, that is the price you have to pay.
Then: My guess is that the fraction of float/double-handling software that handles infinities properly is well under a percent. You don't get an exception here, but when you try to use the value as a normal number (because no exception has signaled the exceptional value to you), your program may do funny things. Tracing back to the division that caused the infinity may be non-trivial.
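In C# and C, a float division by zero silently yields such an infinity; Python raises an exception for float division by zero, so this sketch starts from float("inf") just to show how the value then propagates unannounced:

```python
import math

inf = float("inf")
# No exception anywhere below; the special value just flows through.
print(inf * 2 + 1)            # inf
print(math.isnan(inf - inf))  # True: the "funny things" begin
```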
If you do write code that properly handles infinities throughout (i.e. you are within that fraction of a percent), then I am sure you would have no problems writing a proper handler for the decimal DivideByZeroException as well.
One of the issues with floating point (binary / decimal / hex) is "wobbling precision". For example, the number 1.234 carries fewer actual bits of precision than the number 8.765, even though both have the same number of digits. This complicates error analysis.
Binary floating-point has the least "wobbling precision", and therefore - all other things being equal - it is preferable for use in scientific calculations.
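The wobble can be put in numbers. For a 4-significant-digit decimal format, half an ulp is 0.0005 for both example values, so the relative error varies by almost a factor of 10 across a decade; for binary64, math.ulp shows the variation stays within a factor of 2:

```python
import math

for x in (1.234, 8.765):
    decimal_rel = 0.0005 / x      # half-ulp of a 4-digit decimal format
    binary_rel = math.ulp(x) / x  # ulp of binary64 at the same value
    print(f"{x}: decimal {decimal_rel:.2e}, binary {binary_rel:.2e}")
```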
If you need precision greater than that provided by 'double', you may either use packages like MPFR (which provide IEEE 754-style types of greater precision) or construct a 'double-double' or 'quad-double' type out of the sum of 2 or 4 'double's (aka expansions). Note that if you go the second route, you will have to perform a careful error analysis on your code, because some of the lemmas applicable to IEEE 754-style floating point types are not applicable to expansions.
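The building block of those double-double expansions is Knuth's TwoSum, which recovers the rounding error of one floating-point addition exactly; a minimal Python sketch:

```python
def two_sum(a: float, b: float) -> tuple[float, float]:
    # Knuth's TwoSum: s is the rounded sum, e the exact rounding error,
    # so that a + b == s + e holds exactly in real arithmetic.
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    e = (a - a_virtual) + (b - b_virtual)
    return s, e

# The low-order part keeps what the naive sum throws away:
print(two_sum(1.0, 1e-20))  # (1.0, 1e-20)
```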
Accurate scientific computing requires careful attention to details like these!
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
My guess is it's one of two things: Either the application calls for the higher precision (almost 29 decimal digits for decimal, versus fewer than 17 decimal digits for double), and decimal's smaller numeric range is sufficient. This level of precision is rarely needed, except in some extreme scientific calculations.
Or, the application calls for full control of roundoff errors, with exact value representation of e.g. decimal fractions, rather than a binary approximation. This is a common requirement in e.g. accounting systems. (An alternative to decimal is of course to use a long value to represent an integer number of millicents or whatever the required precision of the transactions. Using decimal will usually require fewer application code lines, as you can deal directly with the unscaled amounts.)
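Both options can be sketched in Python: the decimal module stores decimal fractions exactly where binary floats only approximate them, and a plain integer count of millicents does the same job with a little more bookkeeping (the prices and tax rate are just illustrations):

```python
from decimal import Decimal

# Binary double: 0.1 and 0.2 are approximations, and it shows.
print(0.1 + 0.2 == 0.3)                                   # False
# Decimal: the decimal fractions are represented exactly.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# The scaled-integer alternative: count millicents (1/1000 of a cent).
MILLICENTS_PER_DOLLAR = 100 * 1000
price = 25 * MILLICENTS_PER_DOLLAR  # USD 25.00 as an exact count
tax = price * 8 // 100              # 8% tax, still an exact integer
print(price + tax)                  # 2700000 millicents = USD 27.00
```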
When I explain programming to beginners (and also to some at intermediate level needing some correction...), I know from experience that for some, understanding the principal difference between int and float is difficult. So I rather speak of 'counts' and 'measurements'. The price of an apple is not a measurement; it is a count of the pennies you have to pay. So we use an int - int is a synonym for a count. If you weigh that apple, you do not count the grams, or milligrams, or micrograms - you will never get a measurement exactly right. So we use a float - float is a synonym for a measurement value. Even if you claim that the weight of the apple is exactly 123.456 grams, it nevertheless is a measurement, calling for a float, not a count of milligrams.
Making beginners understand the difference between a count and a measurement is far easier than explaining int and float. In that framework, I really would like to consider decimal a "counting" value, with the extension that it lets us count not only euros or dollars, but even cents and tenths of a cent. With BCD, this was very explicit at the representation level as well; with decimal, it is blurred. The C# documentation classifies it as a 'floating-point numeric type'. Yet decimal functions like a BCD value, rather than like the measurement types float / double. By implementation, too, it is a scaled integer (i.e. counting) value.
I think using decimal for measurement values is an abuse. If you really need more than the 17 digits of a double, then you should go for one of the 'infinite precision' libraries provided for almost any language. (Writing one is not that difficult - I made one myself in my student days.) If you measure the dimension of the universe down to the millimeter, you are still making a measurement, not a count.
And if the range of decimal is not sufficient - e.g. if you want to count all the atoms in the universe - then you should go for a software 'infinite integer range' library. (I never saw one, but writing one is almost down to the trivial level, if you really have the need! The biggest problem is formatting the number as text!)
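For what it's worth, some languages ship such a library built in; Python's int, for example, already has unlimited range (the 1e80 atom count is just a common rough estimate, used here as an illustration):

```python
# Python's int is arbitrary precision, so counts far beyond decimal's
# ~7.9e28 ceiling stay exact.
atoms = 10 ** 80
big = atoms * atoms + 1  # exact: no overflow, no rounding
print(len(str(big)))     # 161 digits; formatting as text comes for free
```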
The problem is that lots of people consider a weight of 454 grams an integer numeric value, rather than a real one, especially if they use a digital scale. If you ask "the man in the street" to explain "integer" and "real" numbers, you may be surprised by the (lack of) answers you get! (There is nothing 'un-real' about, say, 'five'.) You cannot expect beginners in programming to have a solid background in mathematics.
(Almost) everyone has little trouble distinguishing between counting and measuring. Well, that is my experience. YMMV.
If I saw this in my local food store, I might give it a try.
Unless it turns out to be sweet like a fruit jam. A smoked, meaty bread spread on dark, whole-grain bread could be very tasty, assuming it would be something like bacon put through your meat grinder.
Sometimes, you use a tiny amount of sugar in meats to round off the taste, without making it 'sweet'. If this is 50% bacon and 50% sugar, then it is not for me!
A couple of weeks ago I tried unsuccessfully to get slots for the both of us to get vaccinated, but the website kept crashing and the phone number was busy all day. In the end we got a slot for just one of us at a location some 60 miles from our home! Yuck!
This morning the second batch of bookings became available, a mere 8 miles from home. We got slots for the both of us for Friday. And the website only crashed some 10 times before we got our bookings! The government in Florida can be proud: their IT guys are getting better! Still, we are both in a very good mood now.