I need a data type that accepts digits after the decimal point,
so I have to choose between float and decimal.
But is it true that working with decimal is about 20 times slower than working with float?
That is my question.
Thank you!
Let's try that again. We have multiple ways of displaying a number.
The Decimal[^] value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations that require large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding. For example, the following code produces a result of 0.9999999999999999999999999999 instead of 1.
Dim dividend As Decimal = Decimal.One
Dim divisor As Decimal = 3
' The following displays 0.9999999999999999999999999999 to the console
Console.WriteLine(dividend / divisor * divisor)
The Double[^] value type represents a double-precision 64-bit number with values ranging from negative 1.79769313486232e308 to positive 1.79769313486232e308, as well as positive or negative zero, PositiveInfinity, NegativeInfinity, and not a number (NaN). It is intended to represent values that are extremely large (such as distances between planets or galaxies) or extremely small (the molecular mass of a substance in kilograms) and that often are imprecise (such as the distance from Earth to another solar system). The Double type complies with the IEC 60559:1989 (IEEE 754) standard for binary floating-point arithmetic.
That's your float in .NET.
Floating-point operations are faster than decimal operations, and, if all goes well, integer operations are faster still. You keep focusing on the formatted value, the $2.53 in your wallet. With some creativity you could store that as 253 cents in your database. It is impossible to work with half a cent, since it does not exist. No more rounding errors, and it is the most efficient thing to work with: a bigint (Int64) storing cents.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
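A minimal sketch of the cents idea above (the price value and the formatting are just illustrative assumptions):

```vbnet
' Store $2.53 as 253 cents in an Int64 (Long in VB.NET).
Dim priceInCents As Long = 253

' Integer arithmetic stays exact: sum a thousand of these prices.
Dim total As Long = priceInCents * 1000 ' 253000 cents, no rounding error

' Convert back to a formatted dollar amount only when displaying.
Console.WriteLine("$" & (total \ 100) & "." & (total Mod 100).ToString("00"))
' Displays $2530.00
```

All intermediate arithmetic happens on exact integers; the decimal point only appears at display time.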
No you don't, you need to use some creative thinking. Dave K's response above is a good illustration of why you should never use float. As for allowing the user to enter something like 450.37: that is just a string of text. You can quite easily split it into two strings and convert each one to an integer. You could then multiply the first number by 100 and add the second, to use the smallest unit type, or use them separately as dollars and cents, or rupees and paisa, whatever.
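A sketch of that splitting idea (assuming the input has already been validated to contain exactly one decimal point and two fractional digits):

```vbnet
Dim input As String = "450.37"
Dim parts() As String = input.Split("."c)

' Convert each half of the text to an integer.
Dim dollars As Integer = Integer.Parse(parts(0))
Dim cents As Integer = Integer.Parse(parts(1))

' Combine into the smallest unit.
Dim totalCents As Integer = dollars * 100 + cents
Console.WriteLine(totalCents) ' Displays 45037
```

Real input would of course need validation (negative amounts, missing fraction, single-digit cents), but the principle is the same.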
' Time 100 million Single multiplications.
Dim fsw As New Stopwatch
fsw.Start()
Dim f As Single = 123456792.0F
For i = 1 To 100000000
    f *= 1.00000012F
Next
fsw.Stop()

' Time the same work with Decimal.
Dim dsw As New Stopwatch
dsw.Start()
Dim d As Decimal = 123456792.0D
For i = 1 To 100000000
    d *= 1.00000012D
Next
dsw.Stop()

Console.WriteLine("Float (ms): " & fsw.ElapsedMilliseconds)
Console.WriteLine("Decimal (ms): " & dsw.ElapsedMilliseconds)
Console.WriteLine("Float is " & dsw.ElapsedMilliseconds / fsw.ElapsedMilliseconds & " times faster")
Sorry, but if I should always use integers, why do VB.NET and SQL Server have other data types like float and decimal? You had better tell them to remove these data types from their products, since we can use integers.
OK, OK, I see that you want to close this case as soon as possible.
But I think a forum is a place for discussion.
It's just a curiosity: Microsoft has included decimal in the .NET languages and in SQL Server.
So why doesn't it use this data type in Excel?
Or why doesn't Excel use the integers you suggest (multiplying all the values by 100, 1000, 100000...), if, as you think, everything would be better that way?
I have the right to disagree with your opinion, and you can disagree with mine. Let's discuss; this is a forum.
The difficulty arises because you are comparing speed with accuracy - two things that by their very nature cannot be compared.
So you need to decide - do you want speed or accuracy?
If you really want more information regarding why floating point is used you will need to read up on computer hardware and architecture.
Basically, computers are not decimal counting machines but binary counting machines. To maintain a decent processing speed, numbers are stored as floating point. The consequence is that repeated arithmetic operations on large numbers (numbers with many digits on either side of the decimal point) can cause precision errors. Most people are not running repeated arithmetic calculations with large numbers requiring a high degree of precision, which is why Excel works perfectly well with floating-point numbers.
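You can see this drift directly (a sketch; the exact digits printed depend on the run-time formatting):

```vbnet
' 0.1 has no exact binary representation, so repeated addition drifts.
Dim sum As Double = 0.0R
For i = 1 To 10
    sum += 0.1R
Next

Console.WriteLine(sum = 1.0R)        ' False: the accumulated error is tiny but real
Console.WriteLine(sum.ToString("R")) ' Round-trip format exposes the stored value
```

The same loop with `Decimal` prints exactly 1, because decimal stores 0.1 exactly; that exactness is what you pay for with speed.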
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens