Hi,

I'm trying to compute a product that exceeds the Int32 range:
100 * 10000 * 10000
The DataColumn throws an exception even though the column's data type is double.

using System.Data;

namespace Sample
{
    static class Program
    {
        static void Main()
        {
            DataColumn dc = new DataColumn("dc", typeof(double));
            DataTable dt = new DataTable();
            dt.Columns.Add(dc);
            DataRow dr = dt.NewRow();
            dt.Rows.Add(dr);
            //dc.Expression = "100 * 10000 * 10000"; // --> System.OverflowException: Value was either too large or too small for an Int32.
            dc.Expression = "100.0 * 10000.0 * 10000.0"; //--> works, but doesn't help me
        }
    }
}


Is there any other way (since I don't know which terms the user will enter) to force the DataSet to use double internally? The same overflow also occurs when I use DataTable.Compute (see the sketch below).

In code I'd write
double d = 100d * 10000d * 10000d;

instead of
double d = 100 * 10000 * 10000;
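
For reference, the same behaviour can be reproduced without an expression column at all, via DataTable.Compute. The CONVERT call below is only a sketch of one way to push the evaluation into double (it assumes the documented CONVERT function of the expression syntax, and it should promote the whole multiplication), but it still means rewriting the user's expression, which is what I want to avoid:

using System.Data;

DataTable dt = new DataTable();
// Integer literals are evaluated as Int32, so this overflows just like the expression column:
// object o = dt.Compute("100 * 10000 * 10000", ""); // --> System.OverflowException
// Converting one operand should make the whole multiplication happen in double:
object o = dt.Compute("Convert(100, 'System.Double') * 10000 * 10000", "");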



Thanks, vmp
Posted
Comments
CPallini 29-Sep-14 8:53am    
Why do you say that specifying double constants "works but doesn't help me"?
vmp5 29-Sep-14 9:16am    
Because the expression is given by the user. He has a kind of formula designer and can create his own expressions. Of course I could parse the expression string and append ".0" to each term (after checking whether that is culture-dependent), but that doesn't seem like a proper solution to me, since I'd have to do the parsing myself (including all possible operators, column names, etc.), which in my opinion should be the DataSet's job.
So my actual question is: "Can you force the DataSet parser to interpret all numbers as doubles?" I just want to make sure there isn't anything obvious that I couldn't google, as I'm not a big DataSet expert.
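
Just to make concrete what I mean by that parsing workaround, here is a rough and deliberately naive sketch. ForceDoubleLiterals is a made-up helper; it does not handle quoted strings, date literals or culture issues, which is exactly why it doesn't feel like a proper solution:

using System.Text.RegularExpressions;

static string ForceDoubleLiterals(string expression)
{
    // Append ".0" to bare integer literals so the expression parser treats them as doubles.
    // Digits inside identifiers (e.g. col1) or that are already part of a decimal literal
    // (e.g. 100.5) are left untouched; anything inside quotes is NOT handled.
    return Regex.Replace(expression, @"(?<![\w.])(\d+)(?![\w.])", "$1.0");
}

// usage (userExpression being whatever the formula designer produced):
// dc.Expression = ForceDoubleLiterals(userExpression);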
Sinisa Hajnal 29-Sep-14 9:22am    
Then why not declare those source fields as decimal(18,0) or some such (NUMBER for Oracle and Access) and let the DataSet treat them as such? Then you would have a multiplication of three decimals.

Or limit the numbers to Int32.MaxValue and report invalid input to the user...
vmp5 29-Sep-14 9:35am    
I'm sorry, I'm afraid I don't completely understand. Which field are you suggesting I declare as decimal(18,0)? I'm already declaring the DataColumn as double. If I declare it as
DataColumn dc = new DataColumn("dc", typeof(decimal));
the effect is the same, and as far as I know you cannot specify a precision or scale for a .NET decimal.
Sinisa Hajnal 29-Sep-14 9:43am    
Sorry, I meant the source fields for that computation. From your description it looks like the user enters three numbers (or maybe two) and they are multiplied. If you declare those fields as decimal (sorry, again my assumption was that the data comes from a database, hence the reference), you will have a multiplication of decimals...
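
A minimal sketch of how I read that suggestion: the operands live in typed columns instead of being literals inside the expression string, so the multiplication is done in the column type (assumed behaviour of the expression engine's type coercion):

DataTable dt = new DataTable();
dt.Columns.Add("a", typeof(decimal));
dt.Columns.Add("b", typeof(decimal));
dt.Columns.Add("c", typeof(decimal));
dt.Columns.Add(new DataColumn("product", typeof(decimal)) { Expression = "a * b * c" });

DataRow dr = dt.NewRow();
dr["a"] = 100; dr["b"] = 10000; dr["c"] = 10000;
dt.Rows.Add(dr);
// dr["product"] should now hold 10000000000 as a decimal, with no Int32 overflow.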
