
Performance Comparisons: LINQ to SQL, ADO, C#

25 May 2008
To compare existing approaches of accessing data (arrays, tables, etc.) with the new ones released with C# 3.0 and LINQ.


Most of you, like me, have been watching the release of VS 2008 since its beta versions. From the perspective of this article, Microsoft added generics, query operators, and LINQ support. I always wondered why LINQ was added when the world has been comfortable with ADO programming, with its support extended to objects and XML too; similarly with the use of generics, aggregations, etc. in C# 3.0. From this came the purpose of this article: to compare existing approaches with the new ones released with C# 3.0 and LINQ. As the scope of such a comparison is quite large, I will restrict our discussion to:

· Performance of reading data from SQL Server 2005 using ADO and LINQ with a stored procedure

· Performance of reading data from SQL Server 2005 using ADO and LINQ by directly executing SQL statements

· Performance of reading and manipulating XML files using existing C# approaches vs. the LINQ approach

· Performance of accessing lists of objects (arrays in this article) using traditional programming, LINQ, and C# 3.0

· Performance of filling a dataset using ADO and LINQ, then performing a filter operation and summing values after filtering

Before we jump into this article, I want to point out that there are many ways to accomplish a task programmatically. An effort has been made to write the best code possible; some extra lines were introduced so as to be fair when writing equivalent functions for LINQ and ADO. As the code is shared freely, you can improve and build on it. The basic unit of measurement is the tick count (System.Diagnostics.Stopwatch). Each piece of performance code has been run 500 times so as to get representative performance values. Wherever possible I have added code to dispose of or garbage-collect objects. All the code explained in this article is available for download and is commented wherever required. Finally, all the performance values are added to an Excel file, Graphs.xls, and shown visually in Excel graphs.

The logic for running each function 500 times is given below:

  1. Create a dataset
  2. Start the Stopwatch
  3. Run the test function (under the ActualFunction #region)
  4. Stop the watch
  5. Add the ticks to a data row
  6. Output the dataset to an XML file and finally to Graphs.xls
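The steps above can be sketched roughly as follows. This is an illustration, not the article's actual code: the names TimingHarness, RunPasses, TestFunction, and PerformanceData.xml are placeholders, and the real test bodies live under the ActualFunction #region in the download.

```csharp
using System;
using System.Data;
using System.Diagnostics;

class TimingHarness
{
    // Runs the function under test 500 times, recording the tick count
    // of each pass into a DataSet row.
    public static DataSet RunPasses()
    {
        var results = new DataSet("Performance");   // create a dataset
        var table = results.Tables.Add("Passes");
        table.Columns.Add("Ticks", typeof(long));

        var watch = new Stopwatch();
        for (int pass = 0; pass < 500; pass++)
        {
            watch.Reset();
            watch.Start();                          // start the stopwatch
            TestFunction();                         // run the test function
            watch.Stop();                           // stop the watch
            table.Rows.Add(watch.ElapsedTicks);     // add the ticks to a data row
        }
        return results;
    }

    static void TestFunction()
    {
        // Placeholder for the code under the ActualFunction #region.
    }

    static void Main()
    {
        // Output the dataset to XML, for manual import into Graphs.xls.
        RunPasses().WriteXml("PerformanceData.xml");
    }
}
```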

The configuration used for getting the current performance data is given below

OS Name: Microsoft(R) Windows(R) Server 2003, Enterprise Edition, Version 5.2.3790 Service Pack 1 Build 3790

Processor: x86 Family, Intel, ~2992 MHz

Total Physical Memory: 2,038.04 MB

Available Physical Memory: 872.52 MB

Total Virtual Memory: 2.85 GB

Page File Space: 1.00 GB


What you need to run the samples

  • VS 2008
  • SQL Server 2005 with the sample database (AdventureWorks) installed
  • Run the InsertSP.sql script on the AdventureWorks database; this creates a simple stored procedure for inserting values into the Sales.Customer table
  • Running the two sample console applications produces an XML file for each comparison containing all 500 passes; data from these XML files is then manually imported into the Graphs.xls file

Before you begin read the following

If you are not familiar with LINQ or the new features of C# 3.0, please read the following URLs

Using the code

1. Inserting a row: ADO vs. LINQ using a stored procedure.

The function used for ADO is ADOInserting() in RetrieveUsingADO.sln; its LINQ counterpart is LinQInserting() in RetriveUsingLinQ.sln.
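A minimal sketch of the two styles being compared is below. The stored procedure name, its parameters, and the generated AdventureWorksDataContext/InsertCustomer members are assumptions, since the real signatures come from InsertSP.sql and the .dbml designer file:

```csharp
using System.Data;
using System.Data.SqlClient;

// ADO side, in the spirit of ADOInserting(): call the stored procedure
// through a SqlCommand.
static void AdoInsertViaProcedure(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("Sales.uspInsertCustomer", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@TerritoryID", 1);
        cmd.Parameters.AddWithValue("@CustomerType", "I");
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}

// LINQ to SQL side, in the spirit of LinQInserting(): the DataContext
// generated by the .dbml designer exposes the procedure as a method.
static void LinqInsertViaProcedure()
{
    using (var db = new AdventureWorksDataContext())
    {
        db.InsertCustomer(1, "I");
    }
}
```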

2. Inserting a row: ADO vs. LINQ without using a stored procedure.

The function used for ADO is ADOInsertingDirect() in RetrieveUsingADO.sln; the corresponding LINQ function is LinQInsertingDirect() in RetriveUsingLinQ.sln.
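A sketch of the contrast, assuming the LINQ side uses the designer-generated Customer entity with InsertOnSubmit/SubmitChanges (the column list and values are placeholders, not the article's actual statement):

```csharp
using System.Data.SqlClient;

// ADO side, in the spirit of ADOInsertingDirect(): send the INSERT text directly.
static void AdoInsertDirect(string connectionString)
{
    const string sql =
        "INSERT INTO Sales.Customer (TerritoryID, CustomerType, ModifiedDate) " +
        "VALUES (@t, @c, GETDATE())";
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@t", 1);
        cmd.Parameters.AddWithValue("@c", "I");
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}

// LINQ to SQL side, in the spirit of LinQInsertingDirect(): attach a new
// entity and let SubmitChanges() generate the INSERT.
static void LinqInsertDirect()
{
    using (var db = new AdventureWorksDataContext())
    {
        var customer = new Customer { TerritoryID = 1, CustomerType = 'I' };
        db.Customers.InsertOnSubmit(customer);
        db.SubmitChanges();
    }
}
```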

3. Reading from a table: ADO vs. LINQ.

The function used for ADO is ADOReading() in RetrieveUsingADO.sln; the corresponding LINQ function is LinQReading() in RetriveUsingLinQ.sln. Just to make the functions do a little more work, I added code to sum all the values of the first column after reading.
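The two reads can be sketched like this; the SELECT statement, column choice, and the generated Customers table on the DataContext are assumptions for illustration:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;
using System.Linq;

// ADO side, in the spirit of ADOReading(): fill a DataTable, then walk it
// and sum the first column.
static long AdoReadAndSum(string connectionString)
{
    var table = new DataTable();
    using (var adapter = new SqlDataAdapter(
        "SELECT CustomerID, TerritoryID FROM Sales.Customer",
        connectionString))
    {
        adapter.Fill(table);
    }
    long sum = 0;
    foreach (DataRow row in table.Rows)
        sum += Convert.ToInt64(row[0]);
    return sum;
}

// LINQ to SQL side, in the spirit of LinQReading(): enumerate the mapped
// table and sum the same column.
static long LinqReadAndSum()
{
    using (var db = new AdventureWorksDataContext())
    {
        return db.Customers.Sum(c => (long)c.CustomerID);
    }
}
```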

4. Reading XML files: ADO vs. LINQ.

The function used for the traditional approach is xmlReading() in RetrieveUsingADO.sln; the corresponding LINQ function is LinQXmlRead() in RetriveUsingLinQ.sln. These functions first read an XML file, then apply a filter to it (pass values > 250 in this case), and then sum all the values of the first column.
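The filter-then-sum operation can be sketched as below. The file path and the element/column name "Value" are assumptions; the actual XML files ship with the download.

```csharp
using System;
using System.Data;
using System.Linq;
using System.Xml.Linq;

// Traditional side, in the spirit of xmlReading(): load the XML into a
// DataSet and filter with Select().
static long DataSetFilterAndSum(string path)
{
    var ds = new DataSet();
    ds.ReadXml(path);
    long sum = 0;
    foreach (DataRow row in ds.Tables[0].Select("Value > 250"))
        sum += Convert.ToInt64(row["Value"]);
    return sum;
}

// LINQ to XML side, in the spirit of LinQXmlRead(): the same
// filter-and-sum expressed as a query.
static long LinqFilterAndSum(string path)
{
    return XDocument.Load(path)
                    .Descendants("Value")
                    .Select(e => (long)e)
                    .Where(v => v > 250)
                    .Sum();
}
```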

5. Accessing objects (arrays in this case): traditional C# vs. C# 3.0 vs. LINQ

The function used for the LINQ part is LinQObjects1(), for the traditional C# part csharpObjects1(), and for the C# 3.0 part LinQObjects2(), all in RetriveUsingLinQ.sln.

All the functions first create an array of integers, then build a second array containing the squares of only the even numbers. The last step is to sum all the values in the resulting subset.
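The three styles can be sketched side by side on a small array. This is a self-contained illustration, not the article's exact code; the Aggregate version is given an explicit seed of 0 so that the first element is not silently used as the initial accumulator.

```csharp
using System;
using System.Linq;

class ArrayComparison
{
    // Traditional loop, in the spirit of csharpObjects1(): filter the even
    // numbers, square them, and sum.
    public static double LoopSum(double[] nums)
    {
        double sum = 0;
        foreach (double n in nums)
            if (n % 2 == 0)
                sum += n * n;
        return sum;
    }

    // LINQ query, in the spirit of LinQObjects1(): the same
    // filter-square-sum expressed as a query over the array.
    public static double QuerySum(double[] nums)
    {
        var squares = from n in nums where n % 2 == 0 select n * n;
        return squares.Sum();
    }

    // C# 3.0 Aggregate, in the spirit of LinQObjects2(), with an explicit
    // seed of 0.0.
    public static double AggregateSum(double[] nums)
    {
        return nums.Aggregate(0.0,
            (curSum, n) => n % 2 == 0 ? curSum + n * n : curSum);
    }

    static void Main()
    {
        double[] nums = { 1, 2, 3, 4, 5, 6 };
        // All three compute 2*2 + 4*4 + 6*6 = 56.
        Console.WriteLine(LoopSum(nums));      // 56
        Console.WriteLine(QuerySum(nums));     // 56
        Console.WriteLine(AggregateSum(nums)); // 56
    }
}
```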

6. Filling a dataset using LINQ and ADO, then performing filter operations

The function used for the LINQ part is LinQQueryDataset(), and for the ADO part ADODataSetQuery(). Please note that such database operations are resource intensive: the minimum counter value is 15 × 10^6 ticks.
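Once the DataTable is filled, the filter step in the two styles looks roughly like this (the column name and filter value are placeholders; the LINQ to DataSet version needs a reference to System.Data.DataSetExtensions):

```csharp
using System.Data;
using System.Linq;

// ADO side, in the spirit of ADODataSetQuery(): filter the filled
// DataTable with its built-in Select() method.
static DataRow[] AdoFilter(DataTable table)
{
    return table.Select("TerritoryID = 1");
}

// LINQ to DataSet side, in the spirit of LinQQueryDataset():
// AsEnumerable() exposes the rows to a LINQ query.
static DataRow[] LinqFilter(DataTable table)
{
    return table.AsEnumerable()
                .Where(r => r.Field<int>("TerritoryID") == 1)
                .ToArray();
}
```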

Points of Interest

I have used the median value rather than the mean to reduce the effect of outliers in the graphs: on a Windows operating system there are always other processes running, and spikes in the graph do not necessarily mean a fault in the code.

· Inserting a row: ADO vs. LINQ using a stored procedure.

The median value of the ADO insert is far higher than that of LINQ. LINQ wins in this situation.


· Inserting a row: ADO vs. LINQ without using a stored procedure.

      • The median value of ADO is higher than that of LINQ, which suggests that LINQ is the winner in this situation.


· Reading from a table (ADO vs. LINQ).

      • There is a vast difference between the median values of LINQ and ADO as far as reading from a table is concerned. ADO wins here. Is this because of its maturity in the market and tight integration with SQL Server, or because LINQ (in my opinion) adds overhead by creating the IEnumerable<T> wrapper and an object for each item drawn in the LINQ to SQL .dbml designer? For further improvement you could try loosely typed datasets.

· Reading XML files: ADO vs. LINQ.

      • Comparing the mean values for C# and LINQ, we have a close performance competition: the difference is only 12,304 Stopwatch ticks, which on modern hardware amounts to mere microseconds (Stopwatch.Frequency reports how many ticks make up one second). LINQ marginally wins here.


· Accessing objects (arrays in this case): traditional C# vs. C# 3.0 vs. LINQ

      • Let's start with LinQObjects2(): here the whole requirement of adding the squares of only the even numbers is narrowed down to the single statement below. The most interesting fact is that this function takes almost no measurable time to run, so the C# 3.0 syntax wins here. (Note the explicit seed of 0.0: without a seed, Aggregate would use the first element, unsquared, as the initial accumulator.)
        • double sum = nums.Aggregate(0.0, (curSum, curNum) => curNum % 2 == 0 ? curSum + curNum * curNum : curSum);
      • Then comes LinQObjects1(), which runs a LINQ-style query over the array, as given below. Note that instead of newFunction(temp) we could also have written temp % 2 == 0; I just wanted to demonstrate using a function in the where clause of a LINQ query.
        • var getSquaresLessthen500 = from temp in nums where temp == newFunction(temp) select temp * temp;
      • Last in performance under this category is the traditional C#-style syntax, as given in csharpObjects1().


· Filling a dataset using LINQ and ADO, then performing filter operations.

There is a big difference between the mean values. I think that in the LINQ implementation, the line where we create a DataRow object and add it to the table (shown below) is where performance suffers. The ADO implementation wins here.

table.LoadDataRow(new Object[] { tempRec.CustomerID, tempRec.TerritoryID, tempRec.AccountNumber, tempRec.CustomerType, tempRec.rowguid, tempRec.ModifiedDate }, true);



So my conclusion is that LINQ is not the overall winner (as expected!). While insert operations are better in LINQ, read operations are great in ADO. XML operations are by and large the same (not much of a difference), and object access basically depends on the type of use, though Aggregate was a good example. LINQ to DataSets is pretty costly; I suggest using the ADO version, and using LINQ only when we already have a dataset available and want to query it.

For further improvement we should try bulk insert operations and also reads of different kinds (integer reads, string reads, block data reads, and substring reads). Again, it is hard to draw firm conclusions based on the limited number of scenarios I have covered. There is scope for improvement in each scenario, but since I have timed all the operations, this should give us better insight when we design or architect next time.

Mean and Median Values

[Graphs omitted: per-scenario mean and median tick values, including ADO Insert, ADO DataSet, and LinQ DataSet, are available in the Graphs.xls file in the download.]

Download source code and sample application here

Download - 473.12 KB


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


