Paging a large database resultset in Web applications is a well-known problem. In short, you don't want all the results from your query displayed on a single Web page, so some sort of paged display is more appropriate. While this was not an easy task in classic ASP, the
DataGrid control in ASP.NET reduces it to a few lines of code. So paging is easy in ASP.NET, but the default behavior of the
DataGrid is that all records resulting from your query are fetched from SQL Server into the ASP.NET application. If your query returns a million records, this causes serious performance issues (if you need convincing, try executing such a query in your web application and watch the memory consumption of aspnet_wp.exe in Task Manager). That's why a custom paging solution is required, where the desired behavior is to fetch only the rows belonging to the current page.
There are numerous articles and posts concerning this problem, and several proposed solutions. My goal here is not to present you with an amazing solves-it-all procedure, but to optimize the existing methods and provide you with a testing application so you can evaluate them on your own. Here is a good starting point, an article which describes many different approaches and provides some performance test results:
How do I page through a recordset?
I was not satisfied with most of them. First, half of the methods use old ADO and are clearly written for classic ASP. The rest are SQL Server stored procedures. Some of them yield poor response times, as you can see from the author's performance results at the bottom of that page, but several caught my attention.
The three methods I decided to look into more closely are the ones the author calls TempTable, DynamicSQL and Rowcount. I'll refer to the second method as the
Asc-Desc method in the rest of this text; I don't think DynamicSQL was a good name, because you can apply dynamic SQL logic to the other methods too. The general problem with all these stored procedures is that you have to assess which columns you'll allow sorting on, and that probably won't be just the PK column(s). This leads to a new set of problems: for each query you want to display via paging, you need as many different paging queries as there are sorting columns. This means you will either write a different stored procedure for each sorting column (regardless of the paging method applied), or generalize everything into a single stored procedure with the help of dynamic SQL. Dynamic SQL has a slight performance impact, but it increases maintainability if you need to display many different queries this way. Thus, I'll try to generalize all of the stored procedures in this text with dynamic SQL, but in some cases only a certain level of generalization is possible, so you'll still have to write separate stored procedures for some complex queries.
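A minimal sketch of such a dynamic-SQL generalization might look like the following (the procedure and parameter names here are my own illustration, not code from any of the referenced articles):

```sql
-- Illustrative sketch only: a paging procedure generalized with dynamic SQL.
-- Procedure and parameter names are assumptions, not the real ones.
CREATE PROCEDURE Paging_Sketch
    @Tables     varchar(1000),  -- table source, including any joins
    @Fields     varchar(1000),  -- select list for the final query
    @SortColumn varchar(100),   -- column the caller wants to sort by
    @PageSize   int
AS
DECLARE @SQL nvarchar(4000)
-- Build the statement from the pieces; TOP cannot take a variable in
-- SQL Server 2000, so the page size is concatenated into the string.
SET @SQL = N'SELECT TOP ' + CAST(@PageSize AS varchar(10)) + N' '
         + @Fields + N' FROM ' + @Tables
         + N' ORDER BY ' + @SortColumn
EXEC sp_executesql @SQL
```

The real procedures linked throughout this text follow the same idea, with additional parameters for the filter, grouping, and starting row.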
The second problem with allowing sorting columns besides the PK column(s) is that if those columns are not indexed in some way, none of these methods will help. In all of them, the paged source must be sorted first, and the cost of ordering by a non-indexed column is immense for large tables. The response times are so high that all the procedures are practically unusable in this case (they vary from a couple of seconds to a couple of minutes, depending on the size of the tables and the position of the starting record). Indexing additional columns brings its own performance issues and may be undesirable; for example, it might significantly slow you down in a situation where you have a lot of daily imports.
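If you do decide to index an additional sorting column, it is a one-liner (Table and SortColumn are the same placeholders used throughout this text); the trade-off is the extra maintenance cost the index puts on every insert:

```sql
-- Placeholder names: index the sorting column so the paged source
-- can be ordered without a full sort of the table.
CREATE INDEX IX_Table_SortColumn ON Table (SortColumn)
```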
The first one I would comment on is the
TempTable method. This is actually a widely proposed solution, and I have encountered it several times. Here is another article that describes it, along with an explanation and a sample of how to use custom paging with the DataGrid:
ASP.NET DataGrid Paging Part 2 - Custom Paging
The methods in both articles could be optimized by copying only the primary key data to the temp table and then joining it back to the main query. The essence of this method would then be the following:
CREATE TABLE #Temp (
    ID int IDENTITY PRIMARY KEY,
    PK /* PK type */ NOT NULL
)

INSERT INTO #Temp (PK) SELECT PK FROM Table ORDER BY SortColumn

SELECT ... FROM Table JOIN #Temp temp ON Table.PK = temp.PK
WHERE temp.ID > @StartRow AND temp.ID < @EndRow
ORDER BY temp.ID
The method can be optimized further by copying rows to the temp table only until the end paging row is reached (
SELECT TOP EndRow...
), but the point is that in the worst case, for a table with 1 million records, you end up with 1 million records in the temp table as well. Considering all this, and having looked at the results in the article above, I decided to discard this method from my tests.
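Since TOP cannot take a variable in SQL Server 2000, the SELECT TOP EndRow optimization can be sketched with SET ROWCOUNT instead (assuming @EndRow = @StartRow + @PageSize):

```sql
-- Cap the copy at @EndRow rows instead of copying the whole table
SET ROWCOUNT @EndRow
INSERT INTO #Temp (PK) SELECT PK FROM Table ORDER BY SortColumn
SET ROWCOUNT 0
```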
Asc-Desc
This method uses the default ordering in a subquery and then applies the reverse ordering on top of it. The principle goes like this:
DECLARE @temp TABLE (
    PK /* PK type */ NOT NULL PRIMARY KEY
)

INSERT INTO @temp
SELECT TOP @PageSize PK FROM (
    SELECT TOP (@StartRow + @PageSize) PK, SortColumn FROM Table
    ORDER BY SortColumn ASC ) AS t
ORDER BY SortColumn DESC

SELECT ... FROM Table JOIN @temp temp ON Table.PK = temp.PK
ORDER BY SortColumn ASC
Full Code – Paging_Asc_Desc
RowCount
The base logic of this method relies on the SQL Server
SET ROWCOUNT statement to both skip the unwanted rows and fetch the desired ones:
DECLARE @Sort /* type of the sorting column */

SET ROWCOUNT @StartRow
SELECT @Sort = SortColumn FROM Table ORDER BY SortColumn

SET ROWCOUNT @PageSize
SELECT ... FROM Table WHERE SortColumn >= @Sort ORDER BY SortColumn
SET ROWCOUNT 0
Full Code – Paging_RowCount
There are two more methods I've taken into consideration, and they come from different sources. The first one is the well-known triple query, or the
SubQuery method. The most thorough approach is the one I found in the following article:
Server-Side Paging with SQL Server
Although you'll need to be subscribed, a .zip file with the
SubQuery stored procedure variations is available. The Listing_04.SELECT_WITH_PAGINGStoredProcedure.txt file contains the complete generalized dynamic SQL. I used similar generalization logic in all the other stored procedures in this text. Here is the principle, followed by the link to the whole procedure (I shortened the original code a bit, because a record-count portion was unnecessary for my testing purposes):
SELECT ... FROM Table WHERE PK IN
(SELECT TOP @PageSize PK FROM Table WHERE PK NOT IN
(SELECT TOP @StartRow PK FROM Table ORDER BY SortColumn)
ORDER BY SortColumn)
ORDER BY SortColumn
Full Code – Paging_SubQuery
I found the last method while browsing the Google groups; you can find the original thread here. This method uses a server-side dynamic cursor. A lot of people tend to avoid cursors, since they usually perform poorly because of their non-relational, sequential nature. The thing is that paging IS a sequential task: whatever method you use, you have to somehow reach the starting row. In all the previous methods this is done by selecting all rows preceding the starting row plus the desired rows, and then discarding the preceding ones. A dynamic cursor, however, has the
FETCH RELATIVE option, which does the "magic" jump. The base logic goes like this:
DECLARE @tblPK TABLE (
    PK /* PK type */ NOT NULL PRIMARY KEY
)

DECLARE @PK /* PK type */

DECLARE PagingCursor CURSOR DYNAMIC READ_ONLY FOR
SELECT PK FROM Table ORDER BY SortColumn

OPEN PagingCursor
FETCH RELATIVE @StartRow FROM PagingCursor INTO @PK

WHILE @PageSize > 0 AND @@FETCH_STATUS = 0
BEGIN
    INSERT @tblPK (PK) VALUES (@PK)
    FETCH NEXT FROM PagingCursor INTO @PK
    SET @PageSize = @PageSize - 1
END

CLOSE PagingCursor
DEALLOCATE PagingCursor

SELECT ... FROM Table JOIN @tblPK temp ON Table.PK = temp.PK
ORDER BY SortColumn
Full Code – Paging_Cursor
Generalization of Complex Queries
As pointed out before, all the procedures are generalized with dynamic SQL, so in theory they can work with any kind of complex query. Here is a complex query sample that runs against the Northwind sample database:
SELECT Customers.ContactName AS Customer,
       Customers.Address + ', ' + Customers.City + ', ' +
       Customers.Country AS Address,
       SUM([Order Details].UnitPrice*[Order Details].Quantity) AS
       [Total money spent]
FROM Customers
INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID
INNER JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
WHERE Customers.Country <> 'USA' AND Customers.Country <> 'Mexico'
GROUP BY Customers.ContactName, Customers.Address, Customers.City,
         Customers.Country
HAVING (SUM([Order Details].UnitPrice*[Order Details].Quantity))>1000
ORDER BY Customer DESC, Address DESC
The paging stored procedure call that returns the second page takes the query's building blocks (table source, sort expression, select list, filter, and group-by/having clause) as string parameters, like this:
'Customers
INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID
INNER JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID',
'Customers.ContactName DESC, Customers.Address DESC',
'Customers.ContactName AS Customer,
Customers.Address + '', '' + Customers.City + '', '' + Customers.Country
AS Address,
SUM([Order Details].UnitPrice*[Order Details].Quantity) AS [Total money spent]',
'Customers.Country <> ''USA'' AND Customers.Country <> ''Mexico''',
'Customers.CustomerID, Customers.ContactName, Customers.Address,
Customers.City, Customers.Country
HAVING (SUM([Order Details].UnitPrice*[Order Details].Quantity))>1000'
Note that the original query uses aliases in the
ORDER BY clause. You can't do that in the paging procedures, because the most time-consuming task in all of them is skipping the rows that precede the starting row. This is done in various ways, but the principle is not to fetch all the required fields at first: only the PK column(s) are fetched (or, in the case of the
RowCount method, the sorting column), which speeds up this task. The full set of fields is fetched only for the rows that belong to the requested page. Therefore, field aliases don't exist until the final query, while the sorting columns must be used earlier, in the row-skipping queries.
The RowCount procedure has another problem: it is generalized to work with only one column in the
ORDER BY clause. The Asc-Desc and
Cursor methods have a related limitation: they can work with several ordering columns, but require that the PK consist of a single column. I guess this could be solved with more dynamic SQL, but in my opinion it is not worth the fuss. Although such situations are entirely possible, they are not that frequent; even when they occur, you can always write a separate paging procedure following the principles above.
I used these four methods in my tests; if you have a better one, I'd be glad to hear about it. In any case, I wanted to compare these methods and measure their performance. My first thought was to write an ASP.NET test application with a paged DataGrid and measure the page response, but that wouldn't reflect the true response time of the stored procedures, so a console application seemed more appropriate. I also included a web application, not for performance testing, but as an example of how DataGrid custom paging works with these stored procedures. Both are incorporated in the PagingTest solution.
I used an auto-generated large table for my tests and inserted around 500,000 records into it. If you don't have a large table to experiment on, you can download the script for the table design and a stored procedure for data generation here. I didn't want an identity column for my PK, so I used a
uniqueidentifier instead. If you use this script, you may consider adding an identity column after you generate the table; it will number the rows sorted by PK, giving you an indication that the correct page is fetched when you call a paging procedure with PK sorting.
The idea behind the performance testing was to call a specific stored procedure many times in a loop and measure the average response time. In order to remove caching effects and model the real situation more accurately, calling a stored procedure repeatedly with the same page fetched each time seemed inappropriate; a random sequence of calls to the same stored procedure with a set of different page numbers was required. Of course, a set of different page numbers assumes a fixed number of pages (10-20), where each page is fetched many times, but in random order.
It's not hard to notice that response times depend on the distance of the fetched page from the beginning of the resultset: the further away the starting record is, the more records need to be skipped. This is why I didn't include the first 20 pages in my random sequence; instead, I used the set of 2N pages. The loop count was set to (number of different pages) * 1000, so every page was fetched around 1000 times (more or less, because of the random distribution).
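The random-sequence idea can be sketched directly in T-SQL (the procedure name, page set, and parameters are placeholders; the actual measurements were done from the console application):

```sql
-- Sketch: call one paging procedure with a random page from a fixed set
DECLARE @i int, @Page int, @PageSize int
SELECT @i = 0, @PageSize = 10
WHILE @i < 16000   -- (number of different pages) * 1000
BEGIN
    SET @Page = 1 + ABS(CHECKSUM(NEWID())) % 16  -- random page out of 16
    EXEC Paging_RowCount ...  -- with @StartRow = (@Page - 1) * @PageSize
    SET @i = @i + 1
END
```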
Here are the results I got: Paging_Results (MS Excel file)
The methods performed in the following order, starting from the best one: RowCount, Cursor, and then Asc-Desc and
Subquery. The behavior at the lower page numbers was especially interesting: in many real situations you'll rarely browse beyond the first five pages, so the
Subquery method might satisfy your needs in those cases. It all depends on the size of your resultset and on how frequently you predict the distant pages will be fetched. You might also use a combination of methods. As for myself, I decided to use the
RowCount method wherever possible; it behaves quite nicely, even for the first page. The "wherever possible" part covers the cases where this method is hard to generalize; there I would use the
Cursor method (possibly combined with the
SubQuery for the first couple of pages).
The main reason I wrote this article was to get feedback from the vast programming community. In a couple of weeks I'll be starting work on a new project, and the preliminary analysis showed that a couple of very large tables are going to be involved. These tables will be used in many complex joined queries, and their results will be displayed in an ASP.NET application (with sorting and paging enabled). That's why I invested some time in researching and pursuing the best paging method. It wasn't just the performance that interested me, but also the usability and maintainability.
The invested time has already started to pay off. You can find a post by C. v. Berkel below (many thanks) in which he found a flaw in the
RowCount method: it won't work correctly if the sorting column is not unique. The
RowCount method performed the best in my tests, but now I am seriously considering not using it at all, since in most cases sorting columns (besides the PK) won't be unique. This leaves me with the
Cursor method as the fastest one applicable to most situations. It can be combined with the
SubQuery method for the first couple of pages, and possibly with the
RowCount method for unique sorting columns.
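A tiny repro of that flaw, using a hypothetical table T(PK, S) with rows (1,'A'), (2,'B'), (3,'B'), (4,'C') and a page size of 2:

```sql
-- Hypothetical repro: S is not unique, page size is 2
DECLARE @Sort char(1)
SET ROWCOUNT 2
SELECT @Sort = S FROM T ORDER BY S   -- stops at row 2, so @Sort = 'B'
SET ROWCOUNT 2
SELECT PK FROM T WHERE S >= @Sort ORDER BY S
-- Returns the two rows with S = 'B' (PKs 2 and 3, in either order):
-- row 2 appears on both page one and page two, while the expected
-- page two is rows 3 and 4.
SET ROWCOUNT 0
```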
Another thing worth mentioning is that there's a tiny flaw in the
Asc-Desc method as well: for the last page it always returns
PageSize records, rather than the actual number of remaining records (which may be lower than
PageSize). The correct number could be calculated, but since I don't intend to use this procedure (because of how it performed), I didn't want to improve it any further.