Hi All,

I was wondering if anyone could give any advice on using the C# SqlBulkCopy class. I have been googling this problem but I think my needs are quite specific.

Basically, I'm performing a database conversion from an old Paradox database to SQL Server 2008. I am using an OdbcDataReader and SqlBulkCopy to transfer the data. This works fine, except that the rate of insertion decreases dramatically as more rows are inserted. To begin with, the process loads thousands of rows a second, but by the time it reaches 200,000 rows, for example, it's inserting fewer than 100 rows per second.

There are no indexes at all on the table at this stage.

Is there something I can flush, or is there a better technique to achieve this?

Code block below.

Regards,
Martin.

C#
public void BulkLoad(String TableName, OdbcDataReader Reader) {
    System.Data.SqlClient.SqlBulkCopy bulkCopy = new System.Data.SqlClient.SqlBulkCopy(ConnectionString, SqlBulkCopyOptions.TableLock);
    bulkCopy.DestinationTableName = TableName;
    //bulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(OnSqlRowsCopied);
    //bulkCopy.NotifyAfter = 10;
    bulkCopy.BatchSize = 5000;    // commit to the server every 5,000 rows
    bulkCopy.BulkCopyTimeout = 0; // no timeout

    try {
        bulkCopy.WriteToServer(Reader); // stream rows straight from the ODBC reader
    }
    finally {
        bulkCopy.Close();
        Reader.Close();
    }
}
Comments
TheyCallMeMrJames 9-Jul-10 11:54am    
There are no triggers on the destination table, are there?
Martin P. Davies 9-Jul-10 11:59am    
No, nothing of that sort. The table is created just before calling this method, with just the column definitions; no triggers, primary keys or indexes are applied at this point.
Martin P. Davies 9-Jul-10 12:00pm    
Incidentally, I have also played about with the BatchSize property, but it doesn't really make any appreciable difference.
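
(Editorial aside: to see exactly where the slowdown sets in, the commented-out SqlRowsCopied hook in the code above can be re-enabled. Below is a minimal sketch of a handler that logs throughput per interval; the 10,000-row interval, the field names and the handler body are illustrative, not part of the original post.)

C#
// Illustrative only: re-enable NotifyAfter/SqlRowsCopied so the point at
// which throughput collapses becomes visible in the console output.
// In BulkLoad:
//     bulkCopy.NotifyAfter = 10000;
//     bulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(OnSqlRowsCopied);

// Fields on the same class as BulkLoad, used to compute the rate per interval.
private DateTime lastNotify = DateTime.Now;
private long lastCount = 0;

private void OnSqlRowsCopied(object sender, System.Data.SqlClient.SqlRowsCopiedEventArgs e) {
    double seconds = (DateTime.Now - lastNotify).TotalSeconds;
    double rate = (e.RowsCopied - lastCount) / Math.Max(seconds, 0.001);
    Console.WriteLine("{0} rows copied ({1:F0} rows/sec)", e.RowsCopied, rate);
    lastNotify = DateTime.Now;
    lastCount = e.RowsCopied;
}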
DaveAuld 9-Jul-10 15:08pm    
Are the transaction logs on the destination server becoming so large that the disk space on the machine is being fully consumed, resulting in paging operations on the hard disk and the whole destination server grinding to a halt? (I had this on a SQL box that was handling an alarm event viewer with thousands of records per hour.)
Martin P. Davies 12-Jul-10 4:54am    
Hi daveauld, thanks for the suggestion. I would be surprised if this were the case; I'm only testing one table at a time at the moment, with only 500,000 rows in it. Plus, if you stop and restart, it speeds up again. Thanks for the suggestion though; I will have a look at it.
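
(Editorial aside: if log growth is suspected, one quick way to check it from code is to query the destination server with DBCC SQLPERF(LOGSPACE). A sketch, assuming the same ConnectionString field as in the question and a "using System.Data.SqlClient;" directive:)

C#
// Illustrative check: prints the log size and percentage used for each
// database on the destination server via DBCC SQLPERF(LOGSPACE).
using (SqlConnection conn = new SqlConnection(ConnectionString))
using (SqlCommand cmd = new SqlCommand("DBCC SQLPERF(LOGSPACE)", conn)) {
    conn.Open();
    using (SqlDataReader logReader = cmd.ExecuteReader()) {
        while (logReader.Read()) {
            Console.WriteLine("{0}: {1} MB, {2}% used",
                logReader["Database Name"],
                logReader["Log Size (MB)"],
                logReader["Log Space Used (%)"]);
        }
    }
}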

1 solution

Not an answer really, but I think it might be something to do with the speed of the reader at the Paradox end. The only way I've managed to get around this is to split the Paradox tables up programmatically beforehand into 100,000-row tables.

This at least removes the exponential slowdown. A 500,000-row table now takes about 15 minutes rather than an hour, and a million-row table takes 30 minutes as opposed to nearly 4 hours.

For these smaller splits, there is a pause whilst the reader populates, and then the rows transfer very quickly. With the larger tables, the transfer begins straight away, so I suspect the bulk load is 'catching up' with the slow Paradox reader, causing the slowdown.

Not really an answer as such, but the database transfers are at least a lot quicker now.
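
(Editorial aside: the same chunking idea can be approximated on the client side, without physically splitting the Paradox tables, by buffering a fixed number of rows into a DataTable and bulk-copying each chunk separately. The sketch below is one way to do that, not the approach actually used in the answer; ConnectionString is the same field as in the question, and the chunk size would mirror the 100,000-row splits above.)

C#
// Sketch: copy from the ODBC reader in fixed-size chunks rather than one
// continuous stream. Assumes the destination table already exists.
public void BulkLoadChunked(String tableName, OdbcDataReader reader, int chunkSize) {
    DataTable buffer = new DataTable();
    // Build the buffer schema from the reader's columns.
    for (int i = 0; i < reader.FieldCount; i++)
        buffer.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

    using (SqlBulkCopy bulkCopy = new SqlBulkCopy(ConnectionString, SqlBulkCopyOptions.TableLock)) {
        bulkCopy.DestinationTableName = tableName;
        bulkCopy.BulkCopyTimeout = 0;

        object[] values = new object[reader.FieldCount];
        while (reader.Read()) {
            reader.GetValues(values);
            buffer.Rows.Add(values);
            if (buffer.Rows.Count >= chunkSize) {
                bulkCopy.WriteToServer(buffer); // send one chunk
                buffer.Clear();                 // free the buffered rows
            }
        }
        if (buffer.Rows.Count > 0)
            bulkCopy.WriteToServer(buffer);     // final partial chunk
    }
}

Whether buffering like this beats streaming the reader directly depends on where the bottleneck really is, so treat it purely as a starting point for experimentation.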
 