
Very fast test data generation using exponential INSERT

28 Aug 2009
Instead of using incremental INSERTs to generate test data, this method effectively copies the existing data multiple times, doubling the row count on each pass.

Introduction

This article (my first) describes an algorithm that generates a large amount of test data very quickly using a SQL query. The test data can be static or incremental, such as the “Item Name” and “Item ID” columns, respectively, shown below:

[Image: tableSample.png — a sample table with an incremental Item ID column and a static Item Name column]

Background

One of my tasks on a project involved generating a test table with 103,680,000 records. The conventional method of data generation would have taken about a month, so a faster method of data insertion was required. The new method took only 5 hours.

Using the code

Conventional method – Sequential INSERT

The conventional way of generating a list of numbers from 1 to 100,000 is a loop with an INSERT statement, as follows:

CREATE TABLE #tempTable([Item ID] [bigint], [Item Name] nvarchar(30))
DECLARE @counter int
SET @counter = 1
-- One INSERT statement per row: 100,000 statements in total
WHILE (@counter <= 100000)
BEGIN
        INSERT INTO #tempTable VALUES (@counter, 'Hammer')
        SET @counter = @counter + 1
END
SELECT * FROM #tempTable
DROP TABLE #tempTable
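As a point of comparison, the row-by-row approach can be sketched with Python's standard sqlite3 module. This is only an illustration of the pattern, not the article's SQL Server setup; the in-memory database and the table and column names are stand-ins:

```python
import sqlite3

# In-memory stand-in for the article's temporary table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_table (item_id INTEGER, item_name TEXT)")

# One INSERT statement per row: the number of commands grows linearly
# with the number of rows, which is what makes this method slow at scale.
for counter in range(1, 100_001):
    conn.execute("INSERT INTO temp_table VALUES (?, 'Hammer')", (counter,))

count = conn.execute("SELECT COUNT(*) FROM temp_table").fetchone()[0]
print(count)  # 100000
```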

Let's call this method of data generation “Sequential INSERT”.

New method – Exponential INSERT

The new method effectively makes a copy of the existing data and appends it as new data, and does so repeatedly until the desired amount of data is generated.

Here is the code for the exponential INSERT:

CREATE TABLE #tempTable([Item ID] [bigint], [Item Name] nvarchar(30))
INSERT INTO #tempTable VALUES (1, 'Hammer')
WHILE ((SELECT COUNT(*) FROM #tempTable) < 100000)
BEGIN
    -- Copy every existing row, shifting [Item ID] past the current
    -- row count, so each pass doubles the table: 1, 2, 4, 8, ... rows
    INSERT INTO #tempTable ([Item ID], [Item Name])
    SELECT [Item ID] + (SELECT COUNT(*) FROM #tempTable), 'Hammer'
    FROM #tempTable
END
SELECT * FROM #tempTable
DROP TABLE #tempTable
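The same doubling loop can be sketched with Python's standard sqlite3 module. Again, this is an illustration of the technique under stand-in names, not the article's T-SQL; the row count is read in Python and passed as a parameter rather than via a subquery:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temp_table (item_id INTEGER, item_name TEXT)")
conn.execute("INSERT INTO temp_table VALUES (1, 'Hammer')")  # seed row

target = 100_000
while True:
    n = conn.execute("SELECT COUNT(*) FROM temp_table").fetchone()[0]
    if n >= target:
        break
    # Copy every existing row, shifting item_id past the current row
    # count, so each pass doubles the table: 1, 2, 4, 8, ... rows.
    conn.execute(
        "INSERT INTO temp_table (item_id, item_name) "
        "SELECT item_id + ?, item_name FROM temp_table",
        (n,),
    )

print(n)  # 131072 = 2^17, the first doubling that reaches 100,000
```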

Points of interest

The condition of the WHILE loop is a SELECT COUNT(*), which is re-evaluated on every iteration and takes a long time. A faster approach is to precompute how many doubling iterations are needed to reach the desired number of records: for 100,000 records, 17 iterations suffice, since 2^17 = 131,072 is the first power of two above 100,000. We can therefore rewrite the code to simply stop after the 17th iteration.
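The required number of passes follows directly from the doubling: starting from one seed row, k passes yield 2^k rows, so we need the smallest k with 2^k at or above the target. A small Python helper (the function name is my own) makes the arithmetic explicit:

```python
import math

def doubling_passes(target_rows, seed_rows=1):
    """Smallest k such that seed_rows * 2**k >= target_rows."""
    return math.ceil(math.log2(target_rows / seed_rows))

print(doubling_passes(100_000))      # 17: 2^16 = 65,536 < 100,000 <= 2^17 = 131,072
print(doubling_passes(103_680_000))  # 27 passes for the 103,680,000-record table
```

For the 103,680,000-record table mentioned in the Background, this gives 27 passes, versus over a hundred million individual INSERT statements for the sequential method.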

With the COUNT(*) loop condition, generating the numbers from 1 to 100,000 took 4 seconds; with the fixed iteration count, the code below took two seconds:

CREATE TABLE #tempTable([Item ID] [bigint], [Item Name] nvarchar(30))
INSERT INTO #tempTable VALUES (1, 'Hammer')
DECLARE @counter int
SET @counter = 1
-- 17 doubling passes: 1 seed row grows to 2^17 = 131,072 rows
WHILE (@counter <= 17)
BEGIN
    INSERT INTO #tempTable ([Item ID], [Item Name])
    SELECT [Item ID] + (SELECT COUNT(*) FROM #tempTable), 'Hammer'
    FROM #tempTable
    SET @counter = @counter + 1
END
SELECT * FROM #tempTable
DROP TABLE #tempTable

This technique is not limited to incrementing a number field: the same shifted copy can be applied to datetime fields as well.
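For example, here is a sketch of the same doubling applied to a timestamp column, again using Python's sqlite3 purely for illustration; the events table is hypothetical, and in T-SQL the shift would be done with DATEADD rather than SQLite's datetime() modifier:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_time TEXT)")
conn.execute("INSERT INTO events VALUES ('2009-08-28 00:00:00')")  # seed row

# Each pass copies every row, pushing the copies forward by one minute
# per existing row, so the timestamps stay distinct and evenly spaced.
for _ in range(3):  # 3 passes: 1 row grows to 2^3 = 8 rows
    n = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
    conn.execute(
        "INSERT INTO events "
        "SELECT datetime(event_time, '+' || ? || ' minutes') FROM events",
        (n,),
    )

times = [r[0] for r in conn.execute("SELECT event_time FROM events ORDER BY event_time")]
print(len(times), times[-1])  # 8 rows, one per minute from 00:00:00 to 00:07:00
```

The T-SQL equivalent of the shift would be DATEADD(minute, @n, [event_time]), where @n holds the current row count.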

History

  • This is v1.0.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

About the Author

weilidai2001

United Kingdom

Comments and Discussions

 
Re: Could a variation be used to speed up creation of non-trivial records? (supercat9, 27-Aug-09)

The basic gist of your article is that SQL is often much faster when using a small number of commands to process many records each, than when using many commands to process one record each. When generating simplistic test data, it's possible to arrange things so that the number of commands is proportional to the log of the desired data set size.

My thought was that in many SQL variants, if one wants to add 1,000 records to a table, one has to perform 1,000 single-record insert commands. I was wondering whether it might be faster to generate a temporary or memory table where each record would hold (e.g.) ten records' worth of data, and then use ten commands to move that data into the real table? The net effect would be to use 110 commands to add 1,000 records. Have you ever tried such a thing? The number of commands required would be proportional to the square root of the number of records, as opposed to the logarithm, but that might still be better than linear. On the other hand, some SQL implementations might get unhappy with tables that have too many columns, so there'd probably be some practical limit.

Article Copyright 2009 by weilidai2001