|
I don't have an alternate table.
Let me give you a clear picture.
Let's assume my table is DETAILS with columns ID (primary key, auto-increment) and Name,
so my insert statement is INSERT INTO DETAILS (Name) VALUES ('abcd');
I want the same data 500,000 times.
Do I need to run it in a loop,
or is there another method of adding multiple rows?
|
|
|
|
|
See my Cartesian product answer below.
|
|
|
|
|
I figured he had no data, but wanted 500K records to test with...
|
|
|
|
|
My bad. I thought he needed to insert data from somewhere into his DB.
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
|
|
|
|
|
The easiest way is to use another table to create a Cartesian product (i.e., selecting from two or more tables with nothing linking them produces a Cartesian product: one row for every possible combination).
The following select statement returns over a million rows on my test SQL Server:
select 'a'
from dbo.sysobjects as so1, dbo.sysobjects as so2
Extending this into an insert statement, you would do this:
insert into TABLENAME(FIELD1, FIELD2)
select 'a', 'b'
from dbo.sysobjects as so1, dbo.sysobjects as so2
which will basically insert the values a and b into your table a million or so times... If you have a table that autonumbers, you will then have a million consecutive numbers with which to test.
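If you need exactly 500,000 rows of the same value, here is a rough sketch against the DETAILS table from the question (TOP with parentheses assumes SQL Server 2005 or later, and assumes the cross join yields at least 500,000 rows):
-- cap the cross join at exactly 500,000 rows
insert into DETAILS (Name)
select top (500000) 'abcd'
from dbo.sysobjects as so1, dbo.sysobjects as so2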
Hope this helps.
|
|
|
|
|
Thanks a lot, bro! It's working for me!
|
|
|
|
|
No worries... glad I could help!
|
|
|
|
|
Wow! This is a really good way.
50-50-90 rule: Anytime I have a 50-50 chance of getting something right, there's a 90% probability I'll get it wrong...!!
|
|
|
|
|
If you are using
SQL Server 2008, you can use a
Table-Valued Parameter (TVP).
From your front-end application (assuming .NET), you can send a DataTable to your TVP.
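A minimal sketch of the SQL Server side (the type and procedure names here are hypothetical, reusing the DETAILS table from the question):
-- a table type describes the shape of the rows you will pass in
create type dbo.DetailsType as table (Name varchar(100))
go
-- the procedure receives the whole set of rows in one round trip
create procedure dbo.InsertDetails
    @rows dbo.DetailsType readonly
as
    insert into DETAILS (Name)
    select Name from @rows
go
On the .NET side you would fill a DataTable with matching columns and pass it as a SqlParameter with SqlDbType.Structured and TypeName set to "dbo.DetailsType".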
Niladri Biswas
|
|
|
|
|
Please explain.
I'm sorry, I didn't understand what you said.
Please.
|
|
|
|
|
How can I use the BULK INSERT function with an online stream of data?
Can I simply do:
Bulk Insert datatable
From 'http://www.websites/sample.txt'
or do I need to create a WebClient, wrap it with a reader/writer (assuming "sreader") and do:
Bulk Insert datatable
From 'sreader'
Thanks!
|
|
|
|
|
You may find examples already built using Google, especially for Yahoo stock data.
|
|
|
|
|
I work with both SQL Server Express 2008 and Microsoft Visual Web Developer 2008 Express. However, the two programs don't seem to be "in sync" properly.
This is probably because I set up the database wrong, but I'm not sure. I first added a new database and a table in VWD (Visual Web Developer), which created two files, an MDF and a log file, under the
Documents\VisualWebStudio2008Express\Websites\Website\APP folder.
Next, I tried attaching that database file in SQL Server Management Studio Express. Since the Express version could not access any document saved under My Documents (which I found out later), I copied the two files directly into my C:\ directory, and successfully attached the database in Management Studio by pointing its path to the copy under C:\.
Now here's where all the weird things started happening.
1. The file path of the newly attached database in SQL Server Management Studio shows that it's now pointing to the
Documents\VisualWebStudio2008Express\Websites\Website\APP folder
instead of C:\, which is where I originally pointed it. Hmm, strange...
2. To test it out, I inserted some data into the database via SQL Server, and when I went back to VWD and opened the table, surprisingly the data was updated accordingly.
3. Thinking that I wouldn't need the two files in C:\ anymore, I deleted them. But going back to SQL Server, I found that the database is now gone! Of course, in VWD I can still open the database, because it's pointing to the file under Documents\VisualWebStudio....
This is all very confusing, and I can't figure out what is actually pointing at what.
Could you guys please help me out? I'd greatly appreciate it.
Thanks!!
|
|
|
|
|
You should not delete the MDF and LDF database files. Instead, tell the database to limit the log file size, and shrink the log file if it is already too big.
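For example, a rough sketch (the database name MyDb and the log file's logical name MyDb_log are placeholders; check your actual logical file name with sp_helpfile first):
use MyDb
-- shrink the log file down to roughly 10 MB
dbcc shrinkfile (MyDb_log, 10)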
|
|
|
|
|
Thank you, Ted, though I'm still confused: why does SQL Server Management Studio show the database pointing to the file under the VWD folder rather than the C:\ folder, and why does the database disappear from Management Studio when I delete the files in C:\?
|
|
|
|
|
Hi,
I have a test table in database A with user details. Now I have imported a temp table with user details into database B.
I want to check whether the test and temp tables contain the same set of users, and if not, display the extra users in either table.
The only value the temp and test tables can be matched on is the email ID.
How can I achieve this?
Thank You
Andyyy
|
|
|
|
|
Try something like this:
select emailID, 'Missing from B' from db_A.dbo.table1
where emailID not in (select emailID from db_B.dbo.table2)
UNION
select emailID, 'Missing from A' from db_B.dbo.table2
where emailID not in (select emailID from db_A.dbo.table1)
That should show you the differences.
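As an alternative sketch (assuming SQL Server 2005 or later, and the same hypothetical table names), EXCEPT gives you one direction of the difference directly:
-- email IDs present in A but missing from B
select emailID from db_A.dbo.table1
except
select emailID from db_B.dbo.table2
Swap the two selects to get the rows missing from A.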
|
|
|
|
|
Hi,
When a large transaction is updating thousands of rows in a table, other transactions that want to insert or update data in this table wait until the first transaction is finished.
(Please correct me if my understanding is wrong.)
I'm asking in the context of a web application with multiple functions updating the same table, some of which are "bulk updates" that users sometimes run on a large amount of data and that take a good 30 minutes. In this case, all other users are stuck and cannot use functions that update the same table.
Is there a way to solve this problem (other than reducing the 30-minute runtime)?
Thanks!
Rafferty
|
|
|
|
|
Is it essential for these bulk updates to run as a single transaction? If not, then your solution could be to make these transactions smaller.
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
|
|
|
|
|
Hi Bob,
Thanks for your reply.
Is this the only way? The effort to do this is currently high; the table contains a trigger which would somehow need to be made "smaller" as well.
Rafferty
|
|
|
|
|
It's the only way I could think of.
Basically, any transaction will lock the table against other updates, so the only way is to use small transactions. I would be very reluctant to implement something with a 30-minute transaction anyway - consider the impact of a failure after 29 minutes. The transaction would have to roll back completely before any further updates could be done, and that would probably take close to another 30 minutes.
Bite the bullet and do it right - it will pay off in the long run
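If the bulk update is itself a single UPDATE statement, here is a rough sketch of batching it on the server side (the table and column names are hypothetical, and UPDATE TOP needs SQL Server 2005 or later):
-- update 10,000 rows at a time so each implicit transaction stays
-- short and locks are released between batches
while 1 = 1
begin
    update top (10000) dbo.BigTable
    set SomeColumn = 'new value'
    where SomeColumn <> 'new value'

    if @@rowcount = 0 break
end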
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
|
|
|
|
|
Hi Bob,
I'm trying to do this by implementing the smaller transactions in C#. However, I realize that I have still encapsulated everything between the BeginTransaction() and Commit() calls.
My question is: does this do the trick?
Or is it better to do some sort of "checking" at the beginning (which checks for possible errors) and then do each update in a separate transaction?
This is tricky because the function I'm working on updates a tree of objects (parent-child relationships) which may contain thousands of nodes.
Thanks again for your help.
Rafferty
|
|
|
|
|
It's hard to say without knowing more about your data structure, but if you have a tree of objects to update, then each parent item needs to be its own transaction, so that it all works or all fails and rolls back. Think of it in terms of atomic transactions, i.e. each transaction is a single unit of work, no matter how many tables are impacted.
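In T-SQL terms, a sketch of that per-parent pattern (TRY/CATCH assumes SQL Server 2005 or later; the update itself is a placeholder):
begin try
    begin transaction
    -- update one parent and all of its children here
    commit transaction
end try
begin catch
    if @@trancount > 0 rollback transaction
    -- log or re-raise the error for this one unit of work
end catch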
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
|
|
|
|
|
I see.
Basically, my test data contains a single parent with children, grandchildren, and great-grandchildren (up to the 6th level), totaling 1200 nodes.
Two tables are affected: the main table where the data is contained, and another table which just contains the parent/child ID relationships. Only the main table is updated.
The previous code generated the tree, then traversed it and updated each node using a stored procedure.
I have now moved the tree generation and traversal into C# code and kept a simple stored procedure to update each node's data.
Then I enclosed the entire thing in a single transaction (using the SqlTransaction class).
Any comments on this?
My concern is that, since it is a single transaction, I may still experience the same problem we currently have: while the data is updating, other users who update data in the same SQL table are stuck until it finishes.
Thanks again.
Rafferty
|
|
|
|
|
You may already be doing this or something more sophisticated... Depending on how you are loading the data, and whether you can check constraints/triggers without database access, you may be able to drop the constraints and triggers temporarily and reapply them afterwards. This may be faster, and it also lets you recover from certain errors, since you would log only those rows that you could not load.
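A rough sketch of disabling and re-enabling them around the load (dbo.MyTable is a placeholder; DISABLE TRIGGER ALL needs SQL Server 2005 or later, and "with check check" revalidates the constraints against the loaded data):
alter table dbo.MyTable nocheck constraint all
disable trigger all on dbo.MyTable

-- ... perform the bulk update/load here ...

enable trigger all on dbo.MyTable
alter table dbo.MyTable with check check constraint all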
|
|
|
|