-- DELETE, TRUNCATE and DROP Statements
The DELETE command is used to remove rows from a table.
A WHERE clause can be used to only remove some rows.
If no WHERE condition is specified, all rows will be removed.
After performing a DELETE operation you need to COMMIT or ROLLBACK the transaction to make the change permanent or to undo it.
Note that this operation will cause all DELETE triggers on the table to fire.
TRUNCATE removes all rows from a table.
The operation cannot be rolled back and no triggers will be fired.
As such, TRUNCATE is faster and doesn't use as much undo space as a DELETE.
The DROP command removes a table from the database.
All of the table's rows, indexes and privileges will also be removed.
No DML triggers will be fired. The operation cannot be rolled back.
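A minimal sketch of the three statements; Employees is just a placeholder table name:

DELETE FROM Employees WHERE Department = 'Sales';   -- removes only matching rows, fires DELETE triggers
ROLLBACK;                                            -- undo it (or COMMIT to make it permanent)

TRUNCATE TABLE Employees;                            -- removes every row; no WHERE clause, no triggers

DROP TABLE Employees;                                -- removes the table itself, with its rows, indexes and privileges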
-- Difference between TRUNCATE and DELETE commands
1) TRUNCATE is a DDL command whereas DELETE is a DML command.
2) TRUNCATE is much faster than DELETE.
Reason: when you issue a DELETE, the affected data is first copied into the rollback tablespace and only then deleted. That is why, after deleting from a table, a ROLLBACK can bring the data back (the system restores it from the rollback tablespace), and why the whole process takes time. TRUNCATE removes the data directly, without copying it into the rollback tablespace, which is why it is faster.
That is also why, once you truncate, you cannot get the data back.
3) You cannot roll back a TRUNCATE, but you can roll back a DELETE.
TRUNCATE removes the records permanently.
4) In the case of TRUNCATE, triggers do not fire.
With DML commands like DELETE, triggers do fire.
5) You cannot use conditions (a WHERE clause) with TRUNCATE.
With DELETE you can filter rows using a WHERE clause.
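To make point 5 concrete (Orders is a placeholder name):

DELETE FROM Orders WHERE Status = 'CANCELLED';           -- fine: DELETE accepts a WHERE clause
-- TRUNCATE TABLE Orders WHERE Status = 'CANCELLED';     -- syntax error: TRUNCATE takes no WHERE clause
TRUNCATE TABLE Orders;                                   -- all or nothing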
INSERT INTO Person VALUES
SET Gender = CASE Gender WHEN 'M' THEN 'F' ELSE 'M' END
SELECT *
We're currently implementing a warehouse management system with some customizations. One task is to plan the work in the warehouse by releasing waves of picking activities. The release is handled by a stored procedure. This stored procedure generates data on the amount of work that needs to be done, the number of containers that are required, etc. The planner needs to know this information in order to release the appropriate waves.
So, in order to capture the required data, we have another stored procedure which releases all waves that are not yet released and rolls back the transaction once it's done. This roll-back is a guarantee. The problem is that during this 'simulation' other queries are blocked, which creates a performance problem.
Is there a way to run the 'simulation' without blocking all other queries?
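One option, assuming this is SQL Server (the post does not say which engine), is to turn on read committed snapshot isolation, so that other queries read the last committed version of the rows instead of waiting on the simulation's locks. A minimal sketch; WMS_DB and usp_ReleaseAllWaves are placeholder names:

ALTER DATABASE WMS_DB SET READ_COMMITTED_SNAPSHOT ON;   -- one-time setup; typically needs the database briefly free of other connections

-- The simulation itself can stay as it is: modify, read the figures, then roll back
BEGIN TRANSACTION;
    EXEC usp_ReleaseAllWaves;        -- placeholder for the wave-release procedure
    -- capture the amount of work, container counts, etc. here
ROLLBACK TRANSACTION;                -- the guaranteed roll-back from the original design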
I imported data into MS Access 2013 from SQL and all the data is there. I created a query and also created a report that uses the query, and the query returns all the data. There are around 590 000 records returned by the query that need to be displayed in the report. When I do a Print Preview it gives me
Page 32767 of 12224
So I am afraid to start printing, because from the look of it, it seems like it does not show all the data. My assumption is that Print Preview can't show all the data, even if I try to go to the last page. Can anyone confirm or correct my assumption?
There are around 590 000 records...So i am afraid to start printing
You should be.
Why exactly do you think that you should print something that is probably going to be about 6000 pages (at 100 records per page)?
Who has told you that they are going to read this? Have you suggested to them what 6000 pages represents? Perhaps if you put a box (10 x 500 packages) on their desk and ask them what they planned to do with that then you might start a conversation that would lead to reports that are actually useful.
Other than that, I certainly wouldn't attempt to print that in one go. If it messes up in the middle, you have to start the entire job over. So select 400 pages at a time and print.
You might also tell them you need your own printer, as it is going to tie it up for quite a while.
Thank you for your reply. I would like to reserve my harsh comment for the other reply that I received after yours, but I will politely reply to your question.
Who has told you that they are going to read this?
This is a book called the valuation roll. Everyone comes and reads this book because it tells a person how much they are going to pay in taxes; for sure more than a million people are going to read the book.
Have you suggested to them what 6000 pages represents? Perhaps if you put a box (10 x 500 packages) on their desk and ask them what they planned to do with that then you might start a conversation that would lead to reports that are actually useful.
I have now decided to split it into batches of 30 000 to be on the safe side, and created temp tables with the data in sequence, and it is fine now, thanks. What I learned is that an Access report will never output more than 32 767 pages. With roughly 10 records to a page, 590 000 records divided by 10 comes to 59 000 pages, well over that limit, so that is when I realized where and how to fix the problem.
Your reply was still going to point me in the direction of splitting the data.
Your "harsh reply" may be justified. I should have been somewhat more diplomatic. However you say more than a million people are going to read this tome, astonishing, no really I will be absolutely astounded if a million people read that book.
Some may reference the book and here is where I should have been more sensitive, I automatically assure that referencing such a vast amount of data would only be done electronically, this may not be an option for some!
Still the viability of printing 1000s of pages via Access stikes me as using the wrong tool for the job. Anything lees than a professional printing solution would be the wrong tool!
I am mightily impressed that you did not resort to down voting the response
Requirements in Software Development, or in any industry, might sometimes overcome the basic understanding of what makes sense to you.
That is true to a limited extent.
1. That doesn't mean that the requirements are always right.
2. That doesn't mean that the requirements are always complete.
3. That doesn't mean that the requirements are always understood.
4. That doesn't mean that the requirements are always possible or logical.
A professional developer must be willing and able to recognize when there is a flaw in the requirements and to tell the business people what those flaws are, so that together they can work towards a solution that meets the business needs.
And as a guess, in terms of this posting, I would suppose that #2 above was the problem, since presumably you already knew you needed to produce individual pages in an electronic form and not just one massive single paper report.
At least I am hoping that is what your other post meant.
I have a problem with my application on Windows Server 2008, while it works well on Windows Server 2003.
When I try to use my app on Windows Server 2008 I get the following error message:
Server Error in '/yyy' Application.
ORA-00161: transaction branch length 90 is illegal (maximum allowed 64)
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
Exception Details: System.Data.OracleClient.OracleException: ORA-00161: transaction branch length 90 is illegal (maximum allowed 64)
An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.
When I specify varchar(255), does SQL Server allocate 255 bytes per cell, or does it allocate dynamically? If it statically allocates 255 bytes, then I don't see much practical difference between char(255) and varchar(255) -- well, except that varchar always stores a one (or two) byte termination character internally.
If SQL Server allocates 255 bytes when I specify varchar(255), then what about varchar(MAX)?
I did, but I am not sure if I interpreted MSDN correctly.
char [ ( n ) ]
Fixed-length, non-Unicode string data. n defines the string length and must be a value from 1 through 8,000. The storage size is n bytes. The ISO synonym for char is character.
varchar [ ( n | max ) ]
Variable-length, non-Unicode string data. n defines the string length and can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered + 2 bytes. The ISO synonyms for varchar are char varying or character varying.
varchar(255) --- does this mean SQL Server will always allocate 255 bytes? Or does it depend on the actual values being inserted for each row?
If, say, row 1 has "Name" = 25 bytes and row 2 has "Name" = 225 bytes (and no other row is longer than 225), then...
POSSIBILITY 1: SQL Server always allocates 225 bytes (the actual length of the longest data, taken from row 2)
POSSIBILITY 2: SQL Server always allocates 255 bytes for all rows, as defined in the column definition varchar(255)
POSSIBILITY 3: SQL Server allocates 25 bytes for row 1 and 225 bytes for row 2 (the actual length of the data varies from one row to the next)
My guess is that POSSIBILITY 3 is how SQL Server actually behaves. That said, why do we bother to specify varchar(n) at all? We could always specify varchar(MAX). Two reasons I can think of after a bit of digging (see the sketch after this list):
a. You cannot index anything varchar longer than 900 bytes
b. With MAX, you lose the column maximum-length validation otherwise provided for you at the database level.
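A small sketch of point (a); Customers and the column names are made-up, and the 900-byte limit applies to classic SQL Server index keys:

CREATE TABLE Customers
(
    CustomerId  int IDENTITY PRIMARY KEY,
    Email       varchar(255),      -- can be indexed (well under the 900-byte key limit)
    Notes       varchar(max)       -- cannot be used as a regular index key column
);

CREATE INDEX IX_Customers_Email ON Customers (Email);    -- works
-- CREATE INDEX IX_Customers_Notes ON Customers (Notes); -- fails: varchar(max) is not allowed as an index key column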
Allocation of storage is the primary difference, but it won't affect how you work with the data.
The main difference you'll have to deal with is in trailing spaces -- CHAR stores them, VARCHAR doesn't. Which can occasionally cause confusion, in that 'BOB' and 'BOB ' will test equal and LEN will report the same length; you'll want to use DATALENGTH for VARCHAR -- this sort of thing bit me again this week.
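A quick illustration of the LEN / DATALENGTH point, as a sketch you can run in SQL Server:

DECLARE @v varchar(10) = 'BOB ';   -- note the trailing space
DECLARE @c char(10)    = 'BOB';    -- padded with spaces to 10 characters

SELECT CASE WHEN @v = 'BOB' THEN 'equal' ELSE 'different' END AS Comparison, -- 'equal': comparisons ignore trailing spaces
       LEN(@v)        AS LenV,          -- 3: LEN ignores trailing spaces
       DATALENGTH(@v) AS DataLengthV,   -- 4: DATALENGTH counts them
       DATALENGTH(@c) AS DataLengthC;   -- 10: char is always stored at its full declared length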
I think it's the opposite. CHAR doesn't store a trailing termination char (2 bytes for SQL); varchar does.
I think I found the answer. With VARCHAR(MAX) you can't index the column, and you also don't get the length checks the database would otherwise provide by default; that's why you should always use VARCHAR(n) as opposed to VARCHAR(MAX).
I think it's the opposite. CHAR doesn't store a trailing termination char (2 bytes for SQL); varchar does.
Not sure what that statement means.
The 'char' data type is fixed length. That means that there will ALWAYS be 255 characters. And the way normal APIs to databases handle that is that they pad shorter lengths with spaces. Pretty sure that is an ANSI requirement as well.
that's why you should always use VARCHAR(n) as opposed to VARCHAR(MAX)
Unless of course you know that you will be storing data larger than the maximum length allowed for varchar(n).
Use CHAR(n) when the length is the same for all or most rows.
In normal database design most columns will be varchar because
1. User data is almost always variable
2. Application data that is text almost always has a variable length.
I have a procedure that checks to see if a user-added address exists in a remote database. If the address exists, the procedure retrieves the id and updates the local DB with that id. If it does not exist, the procedure adds the address to the remote database, retrieves the new id, and updates the local database with the new id. Unfortunately, without any apparent pattern, sometimes several addresses are lumped together into one single new address id. Can someone give me a second pair of eyes on this procedure to see where it might be doing this? I am also open to improvements.
GO
/****** Object: StoredProcedure [dbo].[usp_resolveAddressProblems] Script Date: 02/01/2013 09:51:58 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET NOCOUNT ON
/****** Object: StoredProcedure [dbo].[usp_resolveAddressProblems] Script Date: 10/06/2009 12:41:21 ******/
ALTER proc [dbo].[usp_resolveAddressProblems] (@servername varchar(100)) as
declare @query nvarchar(650), @myAddId int, @myAddress varchar(100)
declare @newId int, @cursorString varchar(2000), @intCaseId int, @remoteId int = 0
set @cursorString = 'declare myCursor cursor for select intAddressId, txtAddress, intCaseId from tbl_cdf_address where bitUserAdded = 1'
exec(@cursorString)
fetch next from myCursor into @myAddId, @myAddress, @intCaseId
while @@FETCH_STATUS = 0 begin
    set @remoteId = 0
    set @query = N'select @remoteId = intAddressId from "' + @servername + '".depo.dbo.tbl_cdf_address where txtAddress = ''' + @myAddress + ''' and intCaseId = ' + CAST(@intCaseId as varchar(10))
    exec sp_executesql @query, N'@remoteId int output', @remoteId output
    if @remoteId != 0 begin
        set @query = N'update tbl_cdf_observation set intAddressId = ' + cast(@remoteId as varchar(10)) + ' where intAddressId = ' + CAST(@myAddId as varchar(10)) + ' and intCaseId = ' + CAST(@intCaseId as varchar(10))
        set @query = N'update tbl_cdf_inspectionNotes set intAddressId = ' + cast(@remoteId as varchar(10)) + ' where intAddressId = ' + CAST(@myAddId as varchar(10)) + ' and intCaseId = ' + CAST(@intCaseId as varchar(10))
        print 'Updated ' + @myAddress
    end
    else begin
        set @query = N'insert "' + @servername + '".depo.dbo.tbl_cdf_address (intCaseId, txtAddress, bitActive) values (' + CAST(@intCaseId as varchar(50)) + ',''' + @myAddress + ''',1)'
        exec(@query)
        print 'Added ' + @myAddress + ' to database.'
        goto get_address
    end
    fetch next from myCursor into @myAddId, @myAddress, @intCaseId
set @query = N'delete tbl_cdf_address'
exec(@query)
I have a field in the database that has a credit card number in it. Bottom line, how can we encrypt this information using SQL Server? I only need to encrypt it for a brief period of time, like 15 minutes, before the table is truncated.
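One possible approach, offered as a sketch rather than a recommendation for handling card data, is SQL Server's ENCRYPTBYPASSPHRASE / DECRYPTBYPASSPHRASE functions. The table name and the CardNumberEnc varbinary column are assumptions:

-- Store the card number encrypted with a passphrase, then clear the plaintext column
UPDATE dbo.Payments
SET CardNumberEnc = ENCRYPTBYPASSPHRASE('some secret passphrase', CardNumber),
    CardNumber    = NULL;

-- Read it back during the 15-minute window
SELECT CONVERT(varchar(25), DECRYPTBYPASSPHRASE('some secret passphrase', CardNumberEnc)) AS CardNumber
FROM dbo.Payments;

-- Afterwards the table is truncated, as described above
TRUNCATE TABLE dbo.Payments;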
Is there a way to check, before an insert, if that particular order already exists in the table?
Yes; the primary key. That's the one that uniquely identifies a tuple/record. Hence, that's what you'd need to check. Most databases will do this automatically and throw an error if the record already exists.
If the process fails in the middle and I want to start it again, and there are already some rows in the table, I don't want to insert those again.
Select a list of all primary key-values in the table, and skip those inserts.
Dude, if he is inserting from a file, then there is no primary key until it is inserted
Aah, good point. We don't show the Autoincrement-value to the user, so the user is using a combination of fields to uniquely identify a record. That used to be the primary-key, until we switched to artificial autoincrement-keys.
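A minimal sketch of that check on the SQL side; Orders and OrderNumber are made-up names, and the "combination of fields" would be whatever uniquely identifies an order in your schema:

DECLARE @OrderNumber varchar(20) = 'ORD-1001', @CustomerId int = 42, @OrderDate date = '2013-02-01';

-- Insert only if a row with the same business key is not already there
INSERT INTO dbo.Orders (OrderNumber, CustomerId, OrderDate)
SELECT @OrderNumber, @CustomerId, @OrderDate
WHERE NOT EXISTS (SELECT 1 FROM dbo.Orders WHERE OrderNumber = @OrderNumber);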
You're reading the file on a line-by-line basis? You don't want it in memory completely, because it'd have to be restarted completely if the process dies halfway. It'd be an option to write the "current number of processed records" to another file. If it crashes, read that file and see how many lines you can safely skip.
A transaction (as said below) is indeed the best idea.
Also, it'd be wise to load the file into a separate table first, and move it from there to the required structure.
Wrap the process in a transaction. If it fails, it will get rolled back and the table will remain empty. At that point you can figure out what went wrong, take any remedial action and run the process again.
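As a sketch of that pattern on the SQL Server side (the table and insert are placeholders; the same idea applies if the loop lives in C# with a single transaction around it):

BEGIN TRY
    BEGIN TRANSACTION;

    -- ... the per-line inserts go here, e.g.:
    -- INSERT INTO dbo.Orders (OrderNumber, CustomerId) VALUES (@OrderNumber, @CustomerId);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;   -- nothing is left behind, so the whole run can simply be repeated
    THROW;                      -- re-raise the original error (SQL Server 2012+; use RAISERROR on older versions)
END CATCH;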
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair. nils illegitimus carborundum
I was planning to put BEGIN TRANSACTION and COMMIT TRANSACTION around it, but what I am doing is reading from a file, inserting into the database, reading the next line and inserting into the database again. I have the insert statement in the C# page.