When I specify varchar(255), does SQL Server allocate 255 bytes per cell, or does it allocate dynamically? If it statically allocates 255 bytes, then I don't see much practical difference between char(255) and varchar(255) -- well, except that varchar always stores a two-byte length overhead internally.
If SQL Server allocates 255 bytes when I specify varchar(255), then what about varchar(MAX)?
I did, but I'm not sure if I interpreted MSDN correctly. char [ ( n ) ]
Fixed-length, non-Unicode string data. n defines the string length and must be a value from 1 through 8,000. The storage size is n bytes. The ISO synonym for char is character.
varchar [ ( n | max ) ]
Variable-length, non-Unicode string data. n defines the string length and can be a value from 1 through 8,000. max indicates that the maximum storage size is 2^31-1 bytes (2 GB). The storage size is the actual length of the data entered + 2 bytes. The ISO synonyms for varchar are char varying or character varying.
varchar(255) --- does this mean SQL Server will always allocate 255 bytes? Or does it depend on the actual value being inserted for a particular row?
If say row 1, "Name" = 25 bytes and row 2, "Name" = 225 bytes (say no other row longer than 225), then...
POSSIBILITY 1: SQL Server always allocates 225 bytes (the actual length of the longest data, taken from row 2)
POSSIBILITY 2: SQL Server always allocates 255 bytes for all rows, as defined in the column definition varchar(255)
POSSIBILITY 3: SQL Server allocates 25 bytes for row 1 and 225 bytes for row 2 (the actual length of the data varies from one row to the next)
My guess is that POSSIBILITY 3 is how SQL Server actually behaves. That said, why do we bother specifying varchar(n)? We should always specify varchar(MAX). Two reasons I can think of after a bit of digging...
a. You cannot index any varchar column longer than 900 bytes
b. With MAX, you lose the column max-length validation provided for you at the database level.
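Point (a) is easy to see directly; a minimal illustration (the table and index names here are hypothetical):

```sql
CREATE TABLE dbo.Demo
(
    Id   int IDENTITY PRIMARY KEY,
    Name varchar(max)
);

-- Fails: a varchar(max) column cannot be a key column in an index
CREATE INDEX IX_Demo_Name ON dbo.Demo (Name);

-- Works: varchar(500) fits within the 900-byte index key limit
ALTER TABLE dbo.Demo ALTER COLUMN Name varchar(500);
CREATE INDEX IX_Demo_Name ON dbo.Demo (Name);
```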
Allocation of storage is the primary difference, but it won't affect how you work with the data.
The main difference you'll have to deal with is in trailing spaces -- CHAR stores them (padding to the full length), VARCHAR doesn't. This can occasionally cause confusion, in that 'BOB' and 'BOB ' will test equal and LEN will report the same length; you'll want to use DATALENGTH for VARCHAR -- this sort of thing bit me again this week.
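The trailing-space behaviour is easy to demonstrate; a minimal sketch:

```sql
DECLARE @c char(10)    = 'BOB';
DECLARE @v varchar(10) = 'BOB ';

SELECT LEN(@c)        AS len_char,       -- 3: LEN ignores trailing spaces
       DATALENGTH(@c) AS bytes_char,     -- 10: char is padded to the full length
       LEN(@v)        AS len_varchar,    -- 3: LEN ignores trailing spaces here too
       DATALENGTH(@v) AS bytes_varchar;  -- 4: varchar keeps the stored trailing space

IF 'BOB' = 'BOB ' PRINT 'equal';  -- string comparison ignores trailing spaces
```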
I think it's the opposite. CHAR doesn't store a trailing termination char (2 bytes for SQL); varchar does.
I think I found the answer. With VARCHAR(MAX), you can't index the column, and you also don't get the length checks provided by default by the database; that's why you should always use VARCHAR(n) as opposed to VARCHAR(MAX).
I think it's the opposite. CHAR doesn't store a trailing termination char (2 bytes for SQL); varchar does
Not sure what that statement means.
The 'char' data type is fixed length. That means that there will ALWAYS be 255 characters in a char(255) column, and the way normal database APIs handle that is to pad shorter values with spaces. I'm pretty sure that is an ANSI requirement as well.
that's why you should always use VARCHAR(n) as opposed to VARCHAR(MAX)
Unless of course you know that you will be storing data larger than the largest integer value allowed for varchar(n).
Use CHAR(n) when the length is the same for all/most rows.
In normal database design most columns will be varchar because
1. User data is almost always variable
2. Application data that is text almost always has a variable length.
I have a procedure that checks to see if a user-added address exists in a remote database. If the address exists, the procedure retrieves the id and updates the local DB with that id. If it does not exist, the procedure adds the address to the remote database, retrieves the new id, and updates the local database with the new id. Unfortunately, sometimes several addresses are lumped together into one single new address id, without any apparent pattern. Can someone give me a second pair of eyes on this procedure to see where it might be doing this? I am also open to improvements.
GO
/****** Object: StoredProcedure [dbo].[usp_resolveAddressProblems] Script Date: 02/01/2013 09:51:58 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
/****** Object: StoredProcedure [dbo].[usp_resolveAddressProblems] Script Date: 10/06/2009 12:41:21 ******/
ALTER proc [dbo].[usp_resolveAddressProblems] (@servername varchar(100))
as
begin
    SET NOCOUNT ON

    declare @intCaseId int
    declare @remoteId int = 0
    declare @myAddId int
    declare @myAddress varchar(255)
    declare @cursorString nvarchar(max)
    declare @query nvarchar(max)

    set @cursorString = 'declare myCursor cursor for select intAddressId, txtAddress, intCaseId from tbl_cdf_address where bitUserAdded = 1'
    exec(@cursorString)

    fetch next from myCursor into @myAddId, @myAddress, @intCaseId
    while @@FETCH_STATUS = 0
    begin
get_address:
        set @remoteId = 0
        set @query = N'select @remoteId = intAddressId from "' + @servername + '".depo.dbo.tbl_cdf_address where txtAddress = ''' + @myAddress + ''' and intCaseId = ' + CAST(@intCaseId as varchar(10))
        exec sp_executesql @query, N'@remoteId int output', @remoteId output

        if @remoteId != 0
        begin
            set @query = N'update tbl_cdf_observation set intAddressId = ' + cast(@remoteId as varchar(10)) + ' where intAddressId = ' + CAST(@myAddId as varchar(10)) + ' and intCaseId = ' + CAST(@intCaseId as varchar(10))
            exec(@query)
            set @query = N'update tbl_cdf_inspectionNotes set intAddressId = ' + cast(@remoteId as varchar(10)) + ' where intAddressId = ' + CAST(@myAddId as varchar(10)) + ' and intCaseId = ' + CAST(@intCaseId as varchar(10))
            exec(@query)
            print 'Updated ' + @myAddress
        end
        else
        begin
            set @query = N'insert "' + @servername + '".depo.dbo.tbl_cdf_address (intCaseId, txtAddress, bitActive) values (' + CAST(@intCaseId as varchar(50)) + ',''' + @myAddress + ''',1)'
            exec(@query)
            print 'Added ' + @myAddress + ' to database.'
            goto get_address
        end
        fetch next from myCursor into @myAddId, @myAddress, @intCaseId
    end
    close myCursor
    deallocate myCursor

    set @query = N'delete tbl_cdf_address'
    exec(@query)
end
I have a field in the database that has a credit card number in it. Bottom line, how can we encrypt this information using SQL Server? I need to encrypt it for a brief period of time, like 15 minutes, then the table is truncated.
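One option for a short-lived case like this is SQL Server's built-in ENCRYPTBYPASSPHRASE/DECRYPTBYPASSPHRASE functions. A minimal sketch (the table and column names are hypothetical, and for anything longer-lived a symmetric key protected by a certificate would be more robust than a passphrase):

```sql
-- Encrypt the card number into a varbinary column and clear the plaintext
UPDATE dbo.tbl_orders
SET CardNumberEnc = ENCRYPTBYPASSPHRASE('some secret passphrase', CardNumber),
    CardNumber = NULL;

-- Decrypt while the data is needed; DECRYPTBYPASSPHRASE returns varbinary,
-- so cast back to the original type
SELECT CAST(DECRYPTBYPASSPHRASE('some secret passphrase', CardNumberEnc) AS varchar(25)) AS CardNumber
FROM dbo.tbl_orders;

-- When the 15 minutes are up
TRUNCATE TABLE dbo.tbl_orders;
```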
Is there a way to check, before an insert if that particular order already exists in the table?
Yes; the primary key. That's the one that uniquely identifies a tuple/record. Hence, that's what you'd need to check. Most databases will do this automatically and throw an error if the record already exists.
If the process fails in the middle and I want to start it again, and there are already some rows in the table, I don't want to insert those again.
Select a list of all primary key-values in the table, and skip those inserts.
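In T-SQL the skip can even be folded into the insert itself; a minimal sketch, assuming the file has been loaded into a staging table and that OrderId identifies a record (all names here are hypothetical):

```sql
-- Insert only the staged rows whose key is not already in the target table
INSERT INTO dbo.Orders (OrderId, Amount)
SELECT s.OrderId, s.Amount
FROM dbo.OrdersStaging AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Orders AS o
                  WHERE o.OrderId = s.OrderId);
```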
Dude, if he is inserting from a file, then there is no primary key until it is inserted
Aah, good point. We don't show the autoincrement value to the user, so the user uses a combination of fields to uniquely identify a record. That used to be the primary key, until we switched to artificial autoincrement keys.
You're reading the file on a line-by-line basis? You don't want it in memory completely, because it'd have to be restarted completely if the process dies halfway. It'd be an option to write the current count of processed records to another file; if the process crashes, read that file and see how many lines you can safely skip.
A transaction (as said below) is indeed the best idea
Also, it'd be wise to load the file in a separate table first, and move it from there to the required structure.
Wrap the process in a transaction. If it fails, it will get rolled back and the table will remain empty. At that point you can figure out what went wrong, take any remedial action and run the process again.
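In T-SQL that looks like the following; a minimal sketch (table and column names are hypothetical, and error handling is kept deliberately simple):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- all of the inserts from the file go here
    INSERT INTO dbo.Orders (OrderId, Amount) VALUES (1, 10.00);
    INSERT INTO dbo.Orders (OrderId, Amount) VALUES (2, 20.00);

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- any failure rolls back every insert, leaving the table as it was
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;
```

Since the inserts are issued from C#, the same effect can be had client-side by sharing one SqlTransaction across all the insert commands.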
I was planning to put BEGIN TRANSACTION and COMMIT TRANSACTION, but what I am doing is reading from a file, inserting into the database, reading the next line and inserting into database. I have the insert statement in the C# page.
I've an existing DB called A which contains 410 tables. I created another DB called B from a backup of A and modified database B: creating some new tables, modifying existing tables, dropping some tables, and so on. Now database B contains 548 tables. In the meantime, database A has also undergone some changes. Say, for example, in database A some rows have been inserted into tableA, and in database B some rows in tableA have been deleted/modified.
I would like to know the data changes between the two databases.
My objective is that, in order to sync database A with database B, I need to create some scripts. To identify the schema changes I've a tool; no issue at that point. But I worry about DML changes. How do I create DML scripts (insert, update, delete queries) that bring database A in sync with database B? How to achieve this?
Please do guide me.
Note: I'm using SQL SERVER 2008 R2.
Please note that I've downloaded Red Gate's data compare tool, which shows the differences but did not provide the script to sync them.
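For a single table, the row differences can be listed with EXCEPT; a minimal sketch, assuming both databases sit on the same server and tableA has the same columns in both (names here are hypothetical):

```sql
-- Rows present in B.tableA but not in A.tableA (candidates for INSERT/UPDATE in A)
SELECT * FROM B.dbo.tableA
EXCEPT
SELECT * FROM A.dbo.tableA;

-- Rows present in A.tableA but not in B.tableA (candidates for DELETE/UPDATE in A)
SELECT * FROM A.dbo.tableA
EXCEPT
SELECT * FROM B.dbo.tableA;
```

Turning each pair of result sets into a MERGE statement keyed on the table's primary key is one way to build the actual sync script.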
We regularly use Red Gate's data compare and it does supply the scripts to merge the data. It tends to have trouble with extensive data changes, but for fairly small changes it does an excellent job. It allows you to save the scripts.
Basically, from Table1(COL1, COL2) I want to copy data to Table2(Col1, Col2). While inserting, I want to check whether COL2 contains any specification ((QL), (ST), (PA)); accordingly, the respective _FLG column should be updated with 1. If there is no speciality (example: 5th row), all the _FLG columns should be updated with 0.
Please let me know if there are any related posts.
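This can be done in a single INSERT...SELECT with CASE expressions; a minimal sketch, where the flag column names (QL_FLG, ST_FLG, PA_FLG) are assumptions about the Table2 schema:

```sql
-- Parentheses are literal characters in a T-SQL LIKE pattern, so '%(QL)%'
-- matches any value containing the text (QL)
INSERT INTO Table2 (Col1, Col2, QL_FLG, ST_FLG, PA_FLG)
SELECT COL1,
       COL2,
       CASE WHEN COL2 LIKE '%(QL)%' THEN 1 ELSE 0 END,
       CASE WHEN COL2 LIKE '%(ST)%' THEN 1 ELSE 0 END,
       CASE WHEN COL2 LIKE '%(PA)%' THEN 1 ELSE 0 END
FROM Table1;
```

A row with none of the markers gets 0 in every _FLG column automatically, which covers the 5th-row case.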