|
devvvy wrote: I think it's the opposite. CHAR don't store trailing termination char (2 bytes for SQL), varchar does
Not sure what that statement means.
The 'char' data type is fixed length. That means a char(255) column will ALWAYS hold 255 characters, and the way normal database APIs handle that is by padding shorter values with trailing spaces. Pretty sure that is an ANSI requirement as well.
devvvy wrote: that's why you should always use VARCHAR(n) as supposed to VARCHAR(MAX)
Unless of course you know that you will be storing data larger than the 8,000-byte limit of varchar(n).
devvvy wrote: CHAR(n) when lenght same for all/most rows.
In normal database design most columns will be varchar because
1. User data is almost always variable
2. Application data that is text almost always has a variable length.
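The padding difference is easy to see for yourself; a quick sketch (variable names made up for the demo):

```sql
-- char pads to its declared length; varchar stores only what you give it.
DECLARE @c char(10) = 'abc';
DECLARE @v varchar(10) = 'abc';
SELECT DATALENGTH(@c) AS char_bytes,    -- 10: padded with trailing spaces
       DATALENGTH(@v) AS varchar_bytes; -- 3: just the characters stored
```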
|
|
|
|
|
I have a procedure that checks whether a user-added address exists in a remote database. If the address exists, the procedure retrieves the id and updates the local DB with that id. If it does not exist, the procedure adds the address to the remote database, retrieves the new id, and updates the local database with it. Unfortunately, and without any apparent pattern, several addresses sometimes get lumped together under one single new address id. Can someone give me a second pair of eyes on this procedure to see where it might be doing this? I am also open to improvements.
USE [RemoteCapture]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET NOCOUNT ON
ALTER proc [dbo].[usp_resolveAddressProblems] (@servername varchar(100))
as
declare @query nvarchar(650)
declare @myAddId int
declare @myAddress varchar(100)
declare @newId int
declare @cursorString varchar(2000)
declare @intCaseId int
declare @remoteId int = 0

set @cursorString = 'declare myCursor cursor for select intAddressId, txtAddress, intCaseId from tbl_cdf_address where bitUserAdded = 1'
exec(@cursorString)
open myCursor
fetch next from myCursor into @myAddId, @myAddress, @intCaseId
while @@FETCH_STATUS = 0
begin
    get_address:
    set @remoteId = 0
    set @query = N'select @remoteId = intAddressId from "' + @servername + '".depo.dbo.tbl_cdf_address where txtAddress = ''' + @myAddress + ''' and intCaseId = ' + CAST(@intCaseId as varchar(10))
    exec sp_executesql @query, N'@remoteId int output', @remoteId output
    if @remoteId != 0
    begin
        set @query = N'update tbl_cdf_observation set intAddressId = ' + cast(@remoteId as varchar(10)) + ' where intAddressId = ' + CAST(@myAddId as varchar(10)) + ' and intCaseId = ' + CAST(@intCaseId as varchar(10))
        exec(@query)
        set @query = N'update tbl_cdf_inspectionNotes set intAddressId = ' + cast(@remoteId as varchar(10)) + ' where intAddressId = ' + CAST(@myAddId as varchar(10)) + ' and intCaseId = ' + CAST(@intCaseId as varchar(10))
        exec(@query)
        print 'Updated ' + @myAddress
    end
    else
    begin
        set @query = N'insert "' + @servername + '".depo.dbo.tbl_cdf_address (intCaseId, txtAddress, bitActive) values (' + CAST(@intCaseId as varchar(50)) + ',''' + @myAddress + ''',1)'
        exec(@query)
        print 'Added ' + @myAddress + ' to database.'
        goto get_address
    end
    fetch next from myCursor into @myAddId, @myAddress, @intCaseId
end
close myCursor
deallocate myCursor
set @query = N'delete tbl_cdf_address'
exec(@query)
return
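One improvement while I'm at it: the address value can be passed to sp_executesql as a parameter instead of being concatenated into the string, so quotes in an address can't break the dynamic SQL. A sketch of just the lookup step (the linked-server name still has to be concatenated, since object names can't be parameterized):

```sql
-- Parameterized remote lookup: only the server name is built into the string.
DECLARE @query nvarchar(max);
SET @query = N'select @remoteId = intAddressId from ' + QUOTENAME(@servername)
           + N'.depo.dbo.tbl_cdf_address'
           + N' where txtAddress = @addr and intCaseId = @caseId';
EXEC sp_executesql @query,
     N'@remoteId int output, @addr varchar(100), @caseId int',
     @remoteId output, @myAddress, @intCaseId;
```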
Cheers, --EA
|
|
|
|
|
Hi,
I have a field in the database that holds a credit card number. Bottom line: how can we encrypt this information using SQL Server? I only need it encrypted for a brief period, around 15 minutes, before the table is truncated.
I am using SQL Server 2005 from a C# program.
|
|
|
|
|
This[^] topic describes how to encrypt a column of data by using symmetric encryption in SQL Server 2012 using Transact-SQL.
--edit
There's a dropdown at the top of the page that brings you to the same topic for Sql Server 2005[^].
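In outline, the approach from that topic looks like the sketch below (the key, certificate, table and column names are made up for the example; the encrypted column must be varbinary):

```sql
-- One-time setup: database master key, certificate, symmetric key.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongP@ssw0rd!';
CREATE CERTIFICATE CardCert WITH SUBJECT = 'Card number protection';
CREATE SYMMETRIC KEY CardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CardCert;

-- Encrypt on write, decrypt on read; the key must be open for both.
OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;
UPDATE dbo.Payments
   SET CardNumberEnc = EncryptByKey(Key_GUID('CardKey'), CardNumber);
SELECT CONVERT(varchar(25), DecryptByKey(CardNumberEnc)) AS CardNumber
  FROM dbo.Payments;
CLOSE SYMMETRIC KEY CardKey;
```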
|
|
|
|
|
Hi,
This is more about error-proofing the insert flow. Is there a way to check, before an insert, whether that particular order already exists in the table?
We have a job that runs every hour, and each run starts with an empty table.
If the process fails in the middle and I want to start it again while there are already some rows in the table, I don't want to insert those rows again.
|
|
|
|
|
vanikanc wrote: Is there a way to check, before an insert if that particular order already exists in the table?
Yes; the primary key. That's the one that uniquely identifies a tuple/record, so that's what you'd need to check. Most databases will do this automatically and throw an error if the record already exists.
vanikanc wrote: if the process fails in the middle, and i want to start it again, if there are already some rows in the table, not to insert those.
Select a list of all primary key-values in the table, and skip those inserts.
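For example, assuming a hypothetical Orders table keyed on OrderId, the insert can guard itself in one statement:

```sql
-- Silently skips the row if the key is already present.
INSERT INTO dbo.Orders (OrderId, CustomerId, Amount)
SELECT @OrderId, @CustomerId, @Amount
WHERE NOT EXISTS (SELECT 1 FROM dbo.Orders WHERE OrderId = @OrderId);
```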
|
|
|
|
|
Dude, if he is inserting from a file, then there is no primary key until it is inserted
--
Aah, good point. We don't show the autoincrement value to the user, so the user uses a combination of fields to uniquely identify a record. That combination used to be the primary key, until we switched to artificial autoincrement keys.
Are you reading the file line by line? You wouldn't want it in memory completely, because the whole run would have to be restarted if the process dies halfway. One option is to write the current count of processed records to another file; if the job crashes, read that file and see how many lines you can safely skip.
A transaction (as said below) is indeed the best idea.
Also, it would be wise to load the file into a separate staging table first, and move it from there into the required structure.
|
|
|
|
|
Wrap the process in a transaction. If it fails, it will get rolled back and the table will remain empty. At that point you can figure out what went wrong, take any remedial action and run the process again.
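A minimal T-SQL shape for that (TRY...CATCH requires SQL Server 2005 or later; the inserts are a placeholder):

```sql
BEGIN TRY
    BEGIN TRANSACTION;
    -- ... all the hourly inserts go here ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- table is left empty, ready for a re-run
    -- Re-raise so the calling job or C# code sees the failure.
    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;
```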
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
nils illegitimus carborundum
me, me, me
|
|
|
|
|
I was planning to put BEGIN TRANSACTION and COMMIT TRANSACTION around it, but what I am doing is reading a line from a file, inserting it into the database, then reading the next line and inserting that. The insert statement is in the C# page.
|
|
|
|
|
|
Really good answer, Mark. This will stop the problem from occurring in the first place.
Chris Meech
I am Canadian. [heard in a local bar]
In theory there is no difference between theory and practice. In practice there is. [Yogi Berra]
posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]
|
|
|
|
|
Thanks.
"If you think it's expensive to hire a professional to do the job, wait until you hire an amateur." Red Adair.
nils illegitimus carborundum
me, me, me
|
|
|
|
|
If you are using SQL Server 2008, look at the MERGE command: it will update or insert as appropriate.
As to the failure, as stated, look at transactions or possibly TRY...CATCH.
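A sketch of the MERGE upsert shape, with made-up table and column names:

```sql
-- Updates the row if the key exists, inserts it otherwise.
MERGE dbo.Orders AS target
USING (SELECT @OrderId AS OrderId, @Amount AS Amount) AS src
   ON target.OrderId = src.OrderId
WHEN MATCHED THEN
    UPDATE SET Amount = src.Amount
WHEN NOT MATCHED THEN
    INSERT (OrderId, Amount) VALUES (src.OrderId, src.Amount);
```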
|
|
|
|
|
Thank you.
I am using BEGIN TRANSACTION with COMMIT, and ROLLBACK in the catch block.
The only thing is, you have to attach the transaction to the SqlCommand object.
|
|
|
|
|
You can use BEGIN TRAN or TRY...CATCH within the T-SQL. If you raise an error on failure, C# can then rerun the data if that is what you want.
|
|
|
|
|
I have an existing database called A which contains 410 tables. I created another database, B, from a backup of A and modified B: creating some new tables, modifying existing tables, dropping some tables, and so on. Database B now contains 548 tables. In the meantime, database A has also undergone some changes. For example, in database A some rows have been inserted into tableA, while in database B some rows in tableA have been deleted or modified.
I would like to know the data changes between the two databases.
My objective is to create scripts that sync database A with database B. I have a tool to identify the schema changes, so no issue at that point. But I worry about the DML changes: how do I create DML scripts (insert, update, and delete queries) that bring database A in sync with database B? How can I achieve this?
Please do guide me.
Note: I'm using SQL Server 2008 R2.
Please note that I've downloaded Red Gate's data compare tool, which shows the differences but did not provide a script to sync them.
|
|
|
|
|
We regularly use Red Gate's data compare and it does supply the scripts to merge the data. It tends to struggle with extensive data changes, but for fairly small changes it does an excellent job, and it allows you to save the scripts.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
In Red Gate's data compare, please let me know how to get the scripts to merge the data.
|
|
|
|
|
Hi,
Could someone help me with loading TABLE2 the way it is shown below?
SELECT COL1, COL2 FROM TABLE1
COL1 COL2
1 Tier 1 (QL) (ST) (PA)
2 Tier 2 (QL) (ST)
3 Tier 1 (ST)
4 Tier 1 (ST) (PA)
5 Tier 1
-----------------------------------------------------
INSERT INTO TABLE2 (COL1,COL2,QL_FLG,ST_FLG,PA_FLG)
SELECT COL1,COL2, , , FROM TABLE1
------------------------------------------------------
SELECT * FROM TABLE2
COL1 COL2 QL_FLG ST_FLG PA_FLG
1 Tier 1 (QL) (ST) (PA) 1 1 1
2 Tier 2 (QL) (ST) 1 1 0
3 Tier 1 (ST) 0 1 0
4 Tier 1 (ST) (PA) 0 1 1
5 Tier 1 0 0 0
Basically, from Table1 (COL1, COL2) I want to copy the data to Table2 (COL1, COL2). While inserting, I want to check whether COL2 contains any of the specifications ((QL) (ST) (PA)); accordingly, the respective _FLG column should be set to 1. If there is no specification (example: 5th row), all the _FLG columns should be set to 0.
Please let me know if there are any related posts.
Thanks
|
|
|
|
|
Looking at your example it is not possible to identify the col1/col2 content. Assuming the first number is not in col1, I would break it into 2 queries:
The first query should group by col1 and count the instances of the different specifications.
The second query should build the string based on that information.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi, Thank you so much for your reply.
Please find the table information below, with each row's columns separated by a pipe delimiter (|).
As for grouping by COL1 as you suggested: since the COL1 values are unique, I think it will give the same data even after grouping.
SELECT COL1, COL2 FROM TABLE1
COL1|COL2
1| Tier 1 (QL) (ST) (PA)
2| Tier 2 (QL) (ST)
3| Tier 1 (ST)
4| Tier 1 (ST) (PA)
5| Tier 1
-----------------------------------------------------
INSERT INTO TABLE2 (COL1,COL2,QL_FLG,ST_FLG,PA_FLG)
SELECT COL1,COL2, , , FROM TABLE1
------------------------------------------------------
SELECT * FROM TABLE2
COL1| COL2| QL_FLG| ST_FLG| PA_FLG
1| Tier 1 (QL) (ST) (PA)| 1 |1| 1
2| Tier 2 (QL) (ST)| 1 |1| 0
3| Tier 1 (ST)| 0 |1 |0
4| Tier 1 (ST) (PA)| 0| 1| 1
5| Tier 1 |0 |0 |0
|
|
|
|
|
OK, so this is not structured data, it is rubbish; you are going to have to parse col2 of table1 into proper data. Then create a normalised table where you have 3 records for 1|Tier 1.
After that it is a positional lookup or possibly a pivot.
OR
You can write a cursor-based script to process each record into the desired table2 format by parsing col2 of table1.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi, thank you for the reply.
Actually the data is pretty much structured... Since I cannot show the data here with proper horizontal and vertical bars, I used a pipe delimiter so that each column's data is separated from the others.
My source table has around 500-600K rows, so using a cursor will be costly from a performance point of view.
Could you please suggest any other alternative?
|
|
|
|
|
Hi, this is how it worked for me:
INSERT INTO TABLE2 (COL1,
COL2,
QL_FLG,
ST_FLG,
PA_FLG)
(SELECT COL1,
COL2,
DECODE(INSTR(COL2,'QL'),0,0,1) QL_FLG,
DECODE(INSTR(COL2,'ST'),0,0,1) ST_FLG,
DECODE(INSTR(COL2,'PA'),0,0,1) PA_FLG
FROM TABLE1);
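Note that DECODE and INSTR are Oracle functions; since the thread is about SQL Server, the equivalent set-based insert in T-SQL would use CASE and CHARINDEX, e.g.:

```sql
-- Same idea in T-SQL: flag 1 if the marker appears anywhere in COL2, else 0.
INSERT INTO TABLE2 (COL1, COL2, QL_FLG, ST_FLG, PA_FLG)
SELECT COL1,
       COL2,
       CASE WHEN CHARINDEX('(QL)', COL2) > 0 THEN 1 ELSE 0 END,
       CASE WHEN CHARINDEX('(ST)', COL2) > 0 THEN 1 ELSE 0 END,
       CASE WHEN CHARINDEX('(PA)', COL2) > 0 THEN 1 ELSE 0 END
FROM TABLE1;
```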
Thank you so much for your help.
|
|
|
|
|
Can anyone give me an example of complex data binding with an Access database and its tables (binding a combo box to Access database tables)?
|
|
|
|
|