|
In your version, if the second insert fails, an orphaned title has been added without its associated author.
|
|
|
|
|
You are right, I just figured it out
Thank you
|
|
|
|
|
I guess I found my answer: it's useful when the first statement succeeds and the second one doesn't. At that point we need to roll back the first, successful statement.
|
|
|
|
|
There are several considerations in both examples. Some of them are:
1. Transaction usage and scope: in the first example the transaction is started and ended inside the stored proc. Personally I don't feel this is a good way to handle transactions. Consider what would happen if the real-world situation involves calls to two stored procs. If both of them autonomously end the transaction, there's no guarantee that the result is correct. This is why I always start and end transactions on the calling side.
2. Why roll back if the first statement fails: even though the insert itself fails, a transaction has been started and DML has been executed. This has resulted in resource usage. The only way to free those resources is to roll back the transaction (since there's no point in committing a failure).
So basically I would prefer the second version of the stored proc, but I would also ensure that it's executed inside a transaction. This would make it possible to have larger-scope transactions and also to retry failed operations inside a transaction, which is a very elemental requirement in certain types of system.
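A rough sketch of that calling-side pattern (SQL Server 2005 TRY/CATCH syntax; the procedure names and parameters are invented for illustration):

```sql
-- Hypothetical sketch: the transaction is owned by the caller, not the procs.
-- usp_InsertTitle / usp_InsertAuthor are made-up names standing in for the
-- two stored procedures discussed above.
BEGIN TRY
    BEGIN TRANSACTION;
    EXEC usp_InsertTitle  @TitleId = 1, @Title  = 'Some title';
    EXEC usp_InsertAuthor @TitleId = 1, @Author = 'Some author';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- undo both inserts: no orphaned title
    -- log or rethrow the error here as the application requires
END CATCH;
```

Because neither proc commits on its own, the caller is free to wrap any number of calls in the same transaction, or to retry the whole unit after a rollback.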
Mika
The need to optimize rises from a bad design.
My articles[ ^]
|
|
|
|
|
Re: your point 1
The only time I've written stored procedures was when I was using ad hoc ODBC statements (about ten years ago) and had no other way to do transactions.
Currently, with ADO.net, I can do transactions in my program so I don't use stored procedures at all.
However, a properly designed system of stored procedures should provide everything the application needs, including transactioning.
The application shouldn't need to know about the details of the schema and what the stored procedures are doing, and therefore should be able to just execute the stored procedures without thought to whether or not to use transactions.
|
|
|
|
|
I think there are several factors that affect the choice of how transactions should be used.
In my example I used two separate procedures that are executed in a sequence. If the execution as a whole should have ACID properties, the calls must be wrapped inside a common transaction. Basically this gives two options:
1. create a new stored procedure combining both calls thus leading to single call to backend
2. use transaction with a scope over two different SQL calls
In point 1, the transaction problem is solved (in a simple case), but the backend now also knows (at least partly) the business logic (the order of operations and the sequence). This may or may not be acceptable. The problem becomes more difficult if the procedures are not located in the same backend.
In point 2, we would need a higher-level transaction manager (such as the classes in the System.Transactions namespace, or even a self-made mechanism). This adds extra 'overhead' to the programming, but it also makes it possible to enlist more resources inside the same transaction. Whether this is needed again depends on the requirements of the system.
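Option 1 could be sketched roughly like this (a hypothetical wrapper procedure; all names are invented):

```sql
-- Hypothetical wrapper combining both calls into a single backend round trip.
-- The wrapper owns the transaction, so the inner procs must not commit or
-- roll back on their own.
CREATE PROCEDURE usp_InsertTitleWithAuthor
    @TitleId INT,
    @Title   VARCHAR(100),
    @Author  VARCHAR(100)
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        EXEC usp_InsertTitle  @TitleId = @TitleId, @Title  = @Title;
        EXEC usp_InsertAuthor @TitleId = @TitleId, @Author = @Author;
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- neither row survives a partial failure
    END CATCH
END
```

The trade-off described above is visible here: the call sequence (titles before authors) now lives in the database rather than in the application.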
One good link that came to mind: http://blogs.msdn.com/diegumzone/archive/2006/08/14/699219.aspx[^]
|
|
|
|
|
I suppose two phase commit would be another reason to avoid stored procedures and just write the stuff in code.
|
|
|
|
|
Might be
|
|
|
|
|
Table tblMyTable has a field myXMLField with datatype xml.
Question:
I would like to pull out the values from each record and then group them so that only the distinct values are shown.
How is this done please?
Thanks
This is what I have at present, but it is not correct:
select
stringList.value('(/value)')
from
tblMyTable
The records inside the xml field in the table are similar to:
record1 shows:
<stringList xmlns="http://schemas.myCompany.com/myProject/[app]/stringList/1.0">
<value>value1</value>
</stringList>
record2 shows:
NULL
record3 shows:
<stringList xmlns="http://schemas.myCompany.com/myProject/[app]/stringList/1.0">
<value>value5</value>
<value>value2</value>
</stringList>
record4 shows :
<stringList xmlns="http://schemas.myCompany.com/myProject/[app]/stringList/1.0">
<value>value4</value>
<value>value1</value>
<value>value5</value>
<value>value2</value>
</stringList>
RESULT:
value1
value2
value4
value5
|
|
|
|
|
I think there are some problems with the namespace. If you give it a name then the query method works fine. For example:
DECLARE @x xml
SET @x = '<stringList xmlns:A="http://schemas.myCompany.com/myProject/[app]/stringList/1.0"><value>value1</value></stringList>'
SELECT @x.query('data(/stringList/value)')
|
|
|
|
|
Hi,
That does not solve the issue.
I have modified the SQL as follows. The error says: Syntax error near 'stringList'.
Any thoughts please?
DECLARE @Sample TABLE
(
rowID INT IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
[xml] XML
)
INSERT @Sample
SELECT '<stringList xmlns="http://schemas.myCompany.com/myProject/[app]/stringList/1.0"><value>value1</value></stringList>' UNION ALL
SELECT NULL UNION ALL
SELECT '<stringList xmlns="http://schemas.myCompany.com/myProject/[app]/stringList/1.0"><value>value5</value><value>value2</value></stringList>' UNION ALL
SELECT '<stringList xmlns="http://schemas.myCompany.com/myProject/[app]/stringList/1.0"><value>value4</value><value>value1</value><value>value5</value><value>value2</value></stringList>'
SELECT *
FROM @Sample
SELECT DISTINCT t.c.value('.', 'VARCHAR(20)')
FROM @Sample AS s
CROSS APPLY s.xml.nodes('/stringList xmlns="http://schemas.myCompany.com/myProject/[app]/stringList/1.0"/value') AS t(c)
|
|
|
|
|
That wasn't exactly my point.
Enclose the field named xml in brackets (is that the correct word for [ and ]?) like:
CROSS APPLY s.[xml].nodes...
Then if you run the query you have, you'll get empty results. But if you have an alias (like A) for the namespace, the query runs fine:
...xmlns:A="http:...
Also, you don't have to use the namespace in the XQuery.
Here's the full code:
DECLARE @Sample TABLE
(
rowID INT IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
[xml] XML
)
INSERT @Sample
SELECT '<stringList xmlns:A="http://schemas.myCompany.com/myProject/[app]/stringList/1.0"><value>value1</value></stringList>' UNION ALL
SELECT NULL UNION ALL
SELECT '<stringList xmlns:A="http://schemas.myCompany.com/myProject/[app]/stringList/1.0"><value>value5</value><value>value2</value></stringList>' UNION ALL
SELECT '<stringList xmlns:A="http://schemas.myCompany.com/myProject/[app]/stringList/1.0"><value>value4</value><value>value1</value><value>value5</value><value>value2</value></stringList>'
SELECT *
FROM @Sample
SELECT DISTINCT t.c.value('.', 'VARCHAR(20)')
FROM @Sample AS s
CROSS APPLY s.[xml].nodes('/stringList/value') AS t(c)
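If the documents in the table really do carry the default namespace (so the xmlns attribute in the stored data cannot simply be renamed to an alias), another route is to declare the namespace on the query side with WITH XMLNAMESPACES, available in SQL Server 2005. A sketch, assuming the same @Sample table as above but loaded with the original default-namespace documents:

```sql
-- The DEFAULT declaration makes unprefixed element names in the XQuery
-- path resolve against the document's default namespace.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.myCompany.com/myProject/[app]/stringList/1.0')
SELECT DISTINCT t.c.value('.', 'VARCHAR(20)')
FROM @Sample AS s
CROSS APPLY s.[xml].nodes('/stringList/value') AS t(c);
```

This way the table data stays untouched and only the query carries the namespace knowledge.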
|
|
|
|
|
|
No problem.
|
|
|
|
|
I created a job to transfer data from one table to another table. It's working perfectly, but the problem is that when data is transferring, because of primary key values it gives an error (primary key duplication). How can I insert the data without getting an error?
One possible thing is to delete everything in the second table and then insert everything from the first table into the 2nd, but I don't think that this is a good solution; I'm expecting a better one.
|
|
|
|
|
If you want to update table2 from the matching records in table1 and then insert any new ones:
Update table2
set col1 = a.col1,.....
from table2 b, table1 a
where b.primarykeycol = a.primarykeycol
insert into table2
select col1,col2.... from table1 a
where not exists (select 1 from table2 b where b.primarykeycol = a.primarykeycol)
If you don't want to update table2 ignore the update.
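With concrete (invented) column names filled in, the whole update-then-insert pattern above would look something like this:

```sql
-- Hypothetical schema: table1/table2 each have primary key id
-- and a single data column called name.
UPDATE b
SET    b.name = a.name
FROM   table2 b
JOIN   table1 a ON b.id = a.id;

INSERT INTO table2 (id, name)
SELECT a.id, a.name
FROM   table1 a
WHERE  NOT EXISTS (SELECT 1 FROM table2 b WHERE b.id = a.id);
```

The NOT EXISTS filter is what prevents the primary key duplication error: only rows not already present in table2 are inserted.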
Bob
Ashfield Consultants Ltd
|
|
|
|
|
Thanks, but there is another problem: the two tables are on two servers. Assume table1 is on server A (SQL 2000) and table2 is on server B (SQL 2005), so I want to do the transfer from server A to server B. I tried it with
insert into table_B (eid,ename,eaddr)
select r.eid,r.ename,r.eaddr
from [server A ip]\[sql instance]\[db]\table_A r
but get an error at the server IP.
|
|
|
|
|
You need to investigate linked servers. The alternative is to bcp the data out of table1 and into table2, but I guess you don't want to go that way.
Bob
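With a linked server defined on server B pointing at server A, the query would use four-part names rather than the backslash path above. A sketch (the linked server, database, and schema names are placeholders to adjust):

```sql
-- Run on server B after registering server A as a linked server
-- (e.g. via sp_addlinkedserver). [ServerA] is the linked server name.
INSERT INTO table_B (eid, ename, eaddr)
SELECT r.eid, r.ename, r.eaddr
FROM   [ServerA].[db].[dbo].[table_A] AS r
WHERE  NOT EXISTS (SELECT 1 FROM table_B b WHERE b.eid = r.eid);
```

The NOT EXISTS clause also sidesteps the primary key duplication problem from the earlier post.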
|
|
|
|
|
How did you write the job in the first place? I haven't had much luck writing code to transfer data from server to server. But if you have that ability, then I can help with avoiding primary key duplication.
|
|
|
|
|
I searched the net and the MS site for this script, but the script was removed. Can anyone tell me where I can find it?
|
|
|
|
|
|
Thanks, I saw it but I didn't try it because it was designed for SQL 2000. At the moment I'm in the middle of my testing project; I don't know if this is the same one which came with SQL 2005.
|
|
|
|
|
As far as I am aware (however I could be wrong), SQL Server 2005 did not come bundled with the Northwind Database, nor has the actual Database changed since SQL Server 2000. I tend to just run the script on my 2005 instance, and it works fine with all sample code.
If you have any troubles using it, feel free to ask.
|
|
|
|
|
|
I am not too good with T-SQL...
I have a Temp Table with the following data:
Date  Line  WGT   ....  Form
XXX   1     126   ....  ABC
XXX   1     124   ....  ABC
XXX   1     126   ....  EFG
XXX   2     126   ....  ABC
XXX   2     123   ....  EFG
XXX   8     124   ....  EFG
XXX   8     126   ....  EFG
XXX   8     127   ....  HIJ
XXX   8     126   ....  HIJ
...
I need to query this table as follows:
Select the Rows for Each 'Line' that have the lowest 'WGT' where the FORM value is distinct for that 'LINE'.
Does That make sense?
Output should be like:
DATE  LINE  WGT   ....  FORM
XXX   1     124   ....  ABC
XXX   1     126   ....  EFG
XXX   2     126   ....  ABC
XXX   2     123   ....  EFG
XXX   8     124   ....  EFG
XXX   8     126   ....  HIJ
...
I have tried every variation of DISTINCT, MIN and GROUP BY that I can think of. I am thinking of resorting to UDFs, but I know deep down in my inner geek that this is simple. Any tips before I bore or appall you with my pathetic T-SQL?
CODEZ nOT URGNT
I don't speak Idiot - please talk slowly and clearly
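For what it's worth, the grouping described above can be sketched with a plain GROUP BY over (Line, Form), taking MIN of WGT (assuming the temp table is named #tmp and has exactly the columns shown in the sample; extra columns hidden by the "...." would need their own aggregates or a join back):

```sql
-- One row per (Line, Form) pair, keeping the lowest WGT in each group.
SELECT [Date], Line, MIN(WGT) AS WGT, Form
FROM   #tmp
GROUP BY [Date], Line, Form
ORDER BY Line, Form;
```

Against the sample data this yields 124/126 for line 1 (ABC/EFG), 126/123 for line 2, and 124/126 for line 8 (EFG/HIJ), matching the desired output.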
|
|
|
|