Hi all,
Recently my team worked on an ASP.NET C# project. The task was to add features to an existing project that was built by another company.

It is a shopping cart. When a user places an order, in very rare cases we run into a deadlock, and the database connections established through Enterprise Library are completely lost.

Error: Transaction (Process ID 89) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

We searched for the various causes of deadlocks and finally found the exact reason for ours.

Reason :

DBWrapper wrapper = new DBWrapper(true,System.Data.IsolationLevel.RepeatableRead);

Using IsolationLevel.RepeatableRead is causing the deadlock issue.

I have fairly limited knowledge of locking concepts in SQL Server.

Expected solutions:

1) Change the isolation level to one that behaves similarly to IsolationLevel.RepeatableRead but prevents deadlocks.


2) Or keep the same isolation level in the code and rewrite the SQL query with a locking hint so that it runs successfully.


This is a high-priority issue for us, so any thoughts will be highly appreciated.

Please suggest how to achieve either of the expected solutions.
Posted
Updated 28-Apr-15 23:34pm
v3
Comments
Herman<T>.Instance 29-Apr-15 6:14am    
What is the used query/view/stored procedure?
Did you run a profiler during the sessions?

1 solution

This answer got a bit long, but it should address your question:

IsolationLevel.RepeatableRead is very restrictive. Every row you read stays locked with a shared lock until the transaction completes, so even your SELECT statements hold locks. The classic deadlock it causes: (A) reads a row, (B) reads the same row (both now hold shared locks), (A) tries to update it and waits for (B)'s lock, (B) tries to update it and waits for (A)'s. Neither can proceed, so SQL Server picks the transaction that is cheapest to roll back as the deadlock victim.
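One common fix for this shared-to-exclusive conversion deadlock, in the spirit of your option 2, is to take an update lock up front on any row you read with the intention of modifying it. The sketch below assumes SQL Server; the table `dbo.OrderItems` and parameter `@OrderId` are hypothetical placeholders for your own schema:

```sql
BEGIN TRANSACTION;

-- UPDLOCK makes the initial read take an update lock instead of a
-- shared lock, so two transactions cannot both read the row and then
-- deadlock when each tries to convert its lock for the update.
SELECT Quantity
FROM   dbo.OrderItems WITH (UPDLOCK, ROWLOCK)
WHERE  OrderId = @OrderId;

-- ... business logic ...

UPDATE dbo.OrderItems
SET    Quantity = Quantity - 1
WHERE  OrderId = @OrderId;

COMMIT;
```

Only one transaction at a time can hold an update lock on the row, so the second transaction simply waits instead of deadlocking.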

I prefer IsolationLevel.Snapshot when I have larger transactions. It is risky, but it works in my case:
(A) reads, (B) reads, (A) writes, (B) is unaware that the data has changed in the meantime, so (B) writes over it (a lost update).

By design I almost never have two transactions writing to the same rows. The places where they can are not critical, and I'm happy to let one overwrite the other. I often read from rows that are being rapidly updated, but because I poll my data frequently I don't mind if it is out of date by a second or two.
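Note that snapshot isolation must be enabled at the database level before ADO.NET will accept IsolationLevel.Snapshot. A minimal sketch, assuming SQL Server, where `YourShopDb` is a placeholder for your actual database name:

```sql
-- Enable row versioning so IsolationLevel.Snapshot can be requested.
ALTER DATABASE YourShopDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optional: make plain READ COMMITTED use row versioning too, so
-- ordinary readers never block behind writers.
ALTER DATABASE YourShopDb SET READ_COMMITTED_SNAPSHOT ON;
```

After that, you would request it from your wrapper the same way you do today, e.g. `new DBWrapper(true, System.Data.IsolationLevel.Snapshot)`.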

A few suggestions:
1) Research the isolation levels.
2) Consider using a specific isolation level for each specific transaction.
3) It may be that you genuinely require this isolation level, in which case add a retry process.
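If you do keep RepeatableRead, the error message's own advice ("Rerun the transaction") can be automated. A minimal sketch of a retry wrapper that catches SQL Server's deadlock-victim error (number 1205); `PlaceOrder` is a hypothetical stand-in for your own order-placement transaction:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

static class DeadlockRetry
{
    public static void Execute(Action work, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                work();     // run the whole transaction
                return;     // success: stop retrying
            }
            catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
            {
                // We were chosen as the deadlock victim: back off briefly,
                // then rerun the transaction from the start.
                Thread.Sleep(100 * attempt);
            }
        }
    }
}
```

Usage would look like `DeadlockRetry.Execute(() => PlaceOrder(order));`. The important design point is that the whole transaction, not just the failing statement, is rerun, since the victim's work was rolled back.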
 

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
