Java
lock.lock();                 // acquire the lock
if (nReaders > 0) {
    readers.await();         // wait on the condition
}
nReaders++;

...

nReaders--;
readers.signal();            // signal the condition
lock.unlock();               // release the lock


I tried using it so that my global variable stays consistent throughout the execution of my program, but it doesn't work for some reason. What might be the problem?

1 solution

Oh yes, it does ensure that, but perhaps too well: it turns out to create the defect opposite to non-serialized access: the possibility of a deadlock. :-)

This code cannot be analyzed alone, without the rest of the code, namely the parts in other thread(s) that await on the same condition and lock on the same lock object (which must exist somewhere, otherwise the lock would be totally pointless). But as it stands, it is a well-prepared trap for a deadlock. Imagine that you have a thread that is supposed to signal the condition readers. If that never happens, the thread running the fragment shown here will never be woken up. Now, imagine that signalling the condition happens in another locked fragment of code, locked with the same lock object. If that thread's lock call happens while the thread executing the fragment you show is already waiting, you get two threads waiting for each other forever.
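To make the hazard concrete, here is a hypothetical two-lock sketch (my own illustration, not the code from the question) of "a wait inside a mutually excluded area": one thread awaits a condition while still holding an outer lock that the signalling thread must acquire first. Condition.await() releases only the lock that owns the condition, so the outer lock stays held and the signal is never reached.

Java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class WaitInsideLockDemo {
    private static final Lock outer = new ReentrantLock();
    private static final Lock inner = new ReentrantLock();
    private static final Condition ready = inner.newCondition();

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            outer.lock();                  // A takes the outer lock...
            inner.lock();
            try {
                ready.await();             // ...and waits; only 'inner' is released
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                inner.unlock();
                outer.unlock();
            }
        });

        Thread b = new Thread(() -> {
            outer.lock();                  // B blocks here: 'outer' is still held by A
            try {
                inner.lock();
                try {
                    ready.signal();        // never reached
                } finally {
                    inner.unlock();
                }
            } finally {
                outer.unlock();
            }
        });

        a.start();
        Thread.sleep(100);                 // let A reach await() first (timing-dependent)
        b.start();
        a.join();                          // never returns: the two threads are stuck
        b.join();
    }
}

Run as written, the program typically hangs on a.join(); the sleep only makes the losing interleaving likely, not certain.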

Worse, in certain environments, the play of probabilities may lead to a situation where the locked fragments of code I described are executed by both threads in some sequential order, just by sheer coincidence, for, say, a whole year of runtime, and then, the next year, the program may finally run into the deadlock. This is not a joke: I can easily design a demonstration of this effect where the probability of deadlock can be made as small as any preset value, and yet never exactly zero (one way to create such a situation is the chained deadlock known as the "dining philosophers problem", and the probability can be tuned using some delays; see the sketch below). Again, it all depends on what else is written in your code. The fragment of code shown is potentially dangerous. Not only is it suspicious, it looks like part of a very familiar deadlock pattern I have actually seen in some software products I had to inherit and fix or replace: a wait inside a mutually exclusive area.
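The "tuned probability" can be illustrated with a compact dining-philosophers sketch (my own example, not code from the question): every philosopher locks the left fork, pauses for HOLD_MS, then locks the right one. With HOLD_MS at 0 the program may run "correctly" for a very long time; raising HOLD_MS widens the window in which all five threads hold their left forks, making the circular wait, and hence the deadlock, more and more likely without ever making it certain.

Java
import java.util.concurrent.locks.ReentrantLock;

public class DiningPhilosophers {
    static final int N = 5;
    static final long HOLD_MS = 10;        // the knob that tunes the deadlock probability
    static final ReentrantLock[] forks = new ReentrantLock[N];

    public static void main(String[] args) {
        for (int i = 0; i < N; i++) forks[i] = new ReentrantLock();
        for (int i = 0; i < N; i++) {
            final int left = i, right = (i + 1) % N;
            new Thread(() -> {
                while (true) {
                    forks[left].lock();
                    try {
                        pause(HOLD_MS);    // window in which every philosopher may hold a left fork
                        forks[right].lock();
                        try {
                            // eat
                        } finally {
                            forks[right].unlock();
                        }
                    } finally {
                        forks[left].unlock();
                    }
                }
            }, "philosopher-" + i).start();
        }
    }

    static void pause(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}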

You should understand one thing: both await and lock mean a conditional transition of a thread into a special "wait state", in which the thread is switched off and not scheduled back for execution until it is woken up by some event, such as the release of the lock by another thread, the signalling of a condition, a timeout, or an abort. As soon as you try to "protect" access to one synchronization primitive with another synchronization primitive, you are creating trouble for yourself.
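For context, this is the standard java.util.concurrent.locks idiom around await/signal (a generic sketch loosely mirroring the fragment above, with illustrative names, not a proposed fix): await() atomically releases the lock and parks the thread, and the thread runs again only after a signal, a timeout, or an interruption, re-acquiring the lock before await() returns.

Java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class GuardedCounter {
    private final Lock lock = new ReentrantLock();
    private final Condition noReaders = lock.newCondition();
    private int nReaders;

    void enter() throws InterruptedException {
        lock.lock();
        try {
            while (nReaders > 0) {     // re-check the condition after every wake-up
                noReaders.await();     // releases 'lock' while parked
            }
            nReaders++;                // safe: the lock is held again here
        } finally {
            lock.unlock();             // released on every path, including exceptions
        }
    }

    void leave() {
        lock.lock();
        try {
            nReaders--;
            noReaders.signalAll();     // wake the threads parked in await()
        } finally {
            lock.unlock();
        }
    }
}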

Please don't ask me how to "fix" it. There is nothing to fix, because this code makes no sense on its own. I simply don't know your goals, so the whole thing is not yet a valid question. You need to design your code very thoroughly and prove that it does not run into a deadlock or, say, a race condition, in the same way mathematical theorems are proven: by precise logical reasoning. The proof should not be based on considering all possible variants (which is, however, possible for very simple problems); you should reach the conclusion through logical reasoning. My example of a year of runtime without an actual deadlock should explain why you cannot rely on testing alone. One analysis method is based on the Petri net formalism.

At the same time, there are many simple problems, or rather classes of problems (they can be very complex overall, but simple in this respect), where the analysis can easily be done just by looking at the code. One simple example: only locks are used, and the locks are strictly nested (in particular, it is important to release all locks on all exceptions). Such models can be good or bad in terms of performance (many people heavily over-synchronize their applications without any useful effect, but this is a separate topic), but they never cause deadlocks; a minimal sketch of this discipline follows below. I know developers who use only common sense and simple threading models and never have problems with them.
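A minimal sketch of that "locks only, strictly nested" discipline (illustrative names, my own example): both locks are always taken in the same order and released in finally blocks, so no lock-order cycle can form and no exception can leave a lock held.

Java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class NestedLocks {
    private final Lock first = new ReentrantLock();
    private final Lock second = new ReentrantLock();
    private int shared;

    public void update(int delta) {
        first.lock();                      // always first, then second: no cycle, no deadlock
        try {
            second.lock();
            try {
                shared += delta;           // shared state is touched only while both locks are held
            } finally {
                second.unlock();           // released even if the update throws
            }
        } finally {
            first.unlock();
        }
    }
}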

Please see:
http://en.wikipedia.org/wiki/Deadlock,
http://en.wikipedia.org/wiki/Race_condition,
http://en.wikipedia.org/wiki/Thread_synchronization,
http://en.wikipedia.org/wiki/Dining_philosophers_problem,
http://en.wikipedia.org/wiki/Petri_net.

—SA
 
Comments
Member 10338805 17-Oct-13 21:39pm    
It's strange, though: I get data inconsistencies despite that. I am even using a synchronized getter and setter for my global variable, and it still ends up being inconsistent.
Sergey Alexandrovich Kryukov 17-Oct-13 23:32pm    
"I even synchronize..." does not sound good. It looks like you feel that synchronization is like a resource, food or fuel: the more the better. This is not so, by far. The whole way of thinking is wrong. First of all, best synchronization is no synchronization. It should really be minimized. One principle moment is this: you should minimize shared objects and try to do nearly all calculation on stack, as each thread, of course, works in its own stack. Synchronization is really unavoidable, but you can minimize it. It looks like you firmly stay on the grounds of trial and error. It may works in other areas, but not in threading.

In simple cases, you should simply understand some patterns and use them. One pattern (locks only, strictly nested), applied to a few shared objects, I have already described. Another one is producer-consumer, typically built on the mechanism of a blocking queue...
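Here is a minimal producer-consumer sketch on top of a blocking queue (my own illustration, using java.util.concurrent.ArrayBlockingQueue): the two threads never touch shared mutable state directly and take no explicit locks; put() blocks when the queue is full and take() blocks when it is empty.

Java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put(i);              // blocks if the queue is full
                }
                queue.put(-1);                 // sentinel: "no more items"
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int item = queue.take();   // blocks until an item is available
                    if (item == -1) break;     // sentinel reached: stop
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}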

—SA



