Oh yes, it does ensure that, but perhaps too well: it turns out to create a defect opposite to non-serialized access: the possibility of deadlock. :-)
This code cannot be analyzed alone, without the rest of the code: the parts in other thread(s) waiting on the same object and locking on the same lock object (which must always exist, otherwise the lock would be totally pointless). But as it stands,
it is a well-prepared trap for a deadlock. Imagine that you have a thread that is supposed to signal the condition "readers". If this never happens, the thread running the fragment shown in this code is never woken up. Now imagine that signalling the condition happens in another locked fragment of code, locked with the same "lock" object. If that thread's call to "lock" happens when the thread executing the fragment you show is already waiting, you get two threads waiting for each other forever.
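To illustrate one such failure (a minimal sketch, assuming Java's ReentrantLock and Condition, which the "lock"/"await"/"readers" names suggest; the names here are my own, not your actual code): if the signalling thread runs before the waiter reaches its wait, the signal is simply lost, and a plain await() would then sleep forever. A timed await is used below only so the demonstration terminates:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class MissedSignalDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        // Hypothetical condition named after the one mentioned in the question
        Condition readers = lock.newCondition();

        // The "signalling" code runs first. The signal is lost, because a
        // Condition does not remember signals sent while nobody is waiting.
        lock.lock();
        try {
            readers.signal();
        } finally {
            lock.unlock();
        }

        // The "waiting" code arrives too late. With a plain await() it would
        // block forever; the timed await lets this demo finish.
        lock.lock();
        try {
            boolean signalled = readers.await(300, TimeUnit.MILLISECONDS);
            System.out.println("woken by signal: " + signalled);
        } finally {
            lock.unlock();
        }
    }
}
```

Note that Condition.await releases the associated lock while waiting, so the danger is not the wait-under-lock itself but the overall protocol: whether the signal is guaranteed to come, and under which locks.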
Worse, in some environments, the play of probabilities may lead to a situation where the locked fragments of code I described are executed by both threads in some sequential order, by sheer coincidence, for, say, a whole year of runtime, and then, the next year, eventually run into the deadlock. This is not a joke: I can easily design a demonstration of this effect where the probability of deadlock can be made as small as any preset value, yet never zero (one way to create such a situation is the chained deadlock known as the "dining philosophers problem", and the probability can be tuned using some delays). Again, it all depends on what else is written in your code. The fragment of code shown is potentially dangerous. Not only is it suspicious, it looks like part of a very familiar deadlock pattern I have actually seen in software products I had to inherit and fix or replace: a wait inside a mutually exclusive area.
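The standard cure for the chained deadlock in the dining philosophers problem is to impose one global acquisition order on all locks. A minimal sketch, assuming Java (the names are mine, not from any particular product); with two threads and two locks, taking the "forks" in the same order on every thread makes the deadlock impossible:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderDemo {
    static final ReentrantLock fork1 = new ReentrantLock();
    static final ReentrantLock fork2 = new ReentrantLock();

    // Both "philosophers" take the forks in the same global order: fork1,
    // then fork2. If one thread took them in the opposite order, each thread
    // could end up holding one fork and waiting forever for the other.
    static void eat(String name) {
        fork1.lock();
        try {
            fork2.lock();
            try {
                System.out.println(name + " is eating");
            } finally {
                fork2.unlock();
            }
        } finally {
            fork1.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> eat("philosopher A"));
        Thread b = new Thread(() -> eat("philosopher B"));
        a.start();
        b.start();
        a.join();
        b.join();
        System.out.println("no deadlock: both finished");
    }
}
```

The point of the exercise: the absence of deadlock here is provable from the acquisition order alone, not from any amount of testing.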
You should understand one thing: both "await" and "lock" mean a conditional transition of a thread into a special "wait state", in which the thread is switched off and not scheduled for execution until it is woken up by some event, such as the release of the lock by another thread, the signalling of a condition, a timeout, or an abort. As soon as you try to "protect" access to one synchronization primitive with another synchronization primitive, you are creating trouble for yourself.
Please don't ask me how to "fix" it. There is nothing to fix; this code simply makes no sense on its own. I just don't know your goals, so the whole question is not well posed. You need to design your code very thoroughly and prove that it cannot run into a deadlock or, say, a race condition, the same way mathematical theorems are proven: by precise logical reasoning. The proof should not be based on considering all possible interleavings (which is, however, feasible for very simple problems); you should reach the conclusion through logical argument. My example of a year of runtime without an actual deadlock should explain why you cannot rely on testing alone. One method of analysis is based on the Petri net formalism.
At the same time, there are many simple problems, no, even whole classes of problems (they can be very complex overall but simple in this respect) where the analysis can easily be done just by looking at the code. One simple example: only locks are used, and the locks are strictly nested (in particular, it is also important to release all locks on all exceptions). Such models can be good or bad in terms of performance (many people heavily over-synchronize their applications without any useful effect, but that is a separate topic), but they never cause deadlocks. I know developers who use only common sense and simple threading models and never have problems with them.
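A minimal sketch of that strictly-nested discipline, assuming Java (the method and lock names are hypothetical): every lock is released in a finally block, in the reverse order of acquisition, so both locks are freed on every path, including exceptions:

```java
import java.util.concurrent.locks.ReentrantLock;

public class NestedLocksDemo {
    static final ReentrantLock outer = new ReentrantLock();
    static final ReentrantLock inner = new ReentrantLock();

    // Locks are strictly nested: outer, then inner, released in reverse
    // order in finally blocks on every path, including exceptional ones.
    static void work(boolean fail) {
        outer.lock();
        try {
            inner.lock();
            try {
                if (fail) {
                    throw new IllegalStateException("simulated failure");
                }
                System.out.println("work done");
            } finally {
                inner.unlock();
            }
        } finally {
            outer.unlock();
        }
    }

    public static void main(String[] args) {
        try {
            work(true);
        } catch (IllegalStateException e) {
            // Even though the call threw, both locks were released.
            System.out.println("after exception, locks still held: "
                    + (outer.isLocked() || inner.isLocked()));
        }
        work(false);
    }
}
```

Because acquisition is strictly nested everywhere, a lock-order cycle cannot form, which is exactly why this model is deadlock-free regardless of its performance characteristics.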
Please see:
http://en.wikipedia.org/wiki/Deadlock
http://en.wikipedia.org/wiki/Race_condition
http://en.wikipedia.org/wiki/Thread_synchronization
http://en.wikipedia.org/wiki/Dining_philosophers_problem
http://en.wikipedia.org/wiki/Petri_net
—SA