Synchronization, Thread Safety, and Performance

A walkthrough of the issues and solutions involved in thread-safe synchronization.

Introduction 

This article outlines the fool's path to wisdom about thread safety and synchronization. Although I primarily code in C#, this article may be useful across languages and platforms. 

Background     

I've been spending a lot of time attempting to create classes that alleviate my need to worry about thread safety. This may seem like a fool's errand, but I feel I've made great strides, not only in my code but in my understanding.

I started by authoring a class to help me through these woes: Thread-Safe Synchronization Made Easier (which is partially obsolete now that this article is published). Thanks to many knowledgeable participants in the comments, I was able to discover many issues, resolve most of them, and understand the remaining ones. 

If you are simply looking for a quick and dirty synchronization solution for .NET, then the System.Collections.Concurrent namespace has what you need most of the time. But there are plenty of cases where you are handed a collection that is not inherently thread safe and you need to work with that given object rather than an already thread-safe version. Some classes offer synchronized versions that can help eliminate some worry; for example, the .NET Hashtable has a Synchronized method that returns a thread-safe wrapper. But in the end, it may not synchronize the way your code needs, and it only prevents exceptions from being thrown as individual reads and writes occur.
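
To make that distinction concrete, here is a minimal sketch of mine (not from the article) contrasting the two options; it assumes a string-keyed dictionary and requires the System.Collections and System.Collections.Concurrent namespaces. Note that the synchronized Hashtable wrapper only protects individual calls, not check-then-add sequences:

C#
// ConcurrentDictionary handles its own synchronization per operation:
var concurrent = new ConcurrentDictionary<string, object>();
concurrent.TryAdd("key", new object());
object found;
concurrent.TryGetValue("key", out found);

// Hashtable.Synchronized wraps each individual call in a lock; a sequence
// like "check, then add" is still not atomic across calls:
Hashtable table = Hashtable.Synchronized(new Hashtable());
if (!table.ContainsKey("key"))   // another thread can add between these lines
    table["key"] = new object();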

Basic Synchronization 

We would all like for code to work like magic, but sometimes unforeseen problems show up. Concurrency is one of those headaches. If you've never coded for thread safety before, be wary that things can happen "between the lines." Common operations are simply not thread safe under concurrent conditions.

C#
// Synchronous example:
dictionary.Add(key, newValue);
// ... Sometime later ...
object value = dictionary[key];

The above sample, when isolated to a local function with a local dictionary, should work without any issues. But when executed repeatedly under concurrent conditions, errors can occur quite often.

C#
// Thread A:
dictionary.Add(key, newValue);

// Thread B:
object value = dictionary[key];

Hopefully most of us can see that the above sample is clearly not thread safe: the add operation will fail and throw an exception if the key already exists, and the get-by-key operation will fail if that key doesn't exist. Let's try to fix this, step by step...

C#
// Thread A:
if(!dictionary.ContainsKey(key))
  dictionary.Add(key, newValue);

// Thread B:
object value = dictionary.ContainsKey(key) ?
  dictionary[key] : null; 

Our first attempt at correcting the problem is a reaction to the exceptions. By using the ContainsKey method, we hope to prevent the exceptions from happening and allow the subsequent code to handle both conditions.

But right away, there's a problem. Thread A can fail often, simply because "between the lines" another thread has already added the value by the time the local add operation happens. Thread B, on the other hand, shouldn't throw, but the following code may receive a null value even when a value actually exists.

So you say to yourself, "looks like I need to synchronize these operations."  Okay let's do that...

C#
// Thread A:
lock(dictionary)
  if(!dictionary.ContainsKey(key))
    dictionary.Add(key, newValue);

// Thread B:
object value;
lock(dictionary)
  value = dictionary.ContainsKey(key) ?
    dictionary[key] : null;  

Now the above sample is thread safe and acceptably performant, as long as these are the only operations occurring on the dictionary. But it seems heavy-handed to lock the dictionary while reading. Aren't read operations inherently thread safe?  The answer is no. Sadly, almost always no; it is simply that most of the time you can get away with it. Assuming that we've resolved any problems with adding new values, how can we make the above sample better?

C#
// Thread B:
object value = null;
dictionary.TryGetValue(key, out value);  

For Thread B, this looks better, right?  Getting rid of the ContainsKey call and simply getting the value through the dictionary's TryGetValue method should eliminate any concerns, right?  Well, yes and no. Yes, the lock while reading the value is gone, which will increase performance, but now it is possible for the add operation to occur in the midst of attempting to get the value.

The problem here is that we cannot be certain how this TryGetValue will behave when we are in the middle of a write operation.

If you got this far, you basically got lucky with this oversimplified scenario. In the case of a .NET Dictionary<TKey,TValue>, the documentation states that multiple reads are safe as long as there are NO writes. If a write occurs in the middle of multiple reads, it's possible that data corruption can occur. 

Sadly, for us to be sure, we have to do this...

C#
// Thread B:
object value = null;
lock(dictionary)
  dictionary.TryGetValue(key, out value); 

So far, it seems that the only real thread safe synchronization is locking on write and on read.
There's gotta be a better way right?  Let's dig deeper...  

Timeout! 

If you are dealing with isolated collections, then the lock keyword is enough for most cases. But as you progress into complexity, locking with a timeout is a safer, more revealing way to go. In my case, I almost always use a timeout value. Locks shouldn't last very long, and if you have a long-running operation it is strongly advisable to use timeouts in order to reveal possible deadlocks or poorly performing code.

C#
/// <summary>
/// Applies a lock on the syncObject before executing the provided Action with a timeout.
/// Throws a TimeoutException if throwsOnTimeout is true (default) and a lock could not be acquired.
/// </summary>
/// <param name="lockObject">Object used for synchronization.</param>
/// <param name="closure">The action to execute once a lock is acquired.</param>
/// <param name="millisecondsTimeout">Maximum time allowed to wait for a lock.</param>
/// <param name="throwsOnTimeout">If true and a timeout is reached, then a TimeoutException is thrown.
/// If false and a timeout is reached, then this method returns false
/// and allows the caller to handle the failed lock.</param>
/// <returns>
/// True if a lock was acquired and the Action executed.
/// False if throwsOnTimeout is false and could not acquire a lock.
/// </returns>
public static bool Lock(object lockObject,
  Action closure, int millisecondsTimeout, bool throwsOnTimeout = true)
{
  Contract.Requires<ArgumentNullException>(lockObject != null);
  Contract.Requires<ArgumentNullException>(closure != null);
  Contract.Requires<ArgumentOutOfRangeException>(millisecondsTimeout >= 0);

  bool lockTaken = false;
  try
  {
    Monitor.TryEnter(lockObject, millisecondsTimeout, ref lockTaken);
    if (!lockTaken)
    {
      if (throwsOnTimeout)
        throw new TimeoutException("Could not gain a lock within the timeout specified.");

      return false;
    }

    closure();
  }
  finally
  {
    if (lockTaken) Monitor.Exit(lockObject);
  }

  return true;
}

The above static function can be used to simplify the process of locking with a timeout. You can easily replace this...

C#
lock(dictionary)
  if(!dictionary.ContainsKey(key))
    dictionary.Add(key, newValue); 

With this...

C#
Lock(dictionary,()=>{
  if(!dictionary.ContainsKey(key))
    dictionary.Add(key, newValue);
}, 1000); // If a lock could not be acquired, a TimeoutException is thrown.

Or this...

C#
if(
  !Lock(
    dictionary,
    () => { if(!dictionary.ContainsKey(key)) dictionary.Add(key, newValue); },
    1000, false)
) // If a lock could not be acquired...
  doSomethingElse();

Delegate functions and helpers like this can be very helpful in managing our synchronization code. This helper properly and safely uses the Monitor.TryEnter method and ensures a proper exit. It is very similar to what the lock keyword does behind the scenes, but with a timeout. So regardless of what the delegated Action does, we avoid leaving locks open and provide a means of handling the case where a lock could not be acquired.

But doesn't simply locking still have problems with performance if we have to lock on read and write?

Read/Write Locks 

You can potentially gain some performance benefit by using a Read/Write lock when you have a set of operations that could cause unsafe synchronization. Read/Write locks typically allow for multiple reads and one write. So basically, while there is no write operation active, multiple reads can come and go without being blocked. Once a write operation is requested, the write operation is blocked until the previous reads are finished, and subsequent reads are blocked until the write operation is finished.

Let's start by simplifying and fool-proofing our code with some extensions for ReaderWriterLockSlim:

C#
/// <summary>
/// ReaderWriterLockSlim extension for synchronizing read access.
/// </summary>
/// <param name="target">ReaderWriterLockSlim to execute on.</param>
/// <param name="closure">Action to execute once a lock is acquired.</param>
/// <param name="millisecondsTimeout">Indicates if and for how long a timeout is used to acquire a lock.</param>
/// <param name="throwsOnTimeout">If this parameter is true,
/// then if a timeout value is reached, an exception is thrown.</param>
/// <returns>Returns false if a timeout is reached.</returns>
public static bool Read(this ReaderWriterLockSlim target,
  Action closure, int? millisecondsTimeout = null, bool throwsOnTimeout = false)
{
  Contract.Requires<ArgumentNullException>(closure != null);
  Contract.Requires<ArgumentOutOfRangeException>(
   millisecondsTimeout==null || millisecondsTimeout >= 0
  );

  bool lockHeld = false;
  try
  {
    if(millisecondsTimeout==null)
      target.EnterReadLock();
    else if (!target.TryEnterReadLock(millisecondsTimeout.Value))
    {
      if (throwsOnTimeout)
        throw new TimeoutException(
          "Could not gain a read lock within the timeout specified. "+
          "(millisecondsTimeout=" + millisecondsTimeout.Value + ") ");

      return false;
    }
    lockHeld = true;
    closure();
  }
  finally
  {
    if (lockHeld)
      target.ExitReadLock();
  }

  return lockHeld;
}
C#
/// <summary>
/// ReaderWriterLockSlim extension for synchronizing write access.
/// </summary>
/// <param name="target">ReaderWriterLockSlim to execute on.</param>
/// <param name="closure">Action to execute once a lock is acquired.</param>
/// <param name="millisecondsTimeout">Indicates if and for how long a timeout is used to acquire a lock.</param>
/// <param name="throwsOnTimeout">If this parameter is true,
/// then if a timeout value is reached, an exception is thrown.</param>
/// <returns>Returns false if a timeout is reached.</returns>
public static bool Write(this ReaderWriterLockSlim target,
  Action closure, int? millisecondsTimeout = null, bool throwsOnTimeout = false)
{
  Contract.Requires<ArgumentNullException>(closure != null);
  Contract.Requires<ArgumentOutOfRangeException>(
   millisecondsTimeout==null || millisecondsTimeout >= 0
  );

  bool lockHeld = false;
  try
  {
    if (millisecondsTimeout == null)
      target.EnterWriteLock();
    else if (!target.TryEnterWriteLock(millisecondsTimeout.Value))
    {
      if (throwsOnTimeout)
        throw new TimeoutException(
          "Could not gain a read lock within the timeout specified. "+
          "(millisecondsTimeout=" + millisecondsTimeout.Value + ") ");
      return false;
    }
    lockHeld = true;
    closure();
  }
  finally
  {
    if (lockHeld)
      target.ExitWriteLock();
  }

  return lockHeld;
}

What the above extensions do is provide a simple mechanism for executing read and write operations while holding a respective lock. They also allow for timeouts like we demonstrated in the previous section.

Now with our new extensions in place, let's try and improve our previous example...

C#
// Some permanent instance in respect to the dictionary.
ReaderWriterLockSlim sync = new ReaderWriterLockSlim();

// Thread A:
sync.Write(() => {
  if(!dictionary.ContainsKey(key))
    dictionary.Add(key, newValue);
});

// Thread B:
object value = null;
sync.Read(() =>
  value = dictionary.ContainsKey(key) ?
    dictionary[key] : null
);

This improved example is truly synchronizing. It does not break the rule of allowing multiple reads only while there are no writes. So if accurate synchronization is important, then this is the way to go. As is, it will not be as performant as simply using the lock keyword while attempting to add. But if you are using a different type of collection/dictionary, or have more complicated synchronization actions, this may work better, especially if there is a significantly larger ratio of reads to writes.

I've provided a full list of useful extensions here: ReaderWriterLockSlim Extensions

Note: Keep in mind that with a ReaderWriterLockSlim, you can specify whether to allow recursion, but you cannot simply enter a write lock while already owning a read lock, or vice versa. You can, however, upgrade or downgrade your locking, but there can be only one upgradeable lock at a time. See the ReaderWriterLockSlim documentation for more information. 
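
As a rough illustration of that note, here is a minimal sketch (not from the article) of an upgradeable read lock; the cache dictionary and key are hypothetical stand-ins, and the snippet requires System.Collections.Generic and System.Threading:

C#
// Plain readers can run alongside the upgradeable lock, but only one thread
// may hold the upgradeable lock at a time.
var cache = new Dictionary<string, object>(); // hypothetical shared dictionary
var rwLock = new ReaderWriterLockSlim();
string key = "example";

rwLock.EnterUpgradeableReadLock();
try
{
  if (!cache.ContainsKey(key))   // read while holding the upgradeable lock
  {
    rwLock.EnterWriteLock();     // upgrade: waits for active readers to exit
    try
    {
      cache.Add(key, new object());
    }
    finally
    {
      rwLock.ExitWriteLock();    // drops back to the upgradeable read
    }
  }
}
finally
{
  rwLock.ExitUpgradeableReadLock();
}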

Double-Checked Locking (DCL) Pattern = Bad Juju? 

In an attempt to improve performance for common synchronization operations, we can apply the concept of first checking for the need to lock before actually doing so.

First let's look at synchronizing a single property:

C#
private readonly object _countLock = new object();
private int _count;
public int Count {
  get {
    return _count; 
  }
}

public int IncrementCount() {
  lock(_countLock)
    return _count++;
}

In the above sample, we can query our internal property and we change it through a single public interface. There is no need to synchronize the read portion because, upon exiting the read-only accessor, the value may have already changed. If there were more of a need to synchronize this property, then we would need to expose a SyncRoot that external code could synchronize with.

There is nothing inherently wrong with this code as-is. The real problem here is the interface itself. If accuracy is important, then this simple code may not be everything we need. We may expect that when we ask for the count, we get the value as it is at that moment. That's just not possible without already holding a lock on it.
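
If that kind of point-in-time accuracy matters, one hedged option is to expose the lock object so callers can hold it across their own check-then-act sequence. The Counter class and limit below are purely illustrative:

C#
// Hypothetical sketch: exposing the lock object so a caller can hold it
// across a read-then-act sequence when point-in-time accuracy matters.
public class Counter
{
  private readonly object _countLock = new object();
  private int _count;

  public object SyncRoot { get { return _countLock; } }

  public int Count { get { return _count; } }

  public int IncrementCount()
  {
    lock (_countLock)
      return ++_count; // returns the newly incremented count
  }
}

// Caller that needs an accurate check-then-increment:
// lock (counter.SyncRoot)
// {
//   if (counter.Count < limit)
//     counter.IncrementCount();
// }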

Next, let's apply the DCL to a single property:

C#
private bool _hasValue;
public bool HasValueAlready {
  get {
    return _hasValue;
  }
}

private readonly object _lock = new object();
private T _value;
public T Value {
  get {
    if(!_hasValue) {
      lock(_lock) {
         if(!_hasValue) {
           // The important part is NOT TO CHANGE THE DCL CONDITION UNTIL THE END.
           // placing _hasValue = true here would break the pattern
           // and return inconsistent (null) results.

           _value = /* some code that generates the value */;

           // Place the condition update after the real work is done.
           _hasValue = true;
         }
      }
    }
    return _value; 
  }
}

The above sample assumes that once the value is established, it doesn't revert.
This is a safe example of the DCL.  The Boolean value is the 'gate' by which the value is synchronized and all the synchronization occurs within the Value accessor.  If you were to use _value==null or something similar as your check, you would still need to be sure to only update _value at the very end of the pattern.  But there are interesting risks when using the value as your 'gate' which are a bit out of scope for this article.

Use LazyInitializer.EnsureInitialized whenever possible:

If you were thinking of using the DCL to initialize a property, consider using LazyInitializer.EnsureInitialized instead.  It's easy and does all the magic for you, with one small caveat: the value has to be a reference type and the factory cannot return null.  If you end up needing something that doesn't fit that description, just be careful and make sure you implement the DCL correctly.
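
As a rough sketch of what that replacement looks like (ExpensiveObject is a hypothetical reference type, and this particular overload may invoke the factory more than once under contention while publishing only one result):

C#
// A minimal sketch of LazyInitializer.EnsureInitialized replacing a hand-rolled DCL.
// ExpensiveObject is a hypothetical reference type; the factory must not return null.
private ExpensiveObject _value;

public ExpensiveObject Value
{
  get { return LazyInitializer.EnsureInitialized(ref _value, () => new ExpensiveObject()); }
}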

Next, let's apply the DCL to a dictionary:
C#
object value;
if(!dictionary.TryGetValue(key, out value))
  lock(dictionary)
    if(!dictionary.TryGetValue(key, out value))
      dictionary.Add(key, value = newValue);

The above sample is basically a GetOrAdd method. It attempts to get the value if it already exists, and if not, it adds it. Conceptually, this streamlines the process so that only an add requires a lock. You have to check twice because, by the time a lock is acquired, another thread may have already added the value. Note that even in isolation, the initial unlocked TryGetValue can still overlap another thread's locked Add, so this pattern relies on the dictionary tolerating a read during a write; thread safety clearly breaks once other code is allowed to access the dictionary outside of the pattern.

But what about performance?  That depends. In most cases you could expect this to help improve performance, depending on how often you are getting the value for the first time versus subsequent times. Now let's look at another example:

C#
object value;
if(!dictionary.TryGetValue(key, out value))
  lock(dictionary)
    if(!dictionary.TryGetValue(key, out value))
      dictionary.Add(key, value = newValueFactory()); 

In this example, instead of simply adding the value, we execute a value factory to generate the value. Although thread safe under isolation, this can cause very poor performance. The longer newValueFactory takes to return a value, the less performant this operation becomes. Its only saving grace is that reading existing values is not blocked. The problem is that when reading missing values, there is not not only a potentially long wait, there is also a queue of other threads waiting their turn to add other values to the collection.

What if we apply optimism to our formula...

C#
object value;
if(!dictionary.TryGetValue(key, out value)) {
  object created = newValueFactory();
  lock(dictionary)
    if(!dictionary.TryGetValue(key, out value))
      dictionary.Add(key, value = created);
}

This sample solves one problem but creates another. Instead of locking while generating the value, we generate the value before acquiring the lock. This means we eliminate long lock times, but we will very likely end up executing redundant value factories and wasting CPU cycles.

ConcurrentDictionary's GetOrAdd method has the same problem. You can pass it a value factory and it will efficiently manage adding and getting the value, but the value factory can be executed multiple times, and you can't be sure which result wins until the method has completed. This is great for cases where you may simply need to initialize a lightweight object. But in cases where you may be doing some heavy or longer-running operations, you will end up wasting cycles as threads compete to be the first to generate and add the value.  
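
A commonly suggested mitigation, which is not part of this article's mechanisms, is to store Lazy<T> wrappers in the ConcurrentDictionary; GetOrAdd may still construct and discard extra wrappers, but only the winning wrapper ever runs the heavy factory. The key and newValueFactory names below are borrowed from the earlier examples:

C#
// Sketch: wrapping values in Lazy<T> so the expensive factory runs at most once
// per key. GetOrAdd may race and discard extra Lazy<object> instances, but those
// are cheap; only the winning wrapper's Value ever invokes newValueFactory.
var cache = new ConcurrentDictionary<string, Lazy<object>>();

object value = cache.GetOrAdd(
  key,
  k => new Lazy<object>(newValueFactory, LazyThreadSafetyMode.ExecutionAndPublication)
).Value;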

DCL Bad?  Yes, assume it's bad, almost always...  

The real problem with the DCL pattern is that most of the time it's not implemented in a way that guarantees the multiple-reads-only-while-no-writes rule won't be broken. Once again, if you got this far you:

  1. probably got lucky,
  2. didn't see the symptoms of an underlying problem,
  3. or are not really exposing your code to an asynchronous environment.

Combining the DCL and Read/Write  

As you wind through the long path in the dark woods of threading and synchronization you eventually arrive at the need to use Read/Write locks to maximize performance balanced with reliability.

Mechanism 1: Basic Collection/Dictionary Access 

Once again, we look over at System.Collections.Concurrent and see that it may have exactly what we need. But sometimes we are not given the luxury of using these classes or their non-blocking ways just don't cut it.

First things first: we must synchronize read/write access to our data in order to avoid breaking the multiple-reads-only-while-no-writes rule. I will use a SortedDictionary for this example since it not only isn't synchronized, but also offers us the extra benefit of keeping our entries sorted. I will use the ReaderWriterLockSlim Extensions I provided earlier to simplify the sample...

C#
 // Need a single instance to control read/write access to our collection.
private readonly ReaderWriterLockSlim _syncLock
  = new ReaderWriterLockSlim();

private readonly SortedDictionary<string,object> _myDictionary
  = new SortedDictionary<string,object>();


protected object GetEntryByKeyInternal(string key)
{
    object result = null;
    _myDictionary.TryGetValue(key, out result);
    return result;
}

public object GetEntryByKey(string key)
{
  return _syncLock.ReadValue(()=>GetEntryByKeyInternal(key));
}

public void AddEntry(string key, object value)
{
  _syncLock.Write(()=>_myDictionary.Add(key,value));
}

public object GetOrAddEntry(string key, object addValue)
{
  object value = null;
  // This incorporates the DCL within a cascade of read, read upgradable,
  // and then write locks.
  _syncLock.ReadWriteConditionalOptimized(
    (isWriteLock) => (value = GetEntryByKeyInternal(key))==null,
    () => _myDictionary.Add(key,value = addValue)
  );
  return value;
}

In the above sample we present some interface methods for synchronizing an internal dictionary. This is pretty basic read/write locking, with a special example for GetOrAddEntry. If you review the source code for the ReaderWriterLockSlim Extensions, you will see that the ReadWriteConditionalOptimized method does the following:

  1. Gets a read lock.
  2. Then, while the read lock is held, tries to get the value from the dictionary.
  3. If no value is present, it then gets an upgradeable read lock.
  4. Then, while the upgradeable read lock is held, tries to get the value from the dictionary.
  5. If no value is present, it then gets a write lock within the upgradeable read.
  6. Then, while the write lock is held, it tries to get the value from the dictionary one last time and, if no value is present, updates value and adds the addValue instead.
  7. Returns the value.

This procedure optimizes use of the DCL and ReaderWriterLockSlim. You can only have one upgradeable read lock at a time, so it's better to do a standard read lock on the first pass.

You may have noticed that the extension rechecks for the value after upgrading to a write lock.  Although unnecessary for later versions of ReaderWriterLockSlim, my take on this was that I'd prefer to be sure, since it seems possible that the condition function's return value could change from one lock to the next. 

Some of you at this point are probably saying, duh, or of course, or okay what's next. But this presents an initial important procedure for synchronizing the collection/dictionary using read/write locks. You really can't get away with not synchronizing adding a value and getting a value. It either has to be done with the lock statement or with a read/write mechanism.

So we're done right?  Only if you are only concerned about the performance when updating a collection's contents. But what about when you need to synchronize an individual key/value?  What if you are using a value factory and don't want to abuse CPU cycles unnecessarily?

Mechanism 2: Block Threads While A Key/Value Is Created 

You may want to stop here. This one is a bit much to wrap your head around. People (including me) have written entire classes to handle this. You may want to refactor your code to use a concurrent class and wash your hands of it. But once again, you may be faced with a situation where you don't have that luxury.

Good practice says, if you can avoid locking, do so. Acquiring a lock of any kind takes extra time and may block other threads, so whenever possible it's good to avoid locks. One case where this is true is when creating a value before adding it. If you are generating a value and that operation takes some time to execute, then using only Mechanism 1 may block all threads unnecessarily. The only threads you need to block are the ones that need access to the value you are about to update. How do we do that?

You will still need to leverage the previous mechanism, but now we have to repeat the pattern for every entry in the dictionary!  In an actual application, I have a base class which will periodically clean up and dispose of unused ReaderWriterLockSlim objects. But that goes too far off topic, into the dispose pattern as well as cleanup threads and cleanup optimization. So to keep it simple, let's assume for this example that our ReaderWriterLockSlim registry just holds on to the locks until the parent class is thrown away.

First let's prep the entry lock registry:

C#
private readonly ConcurrentDictionary<string, ReaderWriterLockSlim> _entryLockRegistry
  = new ConcurrentDictionary<string, ReaderWriterLockSlim>();

protected ReaderWriterLockSlim GetLock(string key)
{
    ReaderWriterLockSlim result;
    {
        // Since more than one ReaderWriterLockSlim can be created,
        // we need to dispose of the unneeded one.
        ReaderWriterLockSlim created = null;
        result = _entryLockRegistry.GetOrAdd(key, k => created = new ReaderWriterLockSlim());
        if (created != null && created != result)
            created.Dispose();
    }
    return result;
}

Just to be safe let's update our public GetEntryByKey and AddEntry methods...

C#
public object GetEntryByKey(string key)
{
  return GetLock(key).ReadValue(()=> // Always lock on key first to prevent deadlocks.
    _syncLock.ReadValue(()=>GetEntryByKeyInternal(key))
  );
}

public void AddEntry(string key, object value)
{
  GetLock(key).Write(()=> // Always lock on key first to prevent deadlocks.
    _syncLock.Write(()=>_myDictionary.Add(key,value))
  );
}  

Now let's add our factory GetOrAddEntry method...

C#
public object GetOrAddEntry(string key, Func<object> addValueFactory)
{
  object value = null;
  GetLock(key).ReadWriteConditionalOptimized(
    (isWriteLock) => (value = GetEntryByKeyInternal(key))==null,
    () => value = GetOrAddEntry(key,addValueFactory())
  );
  return value;
}

The above code adds another method for adding an entry via a factory. It looks basically the same as the previous GetOrAddEntry method, except with a couple of differences: it uses a unique ReaderWriterLockSlim for that specific key, and while holding a lock on that key, it leverages the previous GetOrAddEntry method to add the value.

This two-level chain of locks ensures that the actual read/write time on the collection is kept to an absolute minimum, while the creation and adding of a value blocks only the subsequent threads attempting to access that same key until the first thread is finished. Thus we get true synchronization with optimal performance.
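
A hedged usage sketch, where LoadReport stands in for any slow, hypothetical value factory: concurrent callers asking for the same key serialize on that key's lock, while callers asking for other keys proceed unblocked.

C#
// Hypothetical usage: the expensive load runs once per key; concurrent callers
// for the same key wait on that key's lock, callers for other keys do not.
object report = GetOrAddEntry("monthly-report", () => LoadReport("monthly-report"));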

Obvious Criticism: The downside here is that as I said earlier, the ReaderWriterLockSlim objects don't get cleaned up and just hang out, but that's another topic. The important thing to remember is you may actually negatively impact performance if you are too aggressively cleaning up and disposing of them.

Conclusion/Summary

I really want to thank everyone who has criticized and scrutinized my samples and code before. It hasn't been easy to get it right. But I really feel that from your advice, I've done my best to understand and make something that really works. The  journey has been difficult. I'm sure there are plenty that agree that threading and synchronization is not an easy topic. But there are solutions. Some may be easy, some may be overkill. But there is an answer.

Important tips/points to walk away with...  

  • Repetitive lightweight operations may actually end up being slower when done in parallel.
  • Stay away from multiple threads and synchronization if you don't need it.
  • Use concurrent classes if necessary but keep in mind what they do under the hood.
  • Write simpler and safer locking and synchronization code using extensions.
  • The lock keyword (Monitor.Enter) is actually quite fast and definitely faster than ReaderWriterLockSlim. Don't be afraid to use it when prototyping code for synchronization.
  • Use timeouts!  Under most circumstances there is no reason you shouldn't have a timeout if you need a lock. The time you think you don't need it may actually be the time you need it the most. If you expect your locks to be quick, then any timeout will help reveal deadlocks or deeper issues with your code.
  • The double-checked locking pattern is not thread safe unless the checking part is guaranteed thread safe under all conditions. In the end, the DCL has merit for performance when combined with proper read locking and when write operations may take some time to complete. 

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

