Performance of different lock methods in .NET
Something that pops up from time to time is the question of which thread synchronization technique to use when dealing with shared resources in a multi-threaded environment. And yes, I know the best option would be to not share resources at all, but that is not always possible, especially when working with legacy code-bases and applications that have grown organically over time.
Shared resource access types
I've included the following operations in the test.
public interface ICacheTest
{
    void Add(CacheItem item);
    CacheItem Get(Guid id);
    CacheItem Search(Guid id);
    void Clear();
}
Add
Adds an item to the dictionary
Get
Gets an item with key lookup.
Search
There will always be scenarios where a key lookup isn't enough. Most code-bases that I've seen tend to go the LINQ way; some have extra lookup dictionaries. For this test I wanted to include something that takes a little more time than a lookup, so a LINQ query is used.
Techniques in the test
Unsafe
No thread synchronization; the test class basically just wraps a Dictionary<Guid, CacheItem>.
public class CacheUnsafe : ICacheTest
{
    private readonly Dictionary<Guid, CacheItem> _cache = new Dictionary<Guid, CacheItem>();
    public void Add(CacheItem item)
    {
        _cache.Add(item.Id, item);
    }
    public CacheItem Get(Guid id)
    {
        if (_cache.ContainsKey(id))
            return _cache[id];
        return null;
    }
    public CacheItem Search(Guid id)
    {
        var item = (from x in _cache.Values where x.Id.Equals(id) select x).FirstOrDefault();
        return item;
    }
    public void Clear()
    {
        _cache.Clear();
    }
}
Full lock
Using the lock keyword to allow only one thread at a time to execute within the block of code. This is usually my go-to method, as it is easy to implement and usually performs well enough.
public class CacheFullLock : ICacheTest
{
    private readonly Dictionary<Guid, CacheItem> _cache = new Dictionary<Guid, CacheItem>();
    public void Add(CacheItem item)
    {
        lock (_cache)
        {
            _cache.Add(item.Id, item);
        }
    }
    public CacheItem Get(Guid id)
    {
        lock (_cache)
        {
            if (_cache.ContainsKey(id))
                return _cache[id];
        }
        return null;
    }
    public CacheItem Search(Guid id)
    {
        lock (_cache)
        {
            var item = (from x in _cache.Values where x.Id.Equals(id) select x).FirstOrDefault();
            return item;
        }
    }
    public void Clear()
    {
        lock (_cache)
        {
            _cache.Clear();
        }
    }
}
Full lock with indexed search
Same as the full lock, but also tries to optimize the search into a lookup. In this simplified test the extra index looks somewhat unnecessary, since the search key is the same as the dictionary key, but in real-world code the index could map multiple search parameters to a key instead of running the LINQ query.
public CacheItem Search(Guid id)
{
    lock (_cache)
    {
        if (_index.ContainsKey(id))
            return _cache[_index[id]];
        return null;
    }
}
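The snippet above only shows Search; for the index to stay correct, Add and Clear also have to maintain it under the same lock. Here is a minimal sketch of what the full class could look like. The `_index` field and the minimal `CacheItem` stand-in are my assumptions, since the original only shows the Search method:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for the post's CacheItem type.
public class CacheItem
{
    public Guid Id { get; set; }
}

// Sketch: full-lock cache with a secondary index for Search. Here the index
// maps Id to Id, but in real code it could map e.g. a name or composite key
// to the primary key.
public class CacheFullLockIndexed
{
    private readonly Dictionary<Guid, CacheItem> _cache = new Dictionary<Guid, CacheItem>();
    private readonly Dictionary<Guid, Guid> _index = new Dictionary<Guid, Guid>();

    public void Add(CacheItem item)
    {
        lock (_cache)
        {
            _cache.Add(item.Id, item);
            _index.Add(item.Id, item.Id); // keep the index in sync under the same lock
        }
    }

    public CacheItem Search(Guid id)
    {
        lock (_cache)
        {
            if (_index.ContainsKey(id))
                return _cache[_index[id]];
            return null;
        }
    }

    public void Clear()
    {
        lock (_cache)
        {
            _cache.Clear();
            _index.Clear(); // clearing only _cache would leave dangling index entries
        }
    }
}
```

The important detail is that both dictionaries are mutated inside the same lock; updating them under separate locks would let readers observe an index entry pointing at a missing cache entry.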
ReaderWriterLock
Using the ReaderWriterLock class, which allows multiple readers to access the resource at the same time, but only one writer at a time.
public class CacheWriteLock : ICacheTest
{
    private readonly Dictionary<Guid, CacheItem> _cache = new Dictionary<Guid, CacheItem>();
    private readonly ReaderWriterLock _lock = new ReaderWriterLock();
    public void Add(CacheItem item)
    {
        _lock.AcquireWriterLock(1000);
        try
        {
            _cache.Add(item.Id, item);
        }
        finally
        {
            _lock.ReleaseWriterLock();
        }
    }
    public CacheItem Get(Guid id)
    {
        _lock.AcquireReaderLock(1000);
        try
        {
            if (_cache.ContainsKey(id))
                return _cache[id];
            return null;
        }
        finally
        {
            _lock.ReleaseReaderLock();
        }
    }
    public CacheItem Search(Guid id)
    {
        _lock.AcquireReaderLock(1000);
        try
        {
            var item = (from x in _cache.Values where x.Id.Equals(id) select x).FirstOrDefault();
            return item;
        }
        finally
        {
            _lock.ReleaseReaderLock();
        }
    }
    public void Clear()
    {
        _lock.AcquireWriterLock(1000);
        try
        {
            _cache.Clear();
        }
        finally
        {
            _lock.ReleaseWriterLock();
        }
    }
}
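As a side note, ReaderWriterLockSlim (introduced in .NET Framework 3.5) is Microsoft's recommended replacement for ReaderWriterLock, with lower overhead and a simpler enter/exit API. A sketch of the same read/write paths on the slim variant, not part of the original benchmark (`CacheItem` is again a minimal stand-in):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Minimal stand-in for the post's CacheItem type.
public class CacheItem
{
    public Guid Id { get; set; }
}

// Sketch: the same cache built on ReaderWriterLockSlim, the recommended
// replacement for ReaderWriterLock in newer code.
public class CacheSlimLock
{
    private readonly Dictionary<Guid, CacheItem> _cache = new Dictionary<Guid, CacheItem>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public void Add(CacheItem item)
    {
        _lock.EnterWriteLock(); // exclusive: blocks all readers and writers
        try
        {
            _cache.Add(item.Id, item);
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }

    public CacheItem Get(Guid id)
    {
        _lock.EnterReadLock(); // shared: other readers may enter concurrently
        try
        {
            return _cache.TryGetValue(id, out var item) ? item : null;
        }
        finally
        {
            _lock.ExitReadLock();
        }
    }
}
```

The enter/exit calls block instead of taking a timeout; timeout variants (TryEnterReadLock and TryEnterWriteLock) exist if the 1000 ms behavior above needs to be preserved.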
ConcurrentDictionary
The built-in concurrent dictionary, which is thread-safe. I tried to use this a long time ago when it was first introduced, but found it under-performing compared to a simple full-lock strategy, so up to the test it goes again. At least it is easy to implement, and you do not have to care about thread safety yourself as it is handled internally.
public class CacheConcurrent : ICacheTest
{
    private readonly ConcurrentDictionary<Guid, CacheItem> _cache = new ConcurrentDictionary<Guid, CacheItem>();
    public void Add(CacheItem item)
    {
        _cache.AddOrUpdate(item.Id, item, (key, oldValue) => item);
    }
    public CacheItem Get(Guid id)
    {
        // TryGetValue is a single atomic read; ContainsKey followed by the
        // indexer could race with a concurrent remove.
        return _cache.TryGetValue(id, out var item) ? item : null;
    }
    public CacheItem Search(Guid id)
    {
        var item = (from x in _cache.Values where x.Id.Equals(id) select x).FirstOrDefault();
        return item;
    }
    public void Clear()
    {
        _cache.Clear();
    }
}
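Worth keeping in mind: ConcurrentDictionary is only thread-safe per call, so check-then-act sequences such as ContainsKey followed by the indexer can still race with a concurrent remove. Its atomic helper methods are the idiomatic way to use it. A small usage sketch, not taken from the original post:

```csharp
using System;
using System.Collections.Concurrent;

// Sketch: the atomic helpers on ConcurrentDictionary avoid check-then-act races.
public static class ConcurrentIdioms
{
    public static void Demo()
    {
        var cache = new ConcurrentDictionary<Guid, string>();
        var id = Guid.NewGuid();

        // GetOrAdd inserts the value only if the key is absent, in one call.
        var value = cache.GetOrAdd(id, _ => "created");

        // TryGetValue is a single atomic read; no separate ContainsKey needed.
        if (cache.TryGetValue(id, out var found))
            Console.WriteLine(found);

        // TryRemove both removes the entry and returns its value in one call.
        cache.TryRemove(id, out _);
    }
}
```

Note that the LINQ query over `_cache.Values` in the Search method above gets no such help: enumerating the values takes a snapshot-like walk over the whole structure, which is part of why searching performs so poorly in the results below.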
Single-threaded test
The test rig:
private void ExecuteTestSingle(ICacheTest cache, int dataCount, int iterations)
{
    var type = cache.GetType().Name;
    try
    {
        var testData = GenerateTestData(dataCount);
        var addTimings = new RunningAverage();
        var clearTimings = new RunningAverage();
        var getTimings = new RunningAverage();
        var searchTimings = new RunningAverage();
        for (int i = 0; i < iterations; i++)
        {
            for (int dataIndex = 0; dataIndex < testData.Count; dataIndex++)
            {
                var item = testData[dataIndex];
                addTimings.Add(TimeAction(() =>
                {
                    cache.Add(item);
                }));
                getTimings.Add(TimeAction(() =>
                {
                    cache.Get(item.Id);
                }));
                searchTimings.Add(TimeAction(() =>
                {
                    cache.Search(item.Id);
                }));
            }
            clearTimings.Add(TimeAction(cache.Clear));
        }
        Log($"{type}\tAdd\tDataCount\t{dataCount}\tIterations:\t{iterations}\tAverage:\t{addTimings.Average:0.000}\tticks");
        Log($"{type}\tGet\tDataCount\t{dataCount}\tIterations:\t{iterations}\tAverage:\t{getTimings.Average:0.000}\tticks");
        Log($"{type}\tSearch\tDataCount\t{dataCount}\tIterations:\t{iterations}\tAverage:\t{searchTimings.Average:0.000}\tticks");
        Log($"{type}\tClear\tDataCount\t{dataCount}\tIterations:\t{iterations}\tAverage:\t{clearTimings.Average:0.000}\tticks");
    }
    catch (Exception ex)
    {
        Log($"{type}\tFail\tDataCount\t{dataCount}\tIterations:\t{iterations}\tFailed\t{ex.Message}");
    }
}
private float TimeAction(Action action)
{
    var sw = Stopwatch.StartNew();
    action();
    sw.Stop();
    return sw.ElapsedTicks;
}
Multi-threaded test
- 1 thread Adds
- 1 thread Gets
- 1 thread Searches
The test rig:
private void ExecuteTestMulti(ICacheTest cache, int dataCount, int iterations)
{
    var type = cache.GetType().Name;
    try
    {
        var testData = GenerateTestData(dataCount);
        var getAverage = new RunningAverage();
        var searchAverage = new RunningAverage();
        var writeAverage = new RunningAverage();
        for (int i = 0; i < iterations; i++)
        {
            foreach (var data in testData)
                cache.Add(data);
            var d = testData.GetRandom();
            var startTime = DateTime.UtcNow.AddMilliseconds(8);
            var getter = CreateThread(startTime, () => { cache.Get(d.Id); });
            var searcher = CreateThread(startTime, () => { cache.Search(d.Id); });
            var writer = CreateThread(startTime, () => { cache.Add(CacheItem.Create()); });
            getter.Thread.Start(startTime);
            writer.Thread.Start(startTime);
            searcher.Thread.Start(startTime);
            getter.Thread.Join();
            searcher.Thread.Join();
            writer.Thread.Join();
            if (!getter.Verify(type, Log)
                || !searcher.Verify(type, Log)
                || !writer.Verify(type, Log))
            {
                return;
            }
            getAverage.Add(getter.WorkIterations);
            searchAverage.Add(searcher.WorkIterations);
            writeAverage.Add(writer.WorkIterations);
            cache.Clear();
        }
        Log($"{type}\tGets\tDataCount\t{dataCount}\tIterations:\t{iterations}\tAverage:\t{getAverage.Average:0}\tgets");
        Log($"{type}\tSearch\tDataCount\t{dataCount}\tIterations:\t{iterations}\tAverage:\t{searchAverage.Average:0}\tsearches");
        Log($"{type}\tWrites\tDataCount\t{dataCount}\tIterations:\t{iterations}\tAverage:\t{writeAverage.Average:0}\twrites");
    }
    catch (Exception ex)
    {
        Log($"{type}\tFail\tDataCount\t{dataCount}\tIterations:\t{iterations}\tFailed\t{ex.Message}");
    }
}
private WorkerResult CreateThread(DateTime start, Action action)
{
    var work = new WorkerResult();
    work.Thread = new Thread(context =>
    {
        try
        {
            while (DateTime.UtcNow < start)
                Thread.Sleep(0);
            var sw = Stopwatch.StartNew();
            while (sw.ElapsedMilliseconds < 50)
            {
                action();
                work.Increment();
            }
        }
        catch (Exception ex)
        {
            work.Exception = ex;
        }
    })
    {
        IsBackground = true
    };
    return work;
}
private class WorkerResult
{
    public Thread Thread { get; set; }
    public Exception Exception { get; set; }
    public float WorkIterations => _workIterations;
    private float _workIterations;
    public void Increment()
    {
        _workIterations++;
    }
    public bool Verify(string type, Action<string> logger)
    {
        if (Exception != null)
        {
            logger($"{type}\tFail\tDataCount\t-\tIterations:\t-\tFailed\t{Exception.Message}");
            return false;
        }
        return true;
    }
}
Results
Starting with the single-threaded results.
Not much surprising here: the ConcurrentDictionary lost the search category, and unsurprisingly the purpose-built indexed lookup was the fastest, which became clear with a little more data. The Add and Get categories were won by the unsafe implementation; not really interesting for this test, but good as a baseline. The ConcurrentDictionary seems to have lost most categories.
Multi-threaded results
The ConcurrentDictionary actually holds its place quite well, as long as you use it as a dictionary only. Doing searches in it is very slow.
Conclusions
The numbers jump around a little, and the main conclusion I can draw is that if performance is key to your success, you should run your own performance tests for your exact scenario. For the general case, I think I'll stick with the full-lock strategy, with extra indexes where needed, for now. In my mind, the extra complexity of a ReaderWriterLock is not worth the risk of me or someone else getting it wrong, at least not for a one-writer, two-readers scenario.