A High Performance Multi-Threaded LRU Cache

3 Feb 2008 · CPOL · 5 min read
This implementation of an LRU Cache attempts to provide fast and reliable access to recently used data in a multi-threaded environment.

Introduction

An LRU Cache is a key-value data container, constrained by size and/or age, that discards the least recently used objects first. The algorithm must keep track of when each object was last accessed, and guaranteeing that the item discarded is always the least recently used one can be expensive.

Other LRU Algorithms

Most LRU Caching algorithms consist of two parts: a dictionary and a list. The dictionary guarantees quick access to your data, and the list, ordered by age of the objects, controls the lifespan of objects and determines which objects are to be removed first.

A simple LRU Cache implementation uses a doubly linked list: new items are added to the head, items are removed from the tail, and existing items are moved to the head whenever they are referenced (touched). This works well for single-threaded applications but becomes very slow in a multi-threaded environment, because the linked list must be locked every time it is modified. Since even a read moves the touched item to the head, thread locks are required on insert, read, and delete operations alike, and that locking is what causes the slowness.
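
As an illustration, a minimal sketch of this classic approach might look like the following (a hypothetical SimpleLruCache written for discussion, not part of the article's source); note that even a plain read has to take the lock so it can move the node to the head of the list:

C#
using System.Collections.Generic;

// Hypothetical sketch (not from the article's source): the classic
// dictionary + doubly linked list LRU, where every operation shares one lock.
public class SimpleLruCache<TKey, TValue>
{
    private readonly int _capacity;
    private readonly object _sync = new object();
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map
        = new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _list
        = new LinkedList<KeyValuePair<TKey, TValue>>();

    public SimpleLruCache( int capacity ) { _capacity = capacity; }

    public bool TryGet( TKey key, out TValue value )
    {
        lock( _sync )   // even reads must lock, because a read moves the node to the head
        {
            LinkedListNode<KeyValuePair<TKey, TValue>> node;
            if( _map.TryGetValue( key, out node ) )
            {
                _list.Remove( node );
                _list.AddFirst( node );
                value = node.Value.Value;
                return true;
            }
            value = default( TValue );
            return false;
        }
    }

    public void Add( TKey key, TValue value )
    {
        lock( _sync )
        {
            LinkedListNode<KeyValuePair<TKey, TValue>> node;
            if( _map.TryGetValue( key, out node ) )
                _list.Remove( node );
            else if( _map.Count >= _capacity )
            {
                // evict the least recently used item from the tail
                _map.Remove( _list.Last.Value.Key );
                _list.RemoveLast();
            }
            _map[key] = _list.AddFirst( new KeyValuePair<TKey, TValue>( key, value ) );
        }
    }
}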

Another approach is to mark each item with a timestamp (a DateTime or an incremented integer) every time it is touched. When the cache becomes full, the list of items is sorted by timestamp and truncated. This keeps insert and read operations very fast because no locks are required, but it makes deletes painfully slow, normally O(N log N) for the sort.
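
A compressed sketch of that idea (again hypothetical, not from the article's source) shows why reads stay cheap while trimming does not; a real implementation would also need a thread-safe dictionary underneath:

C#
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

// Hypothetical sketch of the timestamp approach: a read only stamps the entry,
// but trimming must sort every entry by age, which is the O(N log N) step.
public class TimestampedCache<TKey, TValue>
{
    private class Entry { public TValue Value; public long LastTouched; }

    private readonly Dictionary<TKey, Entry> _map = new Dictionary<TKey, Entry>();
    private long _clock;

    public bool TryGet( TKey key, out TValue value )
    {
        Entry entry;
        if( _map.TryGetValue( key, out entry ) )
        {
            // mark the entry as recently used without taking a lock
            Interlocked.Exchange( ref entry.LastTouched, Interlocked.Increment( ref _clock ) );
            value = entry.Value;
            return true;
        }
        value = default( TValue );
        return false;
    }

    public void Trim( int capacity )
    {
        // the slow path: order all entries by age and discard the oldest ones
        List<TKey> stale = _map.OrderBy( p => p.Value.LastTouched )
                               .Take( Math.Max( 0, _map.Count - capacity ) )
                               .Select( p => p.Key )
                               .ToList();
        foreach( TKey key in stale )
            _map.Remove( key );
    }
}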

Overview

The algorithm I chose divides items into time slices I call AgeBags. Items are added to the current AgeBag until the bag becomes full or its designated time span expires; that AgeBag is then closed and the next AgeBag becomes the current one. When an item is touched, it is marked with the index of the current bag. When the cache gets too full or too old, the oldest AgeBag is emptied: any nodes in it that have been touched are moved to their correct AgeBag, and the rest of the nodes in the bag are removed. This allows reads to be performed without locking and makes deletes much faster, because only items in the oldest AgeBag are ever moved and no real sorting is ever performed.

In this system, even though the exact order of items is not maintained, it is still a true LRU Cache, because everything remaining in the oldest bag (after the touched items have been moved out) is removed at the same time. Therefore, no item remains in the cache that is older than the removed items.
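
The two structures at the heart of this scheme can be pictured roughly like this; it is a simplified sketch using the field names that appear in the code excerpts later in the article, not the complete implementation:

C#
// Simplified sketch of the AgeBag scheme (field names match the excerpts
// below, but this is not the complete implementation).
class AgeBag
{
    public Node first;           // head of a singly linked list of nodes added while this bag was current
    public DateTime startTime;   // when this bag became the current bag
    public DateTime stopTime;    // when this bag was closed
}

class Node
{
    public object Value;         // the cached item
    public AgeBag ageBag;        // the bag this node was last touched in; null once removed from the LifespanMgr
    public Node next;            // next node in that bag's list
}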

My implementation of LRUCache contains two distinct sections which are coded as subclasses. The LifespanMgr class tracks the item usage and determines which items to remove when the cache gets full/old. The Index class provides Dictionary key/value access to all objects stored in the cache. The Index has delegates to calculate the key of any item added to the cache, and to load an item by key/value from an external source.

Instead of storing item nodes in the Index, the Index holds WeakReference objects that point at the item nodes. By doing this, the Index does not prevent items from being garbage collected. This also allows old items to be removed from the AgeBag without having to remove the items from the index. Once a Node is removed from the LifespanMgr, it becomes eligible for garbage collection. The Node is not removed from the Indexes immediately. If an Index retrieves the node prior to garbage collection, it is reinserted into the current AgeBag’s node list. If it has already been garbage collected, a new object gets loaded. If the Index size exceeds twice the cache’s capacity, the Index is cleared and rebuilt.
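
The effect of this indirection can be sketched as follows; this is a simplified, hypothetical Get method that ignores locking and the re-insertion into the current AgeBag that the real code performs:

C#
using System;
using System.Collections.Generic;

// Hypothetical sketch: the index maps keys to WeakReference wrappers, so the
// dictionary alone never keeps a cached item alive.
class WeakIndex<TKey, TValue> where TValue : class
{
    private readonly Dictionary<TKey, WeakReference> _lookup = new Dictionary<TKey, WeakReference>();
    private readonly Func<TKey, TValue> _loadItem;   // loads an item from the external source

    public WeakIndex( Func<TKey, TValue> loadItem ) { _loadItem = loadItem; }

    public TValue Get( TKey key )
    {
        WeakReference weak;
        TValue value = null;
        if( _lookup.TryGetValue( key, out weak ) )
            value = weak.Target as TValue;      // null if the item has already been collected
        if( value == null )
        {
            value = _loadItem( key );           // reload and re-index the item
            _lookup[key] = new WeakReference( value );
        }
        return value;
    }
}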

Locking

Before I get into the code, I should provide a brief explanation of all the locking mechanisms used by the LRUCache:

The first and simplest is the Interlocked class, which performs increment and decrement operations as single atomic operations that cannot be interrupted partway through. That is all that is required to make these counter updates thread safe, and it is much faster than other locking methods.
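
For example, the cache adjusts its item counter from any thread like this (illustrative methods; the _curCount field mirrors the counter used in the code later in the article):

C#
private int _curCount;   // number of items currently managed by the cache

private void NodeAdded()
{
    // atomic read-modify-write; no lock object is required
    Interlocked.Increment( ref _curCount );
}

private void NodeRemoved()
{
    Interlocked.Decrement( ref _curCount );
}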

The next locking mechanism is the Monitor class, which is what the C# lock keyword uses under the covers. One method of note is Monitor.TryEnter(obj), which returns immediately with a bool indicating whether the lock was acquired. Unlike the standard Enter() method, it never blocks: once the first thread has the lock, other threads simply skip the operation instead of waiting for the first thread to complete. This is good for cases where speed is more important than the action being locked, and it is ideal when the locked code only needs to be performed by one thread.

C#
if( Monitor.TryEnter( obj ) )
    try
    {
        // do something only if the lock was not busy
    }
    finally
    {
        Monitor.Exit( obj );
    }

The last locking mechanism used is ReaderWriterLock. At any given time, it allows either concurrent read access for multiple threads or write access for a single thread, so it provides better throughput than Monitor when reads dominate. Because using ReaderWriter locks can get a little tricky, I use a wrapper class that acquires the lock, runs a provided delegate, and releases the lock again.
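
A wrapper along those lines (a sketch of the idea, not the exact class included in the download) acquires the appropriate lock, runs the supplied delegate, and guarantees the lock is released even if the delegate throws:

C#
using System;
using System.Threading;

// Sketch of a ReaderWriterLock wrapper that runs a delegate under the lock.
public class RwLockWrapper
{
    private readonly ReaderWriterLock _lock = new ReaderWriterLock();

    public T Read<T>( Func<T> reader )
    {
        _lock.AcquireReaderLock( Timeout.Infinite );
        try { return reader(); }
        finally { _lock.ReleaseReaderLock(); }
    }

    public void Write( Action writer )
    {
        _lock.AcquireWriterLock( Timeout.Infinite );
        try { writer(); }
        finally { _lock.ReleaseWriterLock(); }
    }
}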

Code Review

Because the LifespanMgr is the most interesting part of the code, I will spend most of the rest of this article discussing it; the remaining code is heavily commented, so you shouldn't have any trouble following it.

C#
public void Touch()
{
    if( _value != null && ageBag != _mgr._currentBag )
    {
        if( ageBag == null )
            lock( _mgr )
                if( ageBag == null )
                {
                    // if node.AgeBag==null then the object is not
                    // currently managed by LifespanMgr so add it
                    next = _mgr._currentBag.first;
                    _mgr._currentBag.first = this;
                    Interlocked.Increment( ref _mgr._owner._curCount );
                }
        ageBag = _mgr._currentBag;
        Interlocked.Increment( ref _mgr._currentSize );
    }
    _mgr.CheckValid();
}

Each time an indexed item is added or referenced, LifespanMgr.Node.Touch() is called; it (re)inserts the node if needed and points the node's ageBag field at the current AgeBag. Touch() also calls LifespanMgr.CheckValid().

C#
public void CheckValid()
{
    DateTime now = DateTime.Now;
    // if lock is currently held then skip and let next Touch perform cleanup.
    if( (_currentSize > _bagItemLimit || now > _nextValidCheck) 
             && Monitor.TryEnter( this ) )
        try
        {
            if( (_currentSize > _bagItemLimit || now > _nextValidCheck) )
            {
                // if cache is no longer valid throw contents
                // away and start over, else cleanup old items
                if( _current > 1000000 || (_owner._isValid != null 
                                          && !_owner._isValid()) )
                    _owner.Clear();
                else
                    CleanUp( now );
            }
        }
        finally
        {
            Monitor.Exit( this );
        }
}

CheckValid only performs an action when the current AgeBag gets full or the designated time span expires. When either of those conditions is met, an optional IsValid() delegate is called to check the health of the overall cache. If the cache is invalid or out of date, all items in the cache are removed and will be reloaded the next time an index accesses them. If IsValid returns true, the LifespanMgr.CleanUp() method is called.

C#
public void CleanUp( DateTime now )
{
    if( _current != _oldest )
        lock( this )
        {
            //calculate how many items should be removed
            DateTime maxAge = now.Subtract( _maxAge );
            DateTime minAge = now.Subtract( _minAge );
            int itemsToRemove = _owner._curCount - _owner._capacity;
            AgeBag bag = _bags[_oldest % _size];
            while( _current != _oldest && (_current-_oldest>_size - 5 
                    || bag.startTime < maxAge || (itemsToRemove > 0 
                    && bag.stopTime > minAge)) )
            {
                // cache is still too big / old so remove oldest bag
                Node node = bag.first;
                bag.first = null;
                while( node != null )
                {
                    Node next = node.next;
                    node.next = null;
                    if( node.Value != null && node.ageBag != null )
                        if( node.ageBag == bag )
                        {
                            // item has not been touched since bag was
                            // closed, so remove it from LifespanMgr
                            ++itemsToRemove;
                            node.ageBag = null;
                            Interlocked.Decrement( ref _owner._curCount );
                        }
                        else
                        {
                            // item has been touched and should
                            // be moved to correct age bag now
                            node.next = node.ageBag.first;
                            node.ageBag.first = node;
                        }
                    node = next;
                }
                // increment oldest bag
                bag = _bags[(++_oldest) % _size];
            }
            OpenCurrentBag( now, ++_current );
            CheckIndexValid();
        }
}

CleanUp does the grunt work of moving touched items out of the oldest bag and deleting the rest. It also calls CheckIndexValid() to see if the indexes need to be recreated, and OpenCurrentBag(), which sets up the next current AgeBag.

Usage

The UserCache.cs file contains an example of how the cache can be used:

C#
/// <summary>retrieve items by userid</summary>
public UserData FindByUserID( int userid )
{
    return _findByUserID[userid];
}

/// <summary>retrieve items by username</summary>
public UserData FindByUserName( string username )
{
    return _findByUserName[username];
}

/// <summary>constructor creates cache and multiple indexes</summary>
private UserCache() : base( 10000, TimeSpan.FromMinutes( 1 ), 
                                   TimeSpan.FromHours( 1 ), null )
{
    _isValid = IsDataValid;
    _findByUserID = AddIndex<int>( "UserID", delegate( UserData user ) 
                                 { return user.UserID; }, LoadFromUserID );
    _findByUserName = AddIndex<string>( "UserName", delegate( UserData user ) 
                                      { return user.UserName; }, LoadFromUserName );
    IsDataValid();
}

Summary

This implementation of LRUCache attempts to provide fast, reliable access to recently used data in a multi-threaded environment.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Software Developer (Senior) NewsGator.com
United States

Brian Agnes has been programming professionally for 15 years using a variety of languages. Brian started using C# in 2002 and is an MCAD charter member.

Brian currently lives in Denver, Colorado and works as a Senior Developer for NewsGator.com (a leader in RSS aggregation).

Brian has been a regular presenter at local INETA .NET user groups.

Comments and Discussions

 
Bug: Potential memory leak when lots of cache misses happen (Dennis Doomen, 14-Dec-15)
Question: Here is an alternative ConcurrentDictionary + LinkedList implementation... (dmreference, 28-Jan-14)
Question: LifespanMgr Bug (KidCrazy, 23-Jan-14)
Question: What would be the implementation with .NET 4.0 (I'm Chris, 24-Mar-10)
Answer: Re: What would be the implementation with .NET 4.0 (brian_agnes, 24-Mar-10)

This is an interesting idea...

The ConcurrentDictionary may perform better than my solution (which was written several years prior to that class becoming available).

If anyone has the time to alter this and run performance tests, I would love to see the results.
Brian Agnes

-- Genius is 1 percent inspiration and 99% perspiration, which is why programmers sometimes smell really bad.

General: Question about Add & Remove method [modified] (I'm Chris, 24-Mar-10)
General: Re: Question about Add & Remove method (brian_agnes, 24-Mar-10)
General: Nice article, Algorithm name (JasonShort, 18-Feb-09)
General: Sorry for not responding sooner (brian_agnes, 1-Apr-08)
General: Query with multiple results (alexruf, 31-Mar-08)
General: Re: Query with multiple results (brian_agnes, 1-Apr-08)
General: Lifetime? [modified] (alexruf, 29-Mar-08)
Question: Change to Cleanup function? (philip.ivanovski, 19-Mar-08)
Answer: Re: Change to Cleanup function? (brian_agnes, 1-Apr-08)
General: Re: Change to Cleanup function? (kiquenet.com, 14-Jul-08)
Question: ASP.NET Cache (Eng. Jalal, 27-Feb-08)
Answer: Re: ASP.NET Cache (dave.dolan, 16-Jan-10)
General: reloading values (janpon, 18-Feb-08)
General: Re: reloading values (brian_agnes, 1-Apr-08)
General: New feature + performance question (I'm Chris, 14-Feb-08)
General: Re: New feature + performance question (brian_agnes, 1-Apr-08)
General: Re: New feature + performance question (I'm Chris, 24-Mar-10)
