
A High Performance Multi-Threaded LRU Cache

3 Feb 2008
This implementation of an LRU Cache attempts to provide fast and reliable access to recently used data in a multi-threaded environment.

Introduction

An LRU Cache is a key-value data container, constrained by size and/or age, that removes the least recently used objects first. The algorithm must keep track of the most recent time each object was accessed, which can make it expensive to guarantee that the least recently used item is always the one discarded.

Other LRU Algorithms

Most LRU Caching algorithms consist of two parts: a dictionary and a list. The dictionary guarantees quick access to your data, and the list, ordered by age of the objects, controls the lifespan of objects and determines which objects are to be removed first.

A simple LRU Cache implementation uses a doubly linked list: new items are added to the head, items are removed from the tail, and any existing item is moved to the head when referenced (touched). This algorithm works well for single-threaded applications but becomes very slow in a multi-threaded environment, because the linked list must be locked every time it is modified. Insert, read, and delete operations all require thread locks, and that is what causes the slowness.
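
For contrast, here is a minimal single-threaded sketch of that linked-list approach (the class and member names are my own illustration, not taken from the attached source). Every list manipulation below is a spot that would need a lock in a multi-threaded program:

```csharp
using System.Collections.Generic;

class SimpleLruCache<TKey, TValue>
{
    private readonly int _capacity;
    private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map;
    private readonly LinkedList<KeyValuePair<TKey, TValue>> _list;

    public SimpleLruCache( int capacity )
    {
        _capacity = capacity;
        _map = new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
        _list = new LinkedList<KeyValuePair<TKey, TValue>>();
    }

    public bool TryGet( TKey key, out TValue value )
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> node;
        if( _map.TryGetValue( key, out node ) )
        {
            // "touch": move the node to the head -- this is the step
            // that forces a lock in a multi-threaded program
            _list.Remove( node );
            _list.AddFirst( node );
            value = node.Value.Value;
            return true;
        }
        value = default( TValue );
        return false;
    }

    public void Add( TKey key, TValue value )
    {
        LinkedListNode<KeyValuePair<TKey, TValue>> existing;
        if( _map.TryGetValue( key, out existing ) )
            _list.Remove( existing );
        else if( _map.Count >= _capacity )
        {
            // evict the least recently used item from the tail
            _map.Remove( _list.Last.Value.Key );
            _list.RemoveLast();
        }
        _map[key] = _list.AddFirst( new KeyValuePair<TKey, TValue>( key, value ) );
    }
}
```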

Another approach is to mark each item's age with a timestamp (a DateTime or an incremented integer) when it is touched. When the cache becomes full, the list of items must be sorted and truncated. This keeps insert and read operations very fast because no locks are required, but it makes deletes painfully slow (normally O(N log N), for the sort).
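
A sketch of that timestamp approach (again, illustrative names of my own): reads just stamp the entry with an Interlocked counter, so the hot path is lock-free, but eviction pays the O(N log N) sorting cost described above:

```csharp
using System.Collections.Generic;
using System.Threading;

class TimestampCache<TKey, TValue>
{
    class Entry { public TValue Value; public long LastTouched; }

    private long _clock;
    private readonly int _capacity;
    private readonly Dictionary<TKey, Entry> _map = new Dictionary<TKey, Entry>();

    public TimestampCache( int capacity ) { _capacity = capacity; }

    public bool TryGet( TKey key, out TValue value )
    {
        Entry e;
        if( _map.TryGetValue( key, out e ) )
        {
            // lock-free "touch": just stamp the entry with the next tick
            e.LastTouched = Interlocked.Increment( ref _clock );
            value = e.Value;
            return true;
        }
        value = default( TValue );
        return false;
    }

    public void Add( TKey key, TValue value )
    {
        Entry e = new Entry();
        e.Value = value;
        e.LastTouched = Interlocked.Increment( ref _clock );
        _map[key] = e;
        if( _map.Count > _capacity )
            Evict();
    }

    private void Evict()
    {
        // the painful part: sort every entry by age (O(N log N)),
        // then throw away the oldest half
        List<KeyValuePair<TKey, Entry>> entries = new List<KeyValuePair<TKey, Entry>>( _map );
        entries.Sort( delegate( KeyValuePair<TKey, Entry> a, KeyValuePair<TKey, Entry> b )
                      { return a.Value.LastTouched.CompareTo( b.Value.LastTouched ); } );
        for( int i = 0; i < entries.Count / 2; i++ )
            _map.Remove( entries[i].Key );
    }
}
```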

Overview

The algorithm I chose divides items into time slices I call AgeBags. Items are added to the current AgeBag until the bag becomes full or its designated time span expires; then that AgeBag is closed and the next AgeBag becomes current. Items that are touched are marked with the index of the current bag. When the cache gets too full or too old, the oldest AgeBag is emptied: any nodes that have been touched are moved to the correct AgeBag, and the rest of the nodes in the bag are removed. This allows reads to be performed without locking, and makes deletes much faster because only items in the oldest AgeBag are ever moved and no real sorting is ever performed.

In this system, even though the exact order of items is not maintained, this is still a true LRU Cache because everything that remains in the oldest bag (after moves are made for the touched items) is removed at the same time. Therefore, no item remains in the cache that is older than the removed items.
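
A greatly simplified sketch of the AgeBag idea (my own illustrative code, not the article's attached source, and without the locking): a touch just stamps the node with the current bag index, and cleanup only ever walks the oldest bag, re-filing touched nodes and dropping the rest:

```csharp
using System.Collections.Generic;

class BaggedCache<TKey, TValue>
{
    class Node { public TKey Key; public TValue Value; public int BagIndex; }

    private readonly List<Node>[] _bags;
    private readonly Dictionary<TKey, Node> _index = new Dictionary<TKey, Node>();
    private int _current, _oldest;

    public BaggedCache( int bagCount )
    {
        _bags = new List<Node>[bagCount];
        for( int i = 0; i < bagCount; i++ ) _bags[i] = new List<Node>();
    }

    public void Add( TKey key, TValue value )
    {
        Node node = new Node();
        node.Key = key; node.Value = value; node.BagIndex = _current;
        _index[key] = node;
        _bags[_current % _bags.Length].Add( node );
    }

    public bool TryGet( TKey key, out TValue value )
    {
        Node node;
        if( _index.TryGetValue( key, out node ) )
        {
            node.BagIndex = _current;   // lock-free "touch": just stamp the bag index
            value = node.Value;
            return true;
        }
        value = default( TValue );
        return false;
    }

    // close the current bag (the real code does this when the bag
    // fills up or its time span expires)
    public void CloseCurrentBag() { _current++; }

    // empty the oldest bag: touched nodes move to their stamped bag,
    // untouched nodes are removed -- no sorting required
    public void EmptyOldestBag()
    {
        List<Node> bag = _bags[_oldest % _bags.Length];
        foreach( Node node in bag )
        {
            if( node.BagIndex == _oldest )
                _index.Remove( node.Key );                          // truly stale
            else
                _bags[node.BagIndex % _bags.Length].Add( node );    // was touched; re-file
        }
        bag.Clear();
        _oldest++;
    }
}
```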

My implementation of LRUCache contains two distinct sections which are coded as subclasses. The LifespanMgr class tracks the item usage and determines which items to remove when the cache gets full/old. The Index class provides Dictionary key/value access to all objects stored in the cache. The Index has delegates to calculate the key of any item added to the cache, and to load an item by key/value from an external source.

Instead of storing item nodes in the Index, the Index holds WeakReference objects that point at the item nodes. By doing this, the Index does not prevent items from being garbage collected. This also allows old items to be removed from the AgeBag without having to remove the items from the index. Once a Node is removed from the LifespanMgr, it becomes eligible for garbage collection. The Node is not removed from the Indexes immediately. If an Index retrieves the node prior to garbage collection, it is reinserted into the current AgeBag’s node list. If it has already been garbage collected, a new object gets loaded. If the Index size exceeds twice the cache’s capacity, the Index is cleared and rebuilt.
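
The WeakReference trick can be sketched like this (illustrative class of my own, not the article's actual Index code): the index maps keys to WeakReference objects rather than to the nodes themselves, so once the LifespanMgr drops a node it becomes collectible without any index maintenance, and a dead reference simply triggers a reload:

```csharp
using System;
using System.Collections.Generic;

class WeakIndex<TKey, TValue> where TValue : class
{
    // loads a missing item by key from an external source
    public delegate TValue Loader( TKey key );

    private readonly Dictionary<TKey, WeakReference> _index = new Dictionary<TKey, WeakReference>();
    private readonly Loader _loader;

    public WeakIndex( Loader loader ) { _loader = loader; }

    public TValue this[TKey key]
    {
        get
        {
            WeakReference wr;
            if( _index.TryGetValue( key, out wr ) )
            {
                // Target is null once the node has been garbage collected
                TValue cached = wr.Target as TValue;
                if( cached != null )
                    return cached;   // still alive: the real code re-Touch()es it here
            }
            TValue loaded = _loader( key );   // collected or never cached: reload
            _index[key] = new WeakReference( loaded );
            return loaded;
        }
    }
}
```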

Locking

Before I get into the code, I should provide a brief explanation of all the locking mechanisms used by the LRUCache:

The first and simplest is the Interlocked class, which performs increment and decrement operations atomically, so a context switch cannot corrupt them mid-update. This is all that is required to make increment and decrement operations thread safe, and it is much faster than other locking methods.
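
A quick demonstration of why this matters: with Interlocked.Increment, ten threads each incrementing a shared counter 100,000 times always end at exactly 1,000,000, whereas a plain `_counter++` would lose updates:

```csharp
using System.Threading;

class InterlockedDemo
{
    static int _counter;

    static void Main()
    {
        Thread[] threads = new Thread[10];
        for( int t = 0; t < threads.Length; t++ )
        {
            threads[t] = new Thread( delegate()
            {
                for( int i = 0; i < 100000; i++ )
                    Interlocked.Increment( ref _counter );   // atomic; no lock needed
            } );
            threads[t].Start();
        }
        foreach( Thread t in threads ) t.Join();
        System.Console.WriteLine( _counter );   // always 1000000
    }
}
```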

The next locking mechanism is the Monitor class. This is the class invoked whenever you use the C# lock keyword. One method of note is Monitor.TryEnter(obj), which immediately returns a bool indicating whether the lock was acquired. Unlike the standard Enter() method, after the first thread gets the lock, all other threads skip the operation instead of waiting for the first thread to complete. This is good for cases where speed is more important than the action being locked, and it is ideal for cases where the locked code only needs to be performed by one thread.

if( Monitor.TryEnter( obj ) )
    try
    {
        // do something if lock is not busy
    }
    finally
    {
        Monitor.Exit( obj );
    }

The last locking mechanism used is ReaderWriterLock. At any given time, it allows either concurrent read access for multiple threads, or write access for a single thread. A ReaderWriterLock provides better throughput than Monitor because of its concurrent read access. Because using ReaderWriter locks can get a little tricky, I use a wrapper class that simplifies locking a provided delegate.
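
A wrapper in that spirit might look like this (my own illustration of the pattern, not the article's actual wrapper class): the Acquire/Release pairing is hidden behind methods that take a delegate, so callers cannot forget to release the lock:

```csharp
using System.Threading;

class RWLock
{
    private readonly ReaderWriterLock _lock = new ReaderWriterLock();

    public delegate T Getter<T>();
    public delegate void Setter();

    // many threads may run getters concurrently
    public T Read<T>( Getter<T> getter )
    {
        _lock.AcquireReaderLock( Timeout.Infinite );
        try { return getter(); }
        finally { _lock.ReleaseReaderLock(); }
    }

    // only one thread at a time may run a setter
    public void Write( Setter setter )
    {
        _lock.AcquireWriterLock( Timeout.Infinite );
        try { setter(); }
        finally { _lock.ReleaseWriterLock(); }
    }
}
```

(On .NET 3.5 and later, ReaderWriterLockSlim offers the same idea with less overhead.)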

Code Review

Because the LifespanMgr is the most interesting part of the code, I will spend most of the rest of the article discussing it; the code is heavily commented, though, so you shouldn't have any trouble following the rest on your own.

public void Touch()
{
    if( _value != null && ageBag != _mgr._currentBag )
    {
        if( ageBag == null )
            lock( _mgr )
                if( ageBag == null )
                {
                    // if node.AgeBag==null then the object is not
                    // currently managed by LifespanMgr so add it
                    next = _mgr._currentBag.first;
                    _mgr._currentBag.first = this;
                    Interlocked.Increment( ref _mgr._owner._curCount );
                }
        ageBag = _mgr._currentBag;
        Interlocked.Increment( ref _mgr._currentSize );
    }
    _mgr.CheckValid();
}

Each time an indexed item is added or referenced, LifespanMgr.Node.Touch() is called, which (re)inserts the node if needed and points the node's ageBag field at the current AgeBag. The Touch method also calls LifespanMgr.CheckValid().

public void CheckValid()
{
    DateTime now = DateTime.Now;
    // if lock is currently held then skip and let next Touch perform cleanup.
    if( (_currentSize > _bagItemLimit || now > _nextValidCheck) 
             && Monitor.TryEnter( this ) )
        try
        {
            if( (_currentSize > _bagItemLimit || now > _nextValidCheck) )
            {
                // if cache is no longer valid throw contents
                // away and start over, else cleanup old items
                if( _current > 1000000 || (_owner._isValid != null 
                                          && !_owner._isValid()) )
                    _owner.Clear();
                else
                    CleanUp( now );
            }
        }
        finally
        {
            Monitor.Exit( this );
        }
}

CheckValid only performs an action if the AgeBag is full or the designated time span has expired. When either condition is met, an optional IsValid() delegate is called to check the health of the overall cache. If the cache is invalid or out of date, all items in the cache are removed, and items will be reloaded the next time an index accesses them. If IsValid returns true, the LifespanMgr.CleanUp() method is called.

public void CleanUp( DateTime now )
{
    if( _current != _oldest )
        lock( this )
        {
            //calculate how many items should be removed
            DateTime maxAge = now.Subtract( _maxAge );
            DateTime minAge = now.Subtract( _minAge );
            int itemsToRemove = _owner._curCount - _owner._capacity;
            AgeBag bag = _bags[_oldest % _size];
            while( _current != _oldest && (_current-_oldest>_size - 5 
                    || bag.startTime < maxAge || (itemsToRemove > 0 
                    && bag.stopTime > minAge)) )
            {
                // cache is still too big / old so remove oldest bag
                Node node = bag.first;
                bag.first = null;
                while( node != null )
                {
                    Node next = node.next;
                    node.next = null;
                    if( node.Value != null && node.ageBag != null )
                        if( node.ageBag == bag )
                        {
                            // item has not been touched since bag was
                            // closed, so remove it from LifespanMgr
                            ++itemsToRemove;
                            node.ageBag = null;
                            Interlocked.Decrement( ref _owner._curCount );
                        }
                        else
                        {
                            // item has been touched and should
                            // be moved to correct age bag now
                            node.next = node.ageBag.first;
                            node.ageBag.first = node;
                        }
                    node = next;
                }
                // increment oldest bag
                bag = _bags[(++_oldest) % _size];
            }
            OpenCurrentBag( now, ++_current );
            CheckIndexValid();
        }
}

Cleanup does the grunt work of moving touched items out of the last bag and deleting the rest. It also calls CheckIndexValid() to see if indexes need to be recreated, and OpenCurrentBag() which sets up the next current AgeBag.

Usage

The UserCache.cs file contains an example of how the cache can be used:

/// <summary>retrieve items by userid</summary>
public UserData FindByUserID( int userid )
{
    return _findByUserID[userid];
}

/// <summary>retrieve items by username</summary>
public UserData FindByUserName( string username )
{
    return _findByUserName[username];
}

/// <summary>constructor creates cache and multiple indexes</summary>
private UserCache() : base( 10000, TimeSpan.FromMinutes( 1 ), 
                                   TimeSpan.FromHours( 1 ), null )
{
    _isValid = IsDataValid;
    _findByUserID = AddIndex<int>( "UserID", delegate( UserData user ) 
                                 { return user.UserID; }, LoadFromUserID );
    _findByUserName = AddIndex<string>( "UserName", delegate( UserData user ) 
                                      { return user.UserName; }, LoadFromUserName );
    IsDataValid();
}

Summary

This implementation of LRUCache attempts to provide fast, reliable access to recently used data in a multi-threaded environment.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author

brian_agnes
Software Developer (Senior) NewsGator.com
United States United States
Brian Agnes has been programming professionally for 15 years, using a variety of languages. Brian started using C# in 2002 and is an MCAD charter member.

Brian is currently living in Denver, Colorado and working as a Senior Developer for NewsGator.com (a leader in RSS aggregation).

Brian has been a regular presenter at local INETA .NET user groups.

Comments and Discussions

 
Nice article, Algorithm name - JasonShort, 18-Feb-09 8:45
I enjoyed your article and implementation. Good job.
 
The algorithm you used is called LRU-K. It is a least recently used cache by K time window. There are a number of references to it on Wikipedia.
 
There is another variant called 2Q where rather than expiring by time windows, you move things across 2 lists when they are hit a certain number of times. Then you expire anything at the bottom of List1 when out of room for cache (greatly simplified explanation).
 
I just wanted to post for anyone interested in your code that there are different algorithms for the choice of when to expire that have runtime impacts for cache hit rates. Of course the application should decide how and when you want to cache.
 
Jason S Short, Ph.D.
VistaDB Software, Inc.



Article Copyright 2008 by brian_agnes