Jan 1 2011

Dropping the locks

When you're working on high-performance products you'll eventually find yourself creating some form of a Near-Cache. If things really need to go fast, you may start thinking about going lock-free. This may be perfectly reasonable, but let's explore it a little to see where it makes sense.

For the purposes of this article, a Near-Cache is defined as any collection of state from durably persistent mediums that is maintained in-process and services requests concurrently. This may be a direct projection of state from a single medium or a custom View of state from many different mediums.

Typically, the implementation of Near-Cache patterns falls into a couple of broad categories:

Write-Once/Read-Many

  • As the name suggests, this is a cache that is seldom written to and spends most of its time servicing read/lookup requests. A common example would be maintaining an in-memory View of static (or seldom updated) state in a database to offset the expense of calling back to the database for each individual lookup

Volatile Read-Write

  • This type of cache typically has data written to or altered as frequently as it's read. In many cases, caches of this type maintain state that is discovered through more dynamic processes, although they could also be backed by persistent or other semi-volatile mediums

The benefits and consequences of selecting either pattern vary depending on the needs of your application. Generally speaking, both approaches save costs compared to calling out to more expensive external resources. In C# there are several patterns that could be used to create a Thread-Safe Near-Cache. Exploring two samples at opposite ends of the spectrum will help reveal their relative merits. Along the way I'll identify some criteria that may be useful in determining which is the most appropriate for a given scenario.

The Full-lock Cache

This pattern is suitable for either Write-Once/Read-Many or Volatile Read-Write caches

public static class FullLockCache
{
    private static IDictionary<int, string> _cache;
    private static object _fullLock;

    static FullLockCache()
    {
        _cache = new Dictionary<int, string>();
        _fullLock = new object();
    }

    public static string GetData(int key)
    {
        lock (_fullLock)
        {
            string returnValue;
            _cache.TryGetValue(key, out returnValue);
            return returnValue;
        }
    }

    public static void AddData(int key, string data)
    {
        lock (_fullLock)
        {
            if (!_cache.ContainsKey(key))
                _cache.Add(key, data);
        }
    }
}

Benefits
  • The locking and Thread-Safety semantics are easily understood
  • Unless you have high numbers of threads accessing the cache concurrently or have extremely high performance demands, the relative performance of this pattern can be quite reasonable
  • This can be relatively easy to test and could be made easier with subtle extensions
Consequences
  • The main consequence of this pattern is the full lock placed on each call. This can manifest as higher contention rates in applications with large numbers of threads accessing the cache, or with fewer threads under very high performance demands
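If read contention is the pain point but the locking semantics are otherwise a good fit, one possible middle ground between these two extremes is a reader/writer lock, so that concurrent readers no longer block one another. The sketch below is my own variation (the ReadWriteLockCache name is not part of either pattern above), using ReaderWriterLockSlim:

```csharp
using System.Collections.Generic;
using System.Threading;

public static class ReadWriteLockCache
{
    private static readonly IDictionary<int, string> _cache = new Dictionary<int, string>();
    private static readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public static string GetData(int key)
    {
        // Any number of readers may hold the read lock at once
        _lock.EnterReadLock();
        try
        {
            string returnValue;
            _cache.TryGetValue(key, out returnValue);
            return returnValue;
        }
        finally
        {
            _lock.ExitReadLock();
        }
    }

    public static void AddData(int key, string data)
    {
        // Writers take the lock exclusively, blocking readers and writers
        _lock.EnterWriteLock();
        try
        {
            if (!_cache.ContainsKey(key))
                _cache.Add(key, data);
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }
}
```

This only pays off when reads heavily outnumber writes; for write-heavy caches the extra bookkeeping of the reader/writer lock can cost more than the plain full lock.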

The Lock-Free Cache

This pattern is only suitable for a Write-Once/Read-Many cache

public static class NoLockDictionaryCache
{
    private static IDictionary<int, string> _cache;

    static NoLockDictionaryCache()
    {
        _cache = new Dictionary<int, string>();
    }

    public static string GetData(int key)
    {
        string returnValue;
        _cache.TryGetValue(key, out returnValue);
        return returnValue;
    }

    public static void LoadCacheFromSource()
    {
        IDictionary<int, string> tempCache = new Dictionary<int, string>();

        //TODO: Load Cache

        // Atomically publish the fully built dictionary; concurrent readers
        // see either the old cache or the new one, never a partial mix
        Interlocked.Exchange(ref _cache, tempCache);
    }
}

Benefits
  • No locking is required
  • Read performance is significantly improved, since lookups pay no synchronization cost
  • This can be relatively easy to test and could be made easier with subtle extensions
Consequences
  • Writes can be relatively slow compared to other patterns, since each update means reloading the entire cache, or cloning and then updating the existing cache
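To make the clone-then-update write mentioned above concrete, here is a minimal copy-on-write sketch (the CopyOnWriteCache name is mine, not part of the sample above): clone the current dictionary, mutate the clone, then publish it atomically.

```csharp
using System.Collections.Generic;
using System.Threading;

public static class CopyOnWriteCache
{
    private static IDictionary<int, string> _cache = new Dictionary<int, string>();

    public static string GetData(int key)
    {
        string returnValue;
        _cache.TryGetValue(key, out returnValue);
        return returnValue;
    }

    public static void AddData(int key, string data)
    {
        // Clone the current cache, update the clone, then swap it in;
        // readers see either the old or the new dictionary, never a
        // partially updated one. Note: with multiple concurrent writers
        // the clone-and-swap must itself be serialized (e.g. under a
        // lock), or competing writes can be lost.
        IDictionary<int, string> updated = new Dictionary<int, string>(_cache);
        updated[key] = data;
        Interlocked.Exchange(ref _cache, updated);
    }
}
```

The full-dictionary copy on every write is exactly why this variation only makes sense when writes are rare relative to reads.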
Variations
  • The Hashtable and ConcurrentDictionary types work very well in place of the Dictionary, with nearly identical read performance
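As a sketch of the ConcurrentDictionary variation (assuming .NET 4), the type drops in for the Dictionary with the same swap-on-reload shape, and as a bonus it also tolerates safe in-place writes without an explicit lock:

```csharp
using System.Collections.Concurrent;
using System.Threading;

public static class ConcurrentDictionaryCache
{
    private static ConcurrentDictionary<int, string> _cache =
        new ConcurrentDictionary<int, string>();

    public static string GetData(int key)
    {
        string returnValue;
        _cache.TryGetValue(key, out returnValue);
        return returnValue;
    }

    public static void LoadCacheFromSource()
    {
        var tempCache = new ConcurrentDictionary<int, string>();

        //TODO: Load Cache

        // Same atomic publish as the Dictionary version
        Interlocked.Exchange(ref _cache, tempCache);
    }

    // Unlike Dictionary, individual writes are thread-safe here,
    // so the pattern is no longer limited to Write-Once/Read-Many
    public static void AddData(int key, string data)
    {
        _cache.TryAdd(key, data);
    }
}
```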
