Is the GetOrCreate method thread-safe?

ASP.NET Core Memory Cache investigated, including an interview with the creator of LazyCache.

📸: Soumil Kumar / Pexels

In this blog post I will be researching a common question raised on forums like StackOverflow: "Is the GetOrCreate method thread safe?".

This blog post also contains an interview with the creator of LazyCache.

Introduction

In my current work project I'm building an API using ASP.NET Core 2.2.

Adding MemoryCache to an ASP.NET Core app is easy: just add the Microsoft.Extensions.Caching.Memory NuGet package. According to the documentation, it's recommended over System.Runtime.Caching and works natively with ASP.NET Core dependency injection.
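
Wiring it up is a one-liner in Startup. A minimal sketch of what that looks like (the ForecastService class is just an illustration of a consumer, not part of the framework):

    using Microsoft.Extensions.Caching.Memory;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Registers IMemoryCache as a singleton with the DI container.
            services.AddMemoryCache();
        }
    }

    // Any controller or service can then take IMemoryCache as a constructor dependency.
    public class ForecastService
    {
        private readonly IMemoryCache _cache;

        public ForecastService(IMemoryCache cache) => _cache = cache;
    }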

I must admit that I don't have a lot of experience with ASP.NET Core on customer projects, so the first thing that strikes me is that the MemoryCache method GetOrCreate/GetOrCreateAsync is new. Intuitively, my first guess is that it replaces the AddOrGetExisting method from MemoryCache in the .NET Framework (full framework).

Also, the AddOrGetExisting method doesn't return the newly added value when the key doesn't already exist in the cache (it returns null). GetOrCreate does return the newly cached value, which seems more useful and probably explains the name change.

The AddOrGetExisting method from the .NET Framework is thread-safe (according to the documentation).
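
The return-value difference is easy to see with the full-framework cache. A quick sketch (the key, values and expiration below are arbitrary):

    using System;
    using System.Runtime.Caching;

    class AddOrGetExistingDemo
    {
        static void Main()
        {
            var cache = MemoryCache.Default;
            var expiration = DateTimeOffset.Now.AddMinutes(5);

            // Key is absent: 42 is inserted and the method returns null.
            var first = cache.AddOrGetExisting("test-key", 42, expiration);

            // Key now exists: the cached 42 is returned and 99 is never stored.
            var second = cache.AddOrGetExisting("test-key", 99, expiration);

            Console.WriteLine($"{first ?? "null"} {second}"); // prints: null 42
        }
    }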

Premise:

My expectation is that the MemoryCache in ASP.NET Core behaves the same way as MemoryCache in the .NET Framework.

Unexpected behaviour

After creating some unit tests for the cache, I quickly discover some inconsistencies: the returned values are not equal when executing in parallel.

Take a look at the following console app, and try to picture the output:

    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Caching.Memory;

    class Program
    {
        static void Main(string[] args)
        {
            var cache = new MemoryCache(new MemoryCacheOptions());

            int counter = 0;

            Parallel.ForEach(Enumerable.Range(1, 10), i =>
            {
                // On a cache miss the factory delegate runs and its return value is cached.
                var item = cache.GetOrCreate("test-key", cacheEntry =>
                {
                    cacheEntry.SlidingExpiration = TimeSpan.FromSeconds(10);
                    return Interlocked.Increment(ref counter);
                });

                Console.Write($"{item} ");
            });
        }
    }

Output:

2 3 5 3 3 3 3 1 4 3

The output will vary, but it will NOT be a line of equal numbers.

I did the same thing with the .NET Framework MemoryCache, and there the output is equal (apart from the initial null value mentioned in the introduction), so the behaviour is different.
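
For the record, the .NET Framework comparison looked roughly like this (System.Runtime.Caching; the exact numbers in the output will vary):

    using System;
    using System.Linq;
    using System.Runtime.Caching;
    using System.Threading;
    using System.Threading.Tasks;

    class FrameworkComparison
    {
        static void Main(string[] args)
        {
            var cache = MemoryCache.Default;

            int counter = 0;

            Parallel.ForEach(Enumerable.Range(1, 10), i =>
            {
                int candidate = Interlocked.Increment(ref counter);

                // Returns the already-cached value, or null for the single call
                // that actually inserted the entry (the "initial value" above).
                var existing = cache.AddOrGetExisting("test-key", candidate,
                    new CacheItemPolicy { SlidingExpiration = TimeSpan.FromSeconds(10) });

                Console.Write($"{existing ?? "(miss)"} ");
            });
        }
    }

Note that the value expression still runs on every thread here, because AddOrGetExisting takes a value rather than a delegate, but only one insert wins, so every caller observes the same cached number.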

What is happening here?

Scott Hanselman has a nice post, Eyes wide open - Correct Caching is always hard, which concludes that the GetOrCreate method gives no guarantee that the factory method (run on a cache miss) won't be called several times. There is no locking!

Should I accept this?

Another well-known industry figure, James Newton-King, responds to the article with the following comment:

Simpler caching, i.e. not getting the data inside a lock, is fine for a lot of people, but if your website is high traffic and the data is expensive to get, then every person who visits while it is uncached (either because your app is starting up or the cache expired) will get their own copy. 1000 visitors/sec * 5 secs to get the data = 5000 requests to get the data. On the other hand, with a bit of locking you reduce that to 1 request.

It's imperative that my application handles a lot of requests per second. I don't have the luxury of letting my backend suffer unnecessarily.

So which implications does this have for my application?

Is the GetOrCreate method thread-safe?

Further investigation and Google searches reveal a commonly raised question:

Is the GetOrCreate method thread-safe?

Wikipedia defines thread safety as:

Thread safety is a computer programming concept applicable to multi-threaded code. Thread-safe code only manipulates shared data structures in a manner that ensures that all threads behave properly and fulfill their design specifications without unintended interaction

  • Thread safe: Implementation is guaranteed to be free of race conditions when accessed by multiple threads simultaneously.
  • Conditionally safe: Different threads can access different objects simultaneously, and access to shared data is protected from race conditions.
  • Not thread safe: Data structures should not be accessed simultaneously by different threads.
«I'm not quite sure how to interpret this definition.»

I'm not quite sure how to interpret this definition. The ASP.NET Core MemoryCache can be accessed simultaneously by different threads, and the threads do not interfere with each other directly. Still, the GetOrCreate method clearly gives no guarantee of being free of race conditions.

Is this a bug?

The following issue has been reported in the aspnet/Caching GitHub repo: Is GetOrCreate thread-safe? (The issue has been closed without a good explanation.) It discusses whether the cache is thread-safe or subject to a race condition, and whether this is by design or not.

Reading the ASP.NET Core documentation more thoroughly, I actually find some words about this at the very bottom, under Additional notes:

When using a callback to repopulate a cache item:
* Multiple requests can find the cached key value empty because the callback hasn't completed.
* This can result in several threads repopulating the cached item.

So, it actually seems that this is by-design.

Why is this? Why has this changed from the .NET Framework MemoryCache? Is the cost of introducing locks too high for most scenarios? The documentation doesn't say. I've asked around, but no one seems to know about it. Why isn't it clearly communicated?

It does confirm, though, that my initial premise was wrong:

The behaviour of the MemoryCache is different in ASP.NET Core than in the .NET Framework.

A solution

I still need my cache to guarantee that the factory delegates (run on a cache miss) are only executed once, because I expect high traffic volumes.
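
One option is to guard the built-in IMemoryCache with a hand-rolled double-checked lock, roughly the "bit of locking" James Newton-King alludes to. A minimal sketch, where the SemaphoreSlim gate and the LoadValueAsync helper are illustrative assumptions rather than framework APIs:

    using System;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Caching.Memory;

    public class LockedCacheReader
    {
        private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);
        private readonly IMemoryCache _cache;

        public LockedCacheReader(IMemoryCache cache) => _cache = cache;

        public async Task<int> GetValueAsync()
        {
            // Fast path: no locking when the value is already cached.
            if (_cache.TryGetValue("test-key", out int value))
                return value;

            await Gate.WaitAsync();
            try
            {
                // Re-check inside the lock: another thread may have primed
                // the cache while we were waiting.
                if (!_cache.TryGetValue("test-key", out value))
                {
                    value = await LoadValueAsync(); // hypothetical expensive backend call
                    _cache.Set("test-key", value, new MemoryCacheEntryOptions
                    {
                        SlidingExpiration = TimeSpan.FromSeconds(10)
                    });
                }
            }
            finally
            {
                Gate.Release();
            }

            return value;
        }

        private static Task<int> LoadValueAsync() => Task.FromResult(42);
    }

It works, but it's boilerplate that has to be repeated (and got right) for every cached item.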

There is a caching service called LazyCache which is thread-safe and guarantees that the factory delegates (run on a cache miss) only execute once. It's not part of the ASP.NET Core framework, but it's available on NuGet.

Re-writing my previous example to use LazyCache:

    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;
    using LazyCache;

    class Program
    {
        static void Main(string[] args)
        {
            IAppCache cache = new CachingService();

            int counter = 0;

            Parallel.ForEach(Enumerable.Range(1, 10), i =>
            {
                // LazyCache ensures the factory delegate below only executes once,
                // even when multiple threads miss the cache simultaneously.
                var item = cache.GetOrAdd("test-key", cacheEntry =>
                {
                    cacheEntry.SlidingExpiration = TimeSpan.FromSeconds(10);
                    return Interlocked.Increment(ref counter);
                });

                Console.Write($"{item} ");
            });
        }
    }

Output:

3 3 3 3 3 3 3 3 3 3

All numbers are equal, every time I run the application. This is how I expected it to work.

I highly recommend using this package when you need a cache without race conditions.
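
In an ASP.NET Core app, the companion LazyCache.AspNetCore package can register the service for you; something along these lines, assuming the AddLazyCache() extension of the version you install (verify against its docs):

    using LazyCache;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // Registers IAppCache (backed by the same MemoryCache infrastructure) with DI,
            // so controllers and services can inject IAppCache instead of IMemoryCache.
            services.AddLazyCache();
        }
    }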

Interview with Alastair Crabtree

To get a better understanding of the questions raised in this blog post, I forwarded them to a man who knows the inner workings of MemoryCache: the creator of LazyCache, Alastair Crabtree.

1. What is LazyCache, and what does it offer ASP.NET Core users?

Alastair: LazyCache is a tiny library that wraps and extends the Microsoft in-memory caching library. It's not particularly long or complicated to do caching just using the Microsoft library, but after writing the cache aside pattern with locking into a few apps I decided to abstract it out into LazyCache. It helps developers add performant in-memory caching, without race conditions, in as few lines of code as possible. Most of the time I use caching I am in a high concurrency environment and so the extra locking in lazy cache is required.

2. In ASP.NET Core Memory Cache, is the GetOrCreate method thread-safe?

Alastair: Yes it is, but this is an area that always seems to trip up developers using it for the first time when they get unexpected results like yours. Multiple threads can read and write from the MemoryCache at the same time and they will not get obscure exceptions because of broken internal state, hence the GetOrCreate method rather than just Get and Create. This is different to, say, a Dictionary<,> or List<>, where concurrency will cause some nasty exceptions because it is not thread safe, and is the reason why we need the concurrent collections.

So it is "safe" for multi threaded apps like web sites, but that does not mean it guarantees to execute the delegate that primes the cache only once. The common race condition that is often not expected is when two threads call GetOrCreate at the same time: they can both miss the cache and so both trigger the delegate to prime the cache at the same time. If the delegate takes a while to execute (the whole reason you probably want to cache it) then lots of requests could miss the cache. To fix this LazyCache uses more aggressive locking to ensure the delegate only fires once.

«The original predates lambdas so that's why - it's old!»

3. Why do you think the GetOrCreate method differs from AddOrGetExisting in the .NET Framework?

Alastair: For dotnet core the caching layer got a full rewrite and so the APIs changed as a result. The new versions are much better because they support passing lambda functions as delegates to prime the cache on a cache miss, rather than having to pass the object you want to cache itself. The original predates lambdas so that's why - it's old! LazyCache always used delegates so I guess you can say the new APIs are more like LazyCache now. I think in the end they also shipped a drop-in replacement that supports the old APIs on core to help people porting to core, but you should only use that when doing lift and shift to core.

4. How well has this change been communicated? Have you been in contact with the ASP.NET Core team about this (asking them about race conditions in GetOrCreate)?

Alastair: No I've not, but it's a pretty well documented challenge if you go searching and there is lots of discussion on StackOverflow. Most of the code in LazyCache is cobbled together from snippets on the web, including the really useful AsyncLazy by Stephen Cleary. The common race condition does seem to trip lots of people up so maybe the core team could add more to the docs to help people.

5. The LazyCache.AspNetCore nuget has been in beta for 9 months now. How long until a release version?

Alastair: Yeah sorry about that! It's not really had any bugs filed so I really just need to tick the box and make it a full release. As I'm not using it at work currently I haven't given it much time.

Conclusion

  • The GetOrCreate method is thread-safe (using Alastair's definition of thread safety).
  • The GetOrCreate method is subject to race conditions which might be unwanted in a high concurrency environment.
  • For this to become a problem, though, your application needs a high concurrent load combined with either costly backend requests or a backend that can't handle many simultaneous requests.
  • The race condition behaviour is documented by the ASP.NET Core team.
  • I will still argue that it's not clearly communicated, leaving developers puzzled by the behaviour.

References

The following are good reads on the subject:

  1. https://github.com/aspnet/Caching/issues/359
  2. https://github.com/aspnet/Extensions/issues/708
  3. https://tpodolak.com/blog/2017/12/13/asp-net-core-memorycache-getorcreate-calls-factory-method-multiple-times/
  4. https://stackoverflow.com/questions/20149796/memorycache-thread-safety-is-locking-necessary
  5. http://reedcopsey.com/2011/01/16/concurrentdictionarytkeytvalue-used-with-lazyt/
  6. https://blog.stephencleary.com/2012/08/asynchronous-lazy-initialization.html

Credits

A huge thanks to Alastair for responding to my questions. Please make sure to check out LazyCache: