
SpotTheDefect[0].Answer[2]

Over the past few weeks I’ve been covering various ways to fix a race condition bug. Answer 0 was a very basic solution that used a lock to serialize access, and with Answer 1 we significantly improved performance without sacrificing safety by using the “Copy, Check, Continue” pattern.
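As a reminder, the shape of that pattern looks roughly like this (a minimal sketch rather than the exact code from the Answer 1 post; Foo and Bar are the same stand-ins used in the solution below):

using System;

class CopyCheckContinueSketch
{
    private static Foo _foo = new Foo();

    private static void Reader()
    {
        // Copy the field into a local so another thread nulling out _foo
        // cannot affect us between the check and the call
        Foo foo = _foo;

        // Check the local copy...
        if (foo != null)
        {
            // ...and continue using the local copy, never the field
            foo.Bar();
        }
    }

    private static void Clear()
    {
        _foo = null;
    }

    private class Foo
    {
        public void Bar()
        {
            Console.WriteLine("Hello, World!");
        }
    }
}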

However, you may have noticed from the code that I had a variable called _disposed that I was using to indicate that we had nulled out _foo – but what if _foo were IDisposable, and we had to make sure that we don’t call any method on it after we have disposed it?

But where is the fun in that?

The most obvious thing that we could do is to simply use Answer 0 – by putting locks around the usage of _foo we can be certain that _disposed accurately reflects whether _foo has been disposed or not, but again this is a trivial solution that comes with a performance cost.
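To make that trade-off concrete, falling back to Answer 0 would look something like this (a minimal sketch, reusing the Thread1/Thread2/Foo names from the solution below): every touch of _foo and _disposed happens under the same lock, so they can never disagree, but every reader pays for the lock even when nothing is being disposed.

using System;

class AnswerZeroSketch
{
    private static readonly object _lock = new object();
    private static Foo _foo = new Foo();
    private static bool _disposed = false;

    private static void Thread1()
    {
        lock (_lock)
        {
            // _disposed and _foo are only ever touched under _lock,
            // so they can never disagree
            if (!_disposed)
            {
                _foo.Bar();
            }
        }
    }

    private static void Thread2()
    {
        lock (_lock)
        {
            _disposed = true;
            _foo.Dispose();
            _foo = null;
        }
    }

    private class Foo : IDisposable
    {
        public void Bar()
        {
            Console.WriteLine("Hello, World!");
        }

        public void Dispose()
        {
            Console.WriteLine("Foo has been disposed");
        }
    }
}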

An alternative is to implement Answer 1 and then catch any ObjectDisposedExceptions that are thrown. This allows us to be “lazy” with our thread safety, and so claw back some performance. The problem with this solution is that we can’t be sure that using a disposed _foo will actually throw an ObjectDisposedException: it is entirely possible that disposing _foo nulls out one of its internal fields, leaving us with another race condition very similar to the one we were originally trying to solve; or it may simply throw the wrong exception type (for example, using a closed\disposed SqlConnection results in an InvalidOperationException).
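For completeness, the “lazy” version would look something like this (a minimal sketch; the Foo here is deliberately written to throw ObjectDisposedException, which – as just discussed – is exactly the part you cannot rely on in general):

using System;

class CatchDisposedSketch
{
    private static Foo _foo = new Foo();

    private static void Thread1()
    {
        // Copy, Check, Continue as in Answer 1...
        Foo foo = _foo;
        if (foo != null)
        {
            try
            {
                foo.Bar();
            }
            catch (ObjectDisposedException)
            {
                // ...and swallow the exception if the other thread
                // disposed foo between our check and the call
            }
        }
    }

    private static void Thread2()
    {
        Foo foo = _foo;
        _foo = null;
        foo.Dispose();
    }

    private class Foo : IDisposable
    {
        private bool _isDisposed;

        public void Bar()
        {
            if (_isDisposed)
            {
                throw new ObjectDisposedException("Foo");
            }

            Console.WriteLine("Hello, World!");
        }

        public void Dispose()
        {
            _isDisposed = true;
            Console.WriteLine("Foo has been disposed");
        }
    }
}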

Teamwork is key

What makes Answer 0 safer than Answer 1 is that both threads are aware of what the other is doing, and they coordinate their actions. If our only concern were serializing access to _foo under a lock, then we could switch to a ReaderWriterLockSlim, and have Thread1 obtain a read lock and Thread2 obtain the write lock. However, if only a few threads are trying to access _foo at once, you’ll probably find that an uncontended lock provides better performance than a ReaderWriterLockSlim.
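For reference, the ReaderWriterLockSlim version would look something like this (a minimal sketch, again reusing the names from the solution below):

using System;
using System.Threading;

class ReaderWriterSketch
{
    private static readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private static Foo _foo = new Foo();

    private static void Thread1()
    {
        _lock.EnterReadLock();
        try
        {
            // Many readers can hold the read lock at the same time,
            // but none can hold it while the writer holds the write lock
            if (_foo != null)
            {
                _foo.Bar();
            }
        }
        finally
        {
            _lock.ExitReadLock();
        }
    }

    private static void Thread2()
    {
        _lock.EnterWriteLock();
        try
        {
            // The write lock is exclusive: it waits for all readers to
            // exit before we dispose and null out _foo
            _foo.Dispose();
            _foo = null;
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }

    private class Foo : IDisposable
    {
        public void Bar()
        {
            Console.WriteLine("Hello, World!");
        }

        public void Dispose()
        {
            Console.WriteLine("Foo has been disposed");
        }
    }
}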

An alternative to this is to implement mechanics similar to a ReaderWriterLockSlim, but using faster primitives and no locks. Firstly, there are two things our ReaderWriterLockSlim alternative needs to keep track of: the number of readers currently in the read lock, and whether the write lock is in use – this suggests an integer for the readers and a bool for the writer. Secondly, if the write lock is held then no readers may enter the lock, but the writer cannot do its work until all readers have finished reading. Putting this together, we already have our write lock bool (_disposed), and we can introduce an int for the readers (_activeReaders). Readers must increment _activeReaders when they start and decrement it when they end – but must not do any work if _disposed is true. Similarly, the writer sets _disposed to true when it starts and then waits for the readers to finish (i.e. for _activeReaders to become 0).

There are a couple things that you should note about this solution:

  • We expect there to only ever be one thread “writing” at a time (otherwise you need to synchronize access to _disposed – or switch it for an integer, increment it and then only dispose if it equals 1; see the sketch after this list)
  • Once we have “written”, the lock can never be taken again
  • I’ve used Interlocked.Increment\Decrement to manipulate _activeReaders (this is because incrementing\decrementing is not guaranteed to be atomic)
  • I’ve had to disable warning CS0420: “a reference to a volatile field will not be treated as volatile”, but it is safe to do so because the Interlocked APIs are “volatile aware”.
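As a rough sketch of the integer variant mentioned in the first bullet (my own illustration of how you might wire it up, not code from the original series), the thread that increments the counter from 0 to 1 is the only one that actually disposes:

using System.Threading;

class MultiWriterSketch
{
    // 0 = live, anything greater = disposing/disposed
    private static volatile int _disposeCount = 0;

    private static void Writer()
    {
        // Warning CS0420 is safe to ignore for the same reason as below:
        // the Interlocked APIs are volatile aware
        #pragma warning disable 0420
        int count = Interlocked.Increment(ref _disposeCount);
        #pragma warning restore 0420

        // Interlocked.Increment returns the incremented value, so exactly
        // one caller sees 1 and wins the right to dispose
        if (count == 1)
        {
            // ...wait for _activeReaders to reach 0 and then dispose,
            // exactly as Thread2 does in the full solution below
        }
    }
}

Readers would then check _disposeCount == 0 wherever the code below checks !_disposed.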

So here is the revised solution:

using System;
using System.Threading;

namespace SpotTheDefect1
{
    class Program
    {
        private static Foo _foo = new Foo();
        private static volatile bool _disposed = false;
        private static volatile int _activeReaders = 0;

        static void Main(string[] args)
        {
            Thread thread1 = new Thread(Thread1);
            Thread thread2 = new Thread(Thread2);

            thread1.Start();
            thread2.Start();

            thread1.Join();
            thread2.Join();
        }

        private static void Thread1()
        {
            // Check first to avoid unnecessary work
            if (!_disposed)
            {
                // Warning CS0420: a reference to a volatile field will not be treated as volatile
                // We can safely ignore this because the Interlocked APIs are volatile aware
                #pragma warning disable 0420
                Interlocked.Increment(ref _activeReaders);
                #pragma warning restore 0420

                try
                {
                    // Check again in case we were disposed after doing the increment
                    if (!_disposed)
                    {
                        _foo.Bar();
                    }
                }
                finally
                {
                    // Warning CS0420: a reference to a volatile field will not be treated as volatile
                    #pragma warning disable 0420
                    Interlocked.Decrement(ref _activeReaders);
                    #pragma warning restore 0420
                }
            }
        }

        private static void Thread2()
        {
            Console.WriteLine("Disposing");

            // Indicate that we are disposing and then wait for readers to complete
            _disposed = true;
            SpinWait.SpinUntil(() => _activeReaders == 0);

            _foo.Dispose();
            _foo = null;
        }

        private class Foo : IDisposable
        {
            public void Bar()
            {
                Console.WriteLine("Hello, World!");
            }

            public void Dispose()
            {
                Console.WriteLine("Foo has been disposed");
            }
        }
    }
}