
Improving async performance for SqlDataReader

In addition to the Idle Connection Resiliency feature that was added in .NET 4.5.1, the other major “feature” was a massive boost in async performance for SqlDataReader. In our own internal testing we have seen anywhere from a 50% performance improvement (for default command behavior) to a 180% improvement (for sequential access). This brings async performance to within 20-30% of the speed of the sync methods – except that our performance tests were using the async methods the wrong way, and using them the right way can make async faster than sync.

Making the right calls

In our internal performance testing of the async methods, we use async for everything – opening the connection, executing the command, reading rows and getting the value of each column. In the official post I wrote for .NET 4.5 I said that this was not recommended, since calling ReadAsync in default command behavior buffers the entire row into memory; subsequent async calls to read column values are therefore a waste of CPU. Just by switching the calls from GetFieldValueAsync to GetFieldValue, I was able to get our performance test running faster with async calls than with sync.

This greatly simplifies the rules that I had previously written: always call NextResultAsync and ReadAsync, then call GetFieldValue for default command behavior or GetFieldValueAsync for sequential access.
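To make this concrete, here is a minimal sketch of the recommended pattern for default command behavior (the connection string, query and column are placeholders for illustration):

    using System.Data.SqlClient;
    using System.Threading.Tasks;

    public static async Task ReadNamesAsync(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            await connection.OpenAsync();
            using (var command = new SqlCommand("SELECT Name FROM Customers", connection))
            using (var reader = await command.ExecuteReaderAsync()) // default command behavior
            {
                while (await reader.ReadAsync())
                {
                    // ReadAsync already buffered the whole row, so the
                    // synchronous getter is the faster choice here.
                    string name = reader.GetFieldValue<string>(0);
                }
            }
        }
    }

For sequential access (CommandBehavior.SequentialAccess), the column read would instead be await reader.GetFieldValueAsync<string>(0).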

Checking ahead

If you have a look at the code for ReadAsync you can see that there is a lot of “infrastructure” that needs to be set up in order to do an async call, none of which is actually used if all of the data needed to read the row is already available. So, in .NET 4.5.1, we introduced an internal method called WillHaveEnoughData that checks whether we are guaranteed to have enough data in the current buffer to satisfy the next request (be it reading a header, a column or an entire row). In order to do this, we have to make a few assumptions – for instance, that a variable-sized column always contains the maximum amount of data its type permits.
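The general shape of the optimization is a synchronous fast path. The sketch below only illustrates the idea – it is not the real SqlDataReader code, and all of the members are hypothetical stand-ins:

    using System.Threading.Tasks;

    // Illustrative sketch only; every member here is a hypothetical
    // stand-in for the real (internal) SqlDataReader machinery.
    class BufferedReaderSketch
    {
        bool WillHaveEnoughData() { /* inspect the current packet buffer */ return true; }
        bool ReadRowFromBuffer() { /* parse the buffered row synchronously */ return true; }
        Task<bool> ReadRowUsingAsyncInfrastructure() { /* full async path */ return Task.FromResult(false); }

        public Task<bool> ReadAsync()
        {
            if (WillHaveEnoughData())
            {
                // Fast path: the whole row is already buffered, so return a
                // completed task and skip setting up the async infrastructure.
                return Task.FromResult(ReadRowFromBuffer());
            }

            // Slow path: set up the async infrastructure and await the network.
            return ReadRowUsingAsyncInfrastructure();
        }
    }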

Full speed ahead

If you read through the code of WillHaveEnoughData you can see the full set of assumptions, checks and optimizations that have been made. To summarize the code, the way to get the best performance out of the improvements made to SqlDataReader is to ensure the following:

  • Use SQL Server 2008 R2 or later – this introduced a feature called Null Bitmap Compression (NBC), which allows the server to indicate in the row header which of a row’s columns are NULL, instead of marking each null column in that column’s own header.
  • Avoid using NTEXT, TEXT, IMAGE, TVP, UDT, XML, [N]VARCHAR(MAX) and VARBINARY(MAX) – the maximum data size for these types is so large that it is very unusual (or even impossible) for a value to be guaranteed to fit within a single packet.
  • Keep the maximum size of variable-sized columns as small as possible – as mentioned above, we assume that a variable-sized column will contain the maximum amount of data permitted by the column (so a VARCHAR(50) is assumed to always hold 50 characters). This can make a huge difference to your application’s performance – reducing a column from VARCHAR(8000) to VARCHAR(250) or VARCHAR(50) can be the difference between always creating and disposing the async infrastructure and rarely creating it.
  • If you are moving large amounts of data, consider increasing the Packet Size specified in the connection string (NOTE: This will increase the amount of memory required for each connection).
  • For sequential access, make sure to read each column in its entirety before moving to the next column (or the next row), especially when reading large columns. Otherwise, the check for the amount of data needed will include whatever is left of the current column – and, for [N]VARCHAR\VARBINARY(MAX) types, if there is any leftover data at all we assume that it will not fit in a single packet. (A sketch of this pattern follows this list.)
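Here is a hedged sketch of that last point – draining a large column completely under sequential access before touching the next one (the query and column layout are made up for illustration):

    using System.Data;
    using System.Data.SqlClient;
    using System.Threading.Tasks;

    public static async Task ReadLargeColumnAsync(SqlConnection connection)
    {
        using (var command = new SqlCommand("SELECT Payload, Name FROM Files", connection))
        using (var reader = await command.ExecuteReaderAsync(CommandBehavior.SequentialAccess))
        {
            while (await reader.ReadAsync())
            {
                var buffer = new byte[8192];
                long offset = 0;
                long bytesRead;
                // Drain column 0 completely before reading column 1, so that
                // no leftover data is counted against the next buffer check.
                while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    offset += bytesRead;
                    // ... process buffer[0..bytesRead) here ...
                }
                string name = await reader.GetFieldValueAsync<string>(1);
            }
        }
    }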

Hopefully this gives you a good idea of the performance improvements made in .NET 4.5.1 and how to get the most out of them.


GetType() is a Scalability Issue

TL;DR: Avoid SafeHandles and Native\Managed transitions in scale-up applications (e.g. web servers, middle tiers), including calls to GetType(). Note that you won’t see these types of performance issues until your application is under high load (i.e. when the Garbage Collector kicks in).

All the CPUs!

Recently, the Entity Framework team was investigating a customer’s performance issue: no matter how many threads the customer used, they couldn’t drive CPU usage to 100% with EF. The EF team had verified that the customer was using threads correctly (avoiding shared state, especially DbContext objects) and that the application was not IO-bound (the test ran against a local SQL Server with a data set small enough to be fully cached). They gathered some profiles from the customer and saw that a large amount of time was spent waiting on Critical Sections in System.Data.SqlClient, so they enlisted my help.

As it turned out, the Critical Section was a red herring – a side effect of the way we implemented synchronous network calls for MARS. However, there was another event that a lot of threads were waiting on, which appeared to be reset when Garbage Collection started and set when it completed. Why were these threads waiting for the GC to complete? Because they wanted to disable being preempted by the GC, and you can’t do that while the GC is running.

Clearing the table while the customers are still eating

In a managed application the Garbage Collector deals with memory management, but it needs to do this in a thread-safe manner while the program is running. The typical solution is for the GC to simply pause all managed threads in the application while it collects, then resume them afterwards. The key phrase in that sentence is “managed thread”: there are times in a managed thread’s life when it has to delve into native code, such as when using PInvoke or when using Managed C++ to call into Native C++ (as is the case for SqlClient). In these situations the thread needs to tell the runtime that it is switching to being a native thread, and a side effect of this is that the thread can no longer be preempted by the GC.
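Any PInvoke signature triggers such a transition – for example, this standard Win32 import:

    using System.Runtime.InteropServices;

    static class NativeMethods
    {
        // Calling this switches the thread into native mode for the duration
        // of the call; the GC will not suspend it there, and if a collection
        // is in progress the thread must wait at the native/managed boundary
        // before it can return to managed code.
        [DllImport("kernel32.dll")]
        internal static extern ulong GetTickCount64();
    }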

There is another situation where we don’t want the GC to interrupt us, and that is during a call to SafeHandle.ReleaseHandle(). The documentation for this method states “The garbage collector guarantees… that the method will not be interrupted while it is in progress”, and one of the things that can interrupt it is the GC itself.
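For reference, a minimal SafeHandle subclass looks something like this (the class itself is just an illustration; CloseHandle is the real Win32 function):

    using System;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    // Minimal illustration of a SafeHandle subclass.
    sealed class MyNativeHandle : SafeHandleZeroOrMinusOneIsInvalid
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CloseHandle(IntPtr handle);

        public MyNativeHandle() : base(ownsHandle: true) { }

        protected override bool ReleaseHandle()
        {
            // The runtime guarantees this will not be interrupted while it
            // runs - which includes not being interrupted by the GC.
            return CloseHandle(handle);
        }
    }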

Getting to GetType()

Walking further up the call stack, one of the places that was disabling GC preemption was the network read\write calls in SqlClient. However, the default packet size is 8KB, which should be enough data to keep the CPU busy between calls – and we reuse buffers, so there isn’t much stress on the GC. These calls were therefore unlikely to be the issue.

The other call that kept appearing was GetType() – this was a surprise, since I didn’t see why this method needed to disable the GC. As it turns out, every time GetType() is called it creates and disposes a SafeTypeNameParserHandle, which inherits from SafeHandle. On top of that, there are a number of calls to native methods and some memory pinning. Put together, this puts a lot of stress on the GC and keeps disabling\enabling GC preemption. Even worse, as the customer added more threads, the GC was stressed further, and so the threads had to wait longer for the GC to finish.

Getting past GetType()

When checking type compatibility, there is no need to call GetType() – the as and is keywords are much faster (roughly 20x and 5x respectively, for single-threaded performance), and they also take care of inheritance and interfaces. There are only two reasons why you might need GetType(): either you are checking that an object exactly matches a given type, or you want to see if an object has a particular method. For the first of these scenarios, why does it matter that an object is a specific type rather than a sub-class? I honestly can’t think of a scenario where it would truly matter, unless you don’t trust a derived class to conform to the contracts set up by its parent, which is a very unusual situation to be in. As for checking for the existence of a method on an object, using reflection is probably a bad idea, since methods with the same name don’t necessarily have the same purpose – consider Stream.Read vs DbDataReader.Read vs Interlocked.Read, or, even more confusing, the static Type.GetType(string) vs the instance Type.GetType() inherited from Object. It is a much better practice to cast (via the as keyword) to the interface or class that has the method you are looking for, and then use the cast object.
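As a minimal illustration of the recommended pattern (Stream is just an example type):

    using System.IO;

    static long GetRemainingBytes(object source)
    {
        // 'as' yields null if 'source' is not a Stream (or a subclass of it),
        // handling inheritance without ever calling GetType().
        var stream = source as Stream;
        if (stream != null)
        {
            return stream.Length - stream.Position;
        }
        return -1;
    }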

Worst case scenario: if you really can’t avoid GetType(), then at least try to cache and reuse its result.
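For example, assuming you really do need exact-type comparisons in a hot path, something like this avoids repeated calls:

    using System;

    static string Describe(object value)
    {
        // Call GetType() once and reuse the result for every comparison,
        // instead of calling it repeatedly.
        Type type = value.GetType();
        if (type == typeof(int)) return "int";
        if (type == typeof(string)) return "string";
        return type.Name;
    }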

(As a side note, if you’re ever interested in seeing how the .NET Framework works, you can download and browse through the Reference Source.)
