SQL Server 2014 pushes the pedal to the metal

Sean McCown | March 21, 2014
SQL Server 2014 is a significant release with two overarching themes: cloud and speed -- or, to be specific, Azure integrations and in-memory OLTP (online transaction processing). Truth be told, I'm more excited about the speed features than the cloud stuff, but I also understand there is a growing portion of the customer base that is heading to cloud-based operations, and these shops will find the cloud features useful.

We'll start with what I consider to be the flagship feature of SQL Server 2014, Hekaton. It's Microsoft's flavor of memory-optimized tables: All you have to do is define a table as being memory-optimized, and the engine will take care of the rest. And unlike other in-memory database solutions, Hekaton allows you to bring individual tables into memory instead of the entire database. This follows a more realistic business scenario too, because you typically have only a handful of tables that need this kind of boost, and forcing the entire database into memory for the sake of a few tables isn't a good use of resources.
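Here's a minimal sketch of the DDL involved; the table and column names are hypothetical, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup:

-- Hypothetical memory-optimized table (requires a MEMORY_OPTIMIZED_DATA filegroup in the database)
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT       NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

DURABILITY = SCHEMA_AND_DATA keeps the data fully recoverable; SCHEMA_ONLY trades durability for even more speed, which can suit staging or session-state tables.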

Hekaton achieves its turbo boost through a combination of optimized algorithms, optimistic concurrency, eliminating physical locks and latches, and oh yeah, storing the table in memory. If you have stored procedures that hit only the memory-optimized table, you can convert them into Hekaton procedures to get even more speed. This compiles them into native C code, which optimizes their efficiency.
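To give a feel for the syntax, here is a minimal sketch of a natively compiled procedure against the hypothetical table above; the NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS, and BEGIN ATOMIC elements are what SQL Server 2014 requires for these procedures:

-- Hypothetical natively compiled (Hekaton) stored procedure
CREATE PROCEDURE dbo.usp_NewCart
    @CartId     INT,
    @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, CustomerId, CreatedDate)
    VALUES (@CartId, @CustomerId, SYSDATETIME());
END;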

There are limitations to the types of objects that can be converted, so read up and test your code before going whole hog with this solution. As a quick example, natively compiled stored procedures can't contain cursors, subqueries, or even CTEs (common table expressions). Those are just a few items on a long list, so do your homework; there are database- and table-level requirements as well. Given that this is a brand-new feature, I would expect these restrictions to diminish over time.

What kind of speeds are we talking about here? Will it be enough to make a conversion worthwhile? I've seen some pretty impressive demos, and Microsoft's site boasts a 10- to 30-fold performance boost. The exact speeds depend on many factors of course, but from what I've seen in the demos and in my own testing, these are realistic numbers. The truth on the ground is that if you have a highly transactional OLTP scenario and it meets the requirements for Hekaton, you'll fall in love.

SQL Server 2012 introduced columnstore indexes to dramatically improve data warehouse performance. I've personally seen queries that took several minutes with a traditional index reduced to subsecond times with the addition of a columnstore index. The problem was that columnstore indexes weren't updateable. In order to load data into the table, you had to drop the indexes first and re-create them afterward. SQL Server 2014 has solved that problem by making columnstore indexes updateable — nice job, Microsoft.
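In 2014, the updateable flavor is the new clustered columnstore index (nonclustered columnstore indexes remain read-only), so a fact table can be loaded in place. A minimal sketch, using a hypothetical dbo.FactSales table with no other indexes on it:

-- Hypothetical example: convert a fact table to a clustered columnstore index,
-- then load it directly without dropping and re-creating anything
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;

INSERT INTO dbo.FactSales (SaleDate, StoreId, Amount)
VALUES ('20140321', 42, 199.99);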

I always tell my junior DBAs that there are only two ways to resolve a bottleneck: Either reduce the workload or increase the throughput. While the previous features address the throughput, these next two features deal with the workload.

 
