Announcing Real-Time Analytics with SQL Server 2016: Our Performance Test Results

By Dan Kogan 10 Mar, 2016

Today, Microsoft announced the launch of the latest version of its SQL Server product, SQL Server 2016. Microsoft asked us to run some early performance tests on Tableau, and the results were pretty impressive, so we thought we'd share a little more detail with you here.

What Did We Test?

We ran through a subset of the TPC-H battery of tests, which are designed to "examine large volumes of data, execute queries with a high degree of complexity, and give answers to critical business questions." These compared SQL Server 2016 with its new Clustered Columnstore Index for transactional databases against SQL Server 2012 with a row store schema. We also used the flight info data set—a 285 GB data set whose largest table contains 157,089,894 flight records—to compare the SQL Server 2016 Clustered Columnstore Index against a SQL Server 2014 row store schema.
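For readers who want to try this on their own data: converting a row store fact table to a clustered columnstore index looks roughly like the sketch below. The table and column names here are illustrative, not the actual schema from our test.

```sql
-- Hypothetical flights fact table (names are illustrative).
CREATE TABLE dbo.FlightRecords (
    FlightDate      date    NOT NULL,
    Carrier         char(2) NOT NULL,
    OriginAirport   char(3) NOT NULL,
    DestAirport     char(3) NOT NULL,
    DepDelayMinutes int     NULL
);

-- Store the entire table in compressed columnar format.
-- A clustered columnstore index takes no column list:
-- it IS the table, so it covers every column.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FlightRecords
    ON dbo.FlightRecords;
```

Because the clustered columnstore index replaces the row store as the table's physical storage, analytic scans read only the compressed columns they need rather than whole rows.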

What Were the Results?

Overall, we saw an average query performance increase of over 190 percent, but the longest-running, most complex queries were also the ones where we saw the biggest gains. The longest-running query ran in 3.83 seconds in SQL Server 2016 compared to 3048 seconds in SQL Server 2014—that’s a 795x performance improvement! Here is the full set of test results by query:

What Does This Mean to You?

If you use SQL Server as your data source, you're going to be able to explore your data much more quickly. But even more than that, you can now do real-time analytics directly against an operational data store running SQL Server 2016. In the past, it would have been much more likely that data was being moved from the transactional store to another data store optimized for reporting and analytics.

Caveat: Your Mileage Will Vary

While these are great results, do keep in mind: 1) not every query we tested saw a performance improvement, and your unique scenarios will likely see different results than we saw here, and 2) the performance gains were most pronounced on large data sets. If you have a smaller data set, the gains may not be as noticeable for you. That said, we're impressed with what we've seen from this latest version of SQL Server and see increasing value for the many joint Tableau and SQL Server customers.

To learn more about Tableau and SQL Server, see our SQL Server technology page.


Submitted by Ben Sullins (not verified) on

This is great news for SQL Server users! I would love to see this test done comparing SQL 2016 with columnstore indexes to a true columnar db like Vertica though. Even maybe against Redshift or one of the cloud providers.

Submitted by Ivan (not verified) on

Dan .png? Are you kidding?

Submitted by Vasil P. on

Where is the TripDetail query performance for SQL Server 2012?

Submitted by Vishal Sharma (not verified) on

The dashboard is quite interesting for seeing the precise comparison between an old and new version of SQL Server with a 285 GB data set. Now that is huge!

Submitted by Matt C. on

Were traditional, tuned, nonclustered indexes used to cover the TripDetail queries on the SQL 2014 version of the data? Obviously the clustered columnstore index would be more flexible in terms of a dynamic ad-hoc analytic workload, but for a dataset as large as this, I'd certainly expect the DBA to have built indexes to support Tableau workbooks trying to analyze it.

Submitted by Evan Schmidt (not verified) on

Hi Dan,
Can you direct me to where I can get an answer to the following question?

When Tableau is running with Integrated Active Directory Security, with a Live Connection to the SQL 2016 database, will Tableau pass the user information back to SQL server so that SQL Server can apply the integrated "Row Level" Security? (Row level security is a new feature in SQL 2016)

thanks, Evan

Submitted by Evan Schmidt (not verified) on

Great news!! Thanks for the quick response and glad to hear about your new role. Very cool.

From the blog: TLDR --> SQL Server RLS did simplify things substantially, and it’s faster.
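For anyone curious what row-level security looks like on the SQL Server 2016 side of this setup, a minimal sketch follows. The schema, table, and column names are hypothetical; the pattern (an inline predicate function attached via a security policy) is the documented RLS mechanism.

```sql
-- Hypothetical example: filter dbo.Sales so each rep sees only
-- their own rows. Names here are illustrative.
CREATE FUNCTION dbo.fn_SalesFilter(@SalesRep sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS allowed
    WHERE @SalesRep = USER_NAME();  -- identity passed through by the client
GO

-- Rows failing the predicate are silently filtered from every query.
CREATE SECURITY POLICY dbo.SalesFilterPolicy
    ADD FILTER PREDICATE dbo.fn_SalesFilter(SalesRep)
    ON dbo.Sales
    WITH (STATE = ON);
```

With a live connection authenticating as the end user, the filter applies transparently; no per-workbook filtering logic is needed on the client side.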

Submitted by Sri (not verified) on

Wow, that's fast! Agreed with Ben Sullins—they should test it out against Redshift and other columnar DBs.

Submitted by Mosha (not verified) on

How big was the TPC-H dataset? There are different scale factors, from 1 GB to 100 TB—which one did you use in your measurement?

Submitted by Matt S. on

This is exciting news and with the lifting of some of the limitations on column-store indexes, we're getting closer to being able to query directly from the transactional database, assuming that it uses this architecture of course. Here's the latest from Microsoft on this:

The problem I run into is that I usually only get called in to put analytics on top of other systems. The holy grail is to have a transactional DB that can be queried rapidly for analytics and that will scale for "big data". If you want to use in-memory tables and columnstore indexes in SQL Server 2016, the transactional system will have to be architected that way in the first place (or overhauled heavily). That task is very often out of the hands of the "BI" development team although we're always free to make the case. Even then, you're taking a chance that it won't end up being fast enough and it will have been largely a waste. It looks very promising, though.

Submitted by Will (not verified) on

Ermm... didn't SQL Server 2012 have columnstore indexes? What sort of improvement has been achieved in comparison to that?

Submitted by Matt S. on

In 2012, columnstore indexes were not update-able so you had to drop the index and rebuild it each time. That's a show-stopper on many large systems as the time it takes to generate the index can be very long and can cause resource contention. In 2012 they were also limited to non-clustered indexes which basically means that the index is separate from the data and thus not as efficient to work with as a clustered index, which is the data itself, stored in columnar format, compressed, sorted, filtered, etc...

2014 brought clustered columnstore indexes which are automatically updated when new data is inserted, existing data is updated, or data is deleted (stands to reason since a clustered index is the data itself). However, non-clustered columnstore indexes were still not update-able and they could not be filtered.

2016 introduces update-able and filter-able columnstore indexes and in-memory columnstore indexes (however, as it turns out, the in-memory ones cannot be filtered). There are also some other limitations such as columnstore indexes that are on in-memory tables must contain all of the columns of the underlying table and the index must be applied when the table is created.

The bottom line is that they used to have a lot of limitations that were nearly impossible to work around and now they have a few limitations that are easy to work around.
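The 2016 capabilities described above can be illustrated with a short sketch: a filtered, updatable nonclustered columnstore index layered onto an OLTP table. The table and column names are hypothetical.

```sql
-- SQL Server 2016: a filtered, updatable nonclustered columnstore
-- index on a transactional table (names are illustrative). Only
-- "cold" rows are included, keeping the hot insert/update path lean
-- while analytic queries scan the compressed columnar copy.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Orders_Closed
    ON dbo.Orders (OrderDate, CustomerId, Amount)
    WHERE OrderStatus = 'Closed';
```

Unlike the 2012 incarnation, this index stays in sync automatically as rows are inserted, updated, or deleted—no drop-and-rebuild cycle required.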

Submitted by Matt S. on

Correction: I said above that clustered indexes are stored compressed, sorted, filtered, etc...

They cannot be filtered because a clustered index is the data itself and therefore is the entire set.

Submitted by Richard Chambers (not verified) on

This is a useful infographic on 16 new features of SQL Server 2016:
