Hacking in the Window Aggregate operator with a rowstore nonclustered index (9 minutes)

Note: following this recording, Microsoft introduced batch mode on rowstore as part of SQL Server 2019.

This video has captions you may enable in the viewer, plus a written transcript below.

Can we get SQL Server to give us our data nicely pre-sorted in a rowstore index, but use that fancy new batch mode Window Aggregate operator on it?

Let’s see if we can trick SQL Server

I’m going to drop my real columnstore index. I had a real columnstore index before, that had real data in it, with all of those columns.

I’m going to use a hack now

We’re getting into undocumented territory that is not technically supported. So if I get my hack working, it’s possible that I apply a cumulative update and my hack stops working.

Because this is not supported, this is not documented (by Microsoft), this is not necessarily reliable.

But the idea is, I’m going to create an empty filtered nonclustered columnstore index. I have the same nonclustered columnstore index definition right here.

I’ve added a where clause that simply cannot be right: FirstNameId cannot be equal to -1 at the same time it’s equal to -2.

It just can’t. No rows can go into this nonclustered columnstore index.
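The empty filtered columnstore index looks something like this sketch. The table and column names here are placeholders standing in for the demo’s baby-names schema, not the exact script, but the contradictory filter is the key part: no row can ever satisfy it, so the index stays empty.

```sql
-- Hypothetical table/column names; the contradictory WHERE clause
-- guarantees the index never contains a row, but its presence lets
-- the optimizer consider batch mode operators for the table.
CREATE NONCLUSTERED COLUMNSTORE INDEX nccx_empty_hack
    ON dbo.FirstNameByBirthDate (StateCode, FirstNameId, ReportYear, NameCount)
    WHERE FirstNameId = -1 AND FirstNameId = -2;
```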

This is a hack that Itzik Ben-Gan wrote about as well. The idea behind this hack is I want batch mode, and I want those cool new operators.

But maybe the table isn’t otherwise suitable for maintaining a columnstore index. If I create an empty columnstore index, maybe I can trick the optimizer.

As I said: not supported, behavior could change at any time, your mileage may vary. But will it do what we want?

Now, I’m going to look at an estimated plan

OK, I have that empty columnstore index, so what do I get? I don’t have the rowstore index, remember: I just have the clustered index. Well – let’s go ahead and run this – it says: I’m going to do the clustered index scan, and I’ve got to do this sort, but I can still do the new window aggregate operator.

So what we’re comparing is clustered index scan plus sort plus window aggregate operator against what we did with a real columnstore index last time.

If I run this for real, hey, that was actually faster! That was only four seconds using the rowstore index with the magic new operator. Even though we had to have this sort.

If we look at our sort here, this is a batch mode operator, so the actual time stats are only for this operator. This sort took two seconds.

Okay, the sort IS taking a big chunk of our time.

The window aggregate operator took fifty-eight milliseconds.

(!!!) How is that possible? It’s seriously like, how is it so fast? It’s magic! Magic.

It’s also batch mode, of course, and…we have more things to do, but we’ve just saved so much time on the whole windowing part that like “woooo!”.

All right. Let’s try to get rid of that sort!

I’m going to recreate my rowstore index that sorts the data by my partitioning columns as well as my order by column, and it also covers the query. In the past, it wouldn’t use this for @Threshold = 500. If I hit Ctrl + L to look at my estimated plan, once again I’m back to this slow window spool operator.
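The recreated rowstore index might look something like this sketch (hypothetical names again): keyed on the window’s PARTITION BY columns, then the ORDER BY column, with the aggregated column included so the index covers the query and the data arrives pre-sorted.

```sql
-- Hypothetical names: key columns match the window's
-- PARTITION BY (StateCode, FirstNameId) then ORDER BY (ReportYear),
-- so no sort is needed; INCLUDE makes the index covering.
CREATE NONCLUSTERED INDEX ix_partition_order_covering
    ON dbo.FirstNameByBirthDate (StateCode, FirstNameId, ReportYear)
    INCLUDE (NameCount);
```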

This is our row mode window spool operator. I do have some operators farther up in the plan going to batch mode, but not the ones where it’s actually slow.

What if – is this a given with my query, or is it just something about this particular parameter value?

Let’s change our @Threshold

I’ve been using @Threshold = 500 so far. What if I say @Threshold = 1000?

An estimated plan again, and sure enough, if I change my @Threshold it decides: oh I can use this rowstore index! And now that my estimated rows I’m going to deal with have changed somewhat, okay I’ll give you your window aggregate batch mode operator!

So it’s not that it CAN’T do it, there’s just something about the way my query was optimizing with the costs, and the row estimates where it didn’t think it was worth it. Well, I think it’s worth it! Right?

Let’s go ahead and run it for @Threshold = 1000

I got my results back in three seconds. Now my results are different because I changed my threshold– we can see that in Alabama the name Skyler passed the threshold in 2015. There were at that point a total of one thousand four Skylers, whereas the previous year there had been only eight hundred and fifty-seven. Harper, also the same year. I always think this is interesting. Jaylen: names that I don’t know, but I like.

Where exactly is the estimate tipping over? Six hundred eighty nine and six hundred ninety is where it changes.

If I do an estimated plan at six eighty-nine, I have row mode window spools. If I do an estimated plan at six ninety, I have window aggregate operators.

Well, I always want that window aggregate operator! I’m going to add a hint.

There’s a bunch of different hints I could try. In this case I’m going to say: I want you to use the legacy cardinality estimator for this query.

By default it’s using what’s called the new cardinality estimator, which I believe got added in SQL Server 2014 – someone correct me if I’m wrong (note: I was right!). My memory of dates is not that great, but I’m saying: go ahead and use the old one, the one from SQL Server 2012 and lower.

I don’t want to change my compatibility level for the database. Changing my compatibility level changes a lot!

I don’t want to change my legacy cardinality estimation for the whole database, either. I just want to hint for this query, I don’t want to impact anybody but me.
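A query-scoped way to do that is the `USE HINT` syntax (available from SQL Server 2016 SP1). The query shape and names below are illustrative, not the exact demo script; only the `OPTION` clause is the point.

```sql
-- Illustrative query shape; only the OPTION clause matters here.
-- FORCE_LEGACY_CARDINALITY_ESTIMATION applies the pre-2014 CE to
-- this statement only, without touching the database's settings.
SELECT StateCode, FirstNameId, ReportYear,
       SUM(NameCount) OVER
           (PARTITION BY StateCode, FirstNameId
            ORDER BY ReportYear) AS RunningTotal
FROM dbo.FirstNameByBirthDate
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```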

I could use other hints…

In fact, the MAXDOP = 1 hint gives us the window aggregate operator in this case, but I still want parallelism to be possible for this. I’ve chosen to use legacy cardinality estimation.

There are other hints you can use. You could test out and play around and see, in terms of the other ones, what they do for you, that’s always kind of fun. I could have used an ‘optimize for’ hint.
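As a sketch of those alternatives, with hypothetical names and an illustrative threshold value: `OPTIMIZE FOR` compiles the plan as if the variable held a value whose row estimates favor the batch mode operator, and `MAXDOP 1` (commented out below) also produced the window aggregate here, at the cost of forcing a serial plan.

```sql
-- Hypothetical names; @Threshold = 1000 is the value that produced
-- the window aggregate plan in the demo.
DECLARE @Threshold INT = 500;

SELECT StateCode, FirstNameId, ReportYear, NameCount
FROM dbo.FirstNameByBirthDate
WHERE NameCount >= @Threshold
OPTION (OPTIMIZE FOR (@Threshold = 1000));
-- Alternative: OPTION (MAXDOP 1);
```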

But now I’m going to rerun with @Threshold = 500.

I’m going to look at my plan.

It did decide to go single threaded. I didn’t force it, it decided to do it.

But, I have my batch mode window aggregate operator.

I have my rowstore index. It decided to use my rowstore index, give me the new batch mode window aggregate operator, there is NO SORT in here, right? Data is pre sorted in this rowstore index.

And look at my timing here. Two hundred ninety seven milliseconds for this operator. That’s for the operator, which is batch mode.

Well, how long did it take to get the data out? Actually pulling the data out of the rowstore index took two-point-six seconds and is single threaded.

So we could play around more to see if we could get it all to go parallel.

We are much faster than we were at the beginning!

I can even run this for @Threshold = 1, and with that optimization hint in there– and this is going to return all of the rows. Names only get reported in here if they have a count of five or more, and I’m saying @Threshold = 1.

This one, even when I say @Threshold = 1, I still get the fancy new operators, and we’re still finishing in six seconds, even though we’re displaying three hundred and nine thousand rows.

It’s faster than the twenty-seven to thirty seconds we started out with.

That new operator is pretty awesome.

Curious about the T-SQL?

We’re going to clean it up now. If you’re wondering why there are two CTEs in the query, and whether you could combine them: at the bottom of the script I walk you through why there are two, just playing around with the windowing functions. Someone usually asks that question when I have two CTEs, and the answer is in the script.