Explore the slow query's execution plan (15 minutes)

The secrets are in the execution plan. Here’s how to explore it!

Nested Loop fun in execution plans

You might wonder: why did the “bar” coming out of the Index Seek at the bottom of a nested loop show 6,016.2 estimated rows, while the “bar” coming out of the Key Lookup (Clustered) showed 1 estimated row?

Answer: both of these bars showed the number of estimated rows per execution.

In the case of the Index Seek, SQL Server thought it would have to execute that seek only once and bring back 6,016.2 rows.

In the case of the Key Lookup (Clustered), SQL Server thought it would have to execute that key lookup 6,016.2 times (one for each row).

In both cases, to see the estimated and actual number of executions, you need to view the tooltip or properties for the operator, not just the bar between operators.

Finding out which statistics were used by an execution plan

I don’t show this in the demo, but I secretly used some trace flags to validate which statistics were loaded when SQL Server compiled the execution plan for my slow query.

As of SQL Server 2017, we’ll have the ability to see statistics used to compile the plan in some execution plans themselves. I used trace flag 2363 as described in this post by Paul White (see the post as well as the comments): http://sqlblog.com/blogs/paul_white/archive/2011/09/21/how-to-find-the-statistics-used-to-compile-an-execution-plan.aspx
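If you want to try this yourself, the usual pattern looks something like the sketch below. These are undocumented trace flags, so treat this as test-instance-only, and note that the table and column names here are just placeholders standing in for the demo query.

    -- Sketch only: undocumented trace flags, test instances only.
    -- TF 3604 routes trace output to the Messages tab;
    -- TF 2363 (new cardinality estimator) reports the statistics loaded during optimization.
    SELECT COUNT(*)
    FROM dbo.FirstNameByBirthDate AS fnbd          -- placeholder table name
    JOIN ref.FirstName AS fn
        ON fn.FirstNameId = fnbd.FirstNameId       -- placeholder join column
    WHERE fn.FirstName = 'Matthew'
    OPTION (QUERYTRACEON 3604, QUERYTRACEON 2363, RECOMPILE);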

Transcript

What is up with the query and why is it slow?

I’ve run the query against a cold cache with actual execution plans enabled.
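The setup commands aren’t shown on screen, but a “cold cache” is typically produced on a test instance with something along these lines (never against production):

    -- Test instance only: empty the buffer pool so every page is read from disk.
    CHECKPOINT;              -- flush dirty pages first
    DBCC DROPCLEANBUFFERS;   -- then drop all clean pages from memory
    GO
    -- In SSMS, 'Include Actual Execution Plan' (Ctrl+M) is toggled on before running the query.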

Let’s take another look at that execution plan. This is the slow nested loop plan that we get after creating the nonclustered index.

I’m just going to hover over this line {bar/pipe} that goes down to the Index Seek against our new index. Let’s take a look at what we’ve got here.

SQL Server is estimating that it’s going to get 6,016.2 rows back for all the babies named Matthew…

But this estimate is a little bit low. Okay, it’s WAY low. We actually have more than 1.4 million rows for Matthew.

There are a lot of Matthews, but SQL Server vastly underestimated how many there were and this carries through the plan.

If I look at my lookup over here …

Now if you hover over the line, it might make you think, “oh, it only estimated one row.”

This is a little misleading on our Key Lookup. I’m going to click on the Key Lookup itself and zoom in. For this Key Lookup, it’s important to look at both the estimated number of executions and the estimated number of rows.

On the Key Lookup, every time it runs it’s only getting back the one row. So it did estimate, “I will get 1 row back each time, but I’m going to have to go fetch it 6,016.2 times.” It actually had to execute this 1.45 million times.

It’s the same estimate. It’s just that for this operator we can’t rely on that bar/pipe alone, because the bar/pipe isn’t showing us all of the details. We can see this in the properties as well; the tooltip is just the selected properties that it’s showing for this operator. Well, what’s up with that?

Why do we have this low estimate? Are our statistics wrong?

SQL Server uses statistics, these lightweight objects that help it figure out, for a given value, roughly how many rows we have. It seems like SQL Server doesn’t know much about our Matthews.

Let’s dig into this a little bit.

We can ask SQL Server what the statistics are on a given object

I’m looking at a couple DMVs here.

I’m looking at sys.stats as well as sys.dm_db_stats_properties, which lets us query and say, “Tell me a lot of info about my statistics.” I am just looking at stats on the big table with all those rows.

Essentially I want to know the stats on that one, because that’s the table where the query was faster when it was doing the clustered index scan, and slower when it decided to use that nonclustered index.
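A query along these lines lists the statistics and their properties; the table name is just a placeholder for the big table in the demo.

    -- Placeholder table name; substitute the large table from the demo.
    SELECT s.name,
           s.auto_created,            -- 1 = auto-created column statistic
           sp.last_updated,
           sp.rows,
           sp.rows_sampled,
           sp.modification_counter    -- rows modified since the last update
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID('dbo.FirstNameByBirthDate');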

Listing out our statistics on that table, we actually have a couple of statistics for the FirstNameId column, because I queried the table before I created an index on FirstNameId, and I have the default database property of Auto Create Statistics on. When I was querying the table, it said, “I should automatically create a column statistic for you, because you’re joining on FirstNameId.” Then, when I created the index: every time I create a disk-based rowstore index, I get statistics associated with it. So, I have an index statistic on FirstNameId as well.

Scrolling to the right and looking at this statistic, I can see the issue isn’t that my statistic didn’t look at all the rows. The number of rows sampled in this statistic is the same as the number of rows in the table, and it’s not like data has been changing since this statistic was created.

SQL Server did use this statistic and this index, when it compiled my query.

Let’s dig into that statistic and look at where it’s getting this super low estimate

I’m going to use the command DBCC SHOW_STATISTICS and pop in the name of the table and then the name of the statistic, which in this case is just the name of the index. It’s an index related statistic. I can see a lot of info about this statistic, which is really cool. I love that SQL Server shows this to us.
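The command is shaped roughly like this, with placeholder names for the table and the index-backed statistic:

    -- Placeholder object names for illustration.
    DBCC SHOW_STATISTICS ('dbo.FirstNameByBirthDate', 'ix_FirstNameByBirthDate_FirstNameId');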

We’ve got three parts of information that we can see here. There are three result sets returned.

  1. The top here is the header of the statistic.
  2. Then we have the density vector for the statistic.
  3. And then finally, this third dataset down here is the histogram for the first column in this statistic.
    1. In this case, that’s FirstNameId.
    2. I get detailed info for each value listed under RANGE_HI_KEY here.
    3. I can get up to 200 steps in this histogram (201 when the column allows NULLs), and these are the FirstNameIds along with information on how many rows are equal to each FirstNameId.

I happen to know that Matthew’s FirstNameId is 28,073

Let’s find, in this histogram, what SQL Server knows about 28,073. By luck, or by craft, we actually have that exact value in our histogram. It can’t have a step for every row. We only get up to 201 steps, but we are lucky: Matthew actually has a step in the histogram. Check this out.
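If you’d rather query for the step than scroll through the DBCC output, newer builds (SQL Server 2016 SP1 CU2 and later) also expose the histogram through sys.dm_db_stats_histogram. A sketch, with the same placeholder names:

    -- Placeholder object/statistic names; 28073 is Matthew's FirstNameId from the demo.
    SELECT h.step_number, h.range_high_key, h.equal_rows, h.range_rows, h.average_range_rows
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_histogram(s.object_id, s.stats_id) AS h
    WHERE s.object_id = OBJECT_ID('dbo.FirstNameByBirthDate')
          AND s.name = 'ix_FirstNameByBirthDate_FirstNameId'
          AND h.range_high_key = 28073;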

SQL Server knows that for Matthew’s FirstNameId it has 1.45 million rows. Let’s draw in the commas, just so it’s really clear here.

This was not our estimate.

SQL Server is not using the histogram for this

Here’s where that six thousand and change estimate is coming from. It IS using this statistic.

There are two numbers that it’s using here.

It is using this column in the density vector. The density for FirstNameId basically means: “For any given FirstNameId in the table, if I was just to pick a random FirstNameId, about how many rows would it have?”

We have some FirstNameIds that are going to have very few rows and then we have some that are like Matthew that are going to have a lot, but like for your sort of average, medium, mediocre FirstNameId, about how many do they have?

That is what the density number helps us with, and to get the estimate, we multiply the number in the density vector by the number of rows in the table, which we can see in the header of the statistic.

At the time the statistic was last updated, here’s how many rows the table had, and it actually did look at every single row when it was crafting this information. Let’s look at those two numbers right here.

I’ll show them both on the screen at the same time so you can see where the number comes from: our ‘all density’ number is 3.77E-05. Then let’s look at that header again. Here is our number of rows. When we multiply the density number by the number of rows, sure enough we get back 6,016.2 (rounded, as it’s shown in the execution plan).
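Written out as a quick sanity check (the row count below is a rounded placeholder; use the Rows value from your own statistics header):

    -- Estimate = 'All density' (density vector) x 'Rows' (statistics header).
    DECLARE @all_density float  = 3.77E-05;      -- All density for FirstNameId
    DECLARE @table_rows  bigint = 159580000;     -- placeholder; read Rows from the header
    SELECT @all_density * @table_rows AS density_based_estimate;   -- roughly 6,016.2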

In other words, SQL Server didn’t look up the FirstNameId for Matthew

When it was processing that join, it said: “Okay, I don’t know what the FirstNameId is, I’m going to use 6,016.2.” And what we’re going to see is that SQL Server will use this estimate even for a name that has far fewer rows than Matthew.

We’re tuning and testing this against a cold cache, so I’m going to tell SQL Server to abandon the buffer pool again.

Now I’m going to run our procedure for a different FirstName

Show me all of the ‘Fay’s. I’m using this WITH RECOMPILE at the end of the command to say: “When you execute this stored procedure, don’t use the cached execution plan.” If I don’t say WITH RECOMPILE, I’ll reuse Matthew’s plan, if I’ve run it before and haven’t changed anything on the table (like adding an index, or modifying a lot of data that might auto-update statistics, or something like that).

So we’re saying, give Fay a fresh plan, and don’t let anyone else reuse it either. Our actual execution plans are on.
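The call looks something like this; the procedure and parameter names are placeholders for the demo’s stored procedure:

    -- Placeholder procedure and parameter names.
    EXEC dbo.NameCountByFirstName @FirstName = 'Fay' WITH RECOMPILE;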

And wow, Fay is super fast.

When we look at Fay’s execution plan, Fay got this same nested loop plan that is so slow for Matthew, but it’s really fast for Fay. The reason that it’s really fast for Fay is there are only 508 rows that it has to deal with for Fay. When it gets to the point out here of doing this Key Lookup, it’s not doing it 1.45 million times. It’s doing it 508 times. Nested loop operators are great when you don’t have to execute them a million-plus times, when you’re only doing it like 508 times. This is actually a great plan for certain values of that parameter, but check out the estimate.

It didn’t craft this plan for Fay. It used the 6,016.2 estimate.

It didn’t know Fay’s FirstNameId any better than it knew Matthew’s. It’s just that Fay happens to be closer to the average number of rows per FirstNameId for that table.

Let’s compare some estimated execution plans

This first one here is basically our exact same query that we’ve been looking at. It’s not a stored procedure anymore and we’re not using a parameter. I have simply said okay, I want you to use the literal value of Matthew here.

So we should see our familiar plan. I’m just going to do Ctrl+L. Ctrl+L is the same as … oh, let’s actually highlight the right button here. Ctrl+L is the same as this icon, which is ‘Display estimated execution plan’. Now I get just my “what would I do if I ran this query?” without running it.
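If you prefer a T-SQL equivalent to the toolbar button, the estimated plan can also be requested like this (the SET statement has to be alone in its batch):

    SET SHOWPLAN_XML ON;
    GO
    -- Run the query here: it is compiled but not executed,
    -- and the estimated plan comes back as XML.
    GO
    SET SHOWPLAN_XML OFF;
    GO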

This is our familiar plan for Matthew. If I look at the cost, this is the slow plan for Matthew, and it’s the same old cost. This is what the stored procedure’s est… and cost is always an estimate: 19.75. If I look over at the estimate here, we don’t have any actuals, because this is just the “what would you do?” estimated plan, but there’s our familiar estimate.

Now let’s compare this to a rewrite

I’ve rewritten this query so that I have specified the FirstNameId. I still have my join in here, and I have even put in the predicate where fn.FirstNameId equals a value, and that value is Matthew’s FirstNameId.
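The rewrite is shaped roughly like this, with placeholder table and column names (28073 is Matthew’s FirstNameId from the demo):

    -- Placeholder names; the point is the literal FirstNameId predicate on the join column.
    SELECT COUNT(*)
    FROM dbo.FirstNameByBirthDate AS fnbd
    JOIN ref.FirstName AS fn
        ON fn.FirstNameId = fnbd.FirstNameId
    WHERE fn.FirstNameId = 28073;   -- the equality propagates across the inner join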

But SQL Server’s pretty clever. It looks at this, sees the equality, and knows: hey, you’re joining on these columns, so if you’re requiring the FirstNameId over here to be this integer, it also has to be the same value on the other side of the join, because it’s an inner join. This gives us a different plan. I’m just going to do, once again, Ctrl+L for the estimated plan.

Well, that’s a different shape right away! And I have a big green index hint at the top, too.

SQL Server’s like: hey, you could improve on this index. You could improve on it by a lot, by 99.2%. Now, there are some problems with this: it’s not suggesting I modify my existing index. That’s a create index command, which is basically going to be a {near} duplicate. But at least we’ve gotten a little smarter, right? Let’s look at our cost.

Our cost for this: hey, we have realized this is not a cheapo 19.75 query. This is going to be more work. When we go over and look at the Clustered Index Scan, it decided not to use the nonclustered index. It decided not to use all those Key Lookups. Instead it said: oh, I’d better scan this clustered index, because hey, look at the estimated number of rows!

It used the histogram this time and it said: hey, Matthew’s got… there are a LOT of Matthews. For this FirstNameId there are approximately 1.45 million rows, and I don’t want to do that many lookups in a nested loop for that.

The reason that it did this, though, is because we gave it the FirstNameId

It might sound like SQL Server’s being dumb here, but this is actually fair. Let’s talk a little more, back in the slides, about why SQL Server didn’t have this information in our stored procedure.