How to Tune Indexes for a Stored Procedure

November 29, 2016

You’ve got an important stored procedure that you think needs index help, but it runs in an environment with lots of other queries. How do you focus in and discover exactly which indexes need tuning for that procedure?

The best way to tune indexes in a stored procedure

The best way is to run the stored procedure yourself to generate and save an “actual” execution plan, which contains the estimates SQL Server used when it generated the plan, as well as actual row counts, actual memory granted, and so on. It will also contain a green “missing index request” hint if SQL Server thinks an index would help.

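If you’d rather script this than click “Include Actual Execution Plan” in Management Studio, here’s a rough sketch using SET STATISTICS XML. The procedure name and parameter values below are made up; plug in your own.

```sql
-- Return the actual execution plan (with runtime row counts) as XML
-- alongside the results. In SSMS, "Include Actual Execution Plan"
-- (Ctrl+M) does the same thing.
SET STATISTICS XML ON;
GO

-- Hypothetical procedure and parameters; use real-world values
-- that your application actually sends.
EXEC dbo.GetOrdersByCustomer
    @CustomerId = 42,
    @StartDate  = '2016-01-01';
GO

SET STATISTICS XML OFF;
GO
```
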
Tips on how to tune procedures with actual execution plans:

  1. If you execute the stored procedure from a free tool like SQL Sentry Plan Explorer, you can even save off the plans (which will have green missing index requests where SQL Server thinks they would help), along with runtime stats about CPU usage, reads, and duration.
  2. Runtime stats are really important and can save you tuning time. When looking at execution plans, remember that all references to “costs” are estimates. A query could have a high estimated cost, but only take 2 milliseconds of duration. Usually that’s not worth your time to tune!
  3. Most procedures are parameterized. You want to run the procedure with real-world values for those parameters. You also want to test what happens when it runs with a different set of parameters and re-uses the execution plan generated by the first set (there’s a rough sketch of this after the list).
  4. Always look at those missing index requests as suggestions. You may be able to improve upon them, and they may not always show up when an index change is needed! Look for expensive operations in queries with a long duration and ask yourself if an index might help.
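
Here’s the rough sketch I promised for tip #3. The procedure name and parameter values are hypothetical; sp_recompile just marks that one procedure for a fresh compile, which is gentler than clearing the whole plan cache.

```sql
-- Force the next execution to compile a fresh plan for this procedure
-- (safer than DBCC FREEPROCCACHE, which clears the whole plan cache).
EXEC sp_recompile N'dbo.GetOrdersByCustomer';
GO

-- The first execution compiles a plan for these parameter values...
EXEC dbo.GetOrdersByCustomer @CustomerId = 42, @StartDate = '2016-01-01';
GO

-- ...and this execution re-uses that plan with very different values,
-- so you can see how the "sniffed" plan behaves for them.
EXEC dbo.GetOrdersByCustomer @CustomerId = 90210, @StartDate = '2010-01-01';
GO
```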

The biggest problem is that you can’t always run a procedure against production. Sometimes the procedure modifies data, sometimes it just impacts performance too much. You can work around this by restoring a recent backup to a different environment where it’s safe to test.
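
If you go the restore route, the outline looks something like this. Every name and path below is a placeholder; yours will be different.

```sql
-- Restore a recent production backup onto a non-production instance
-- where it's safe to run the procedure and capture actual plans.
RESTORE DATABASE SalesDB_Test
FROM DISK = N'\\backupshare\SalesDB\SalesDB_full.bak'
WITH
    MOVE N'SalesDB'     TO N'D:\Data\SalesDB_Test.mdf',
    MOVE N'SalesDB_log' TO N'L:\Logs\SalesDB_Test_log.ldf',
    STATS = 10,
    RECOVERY;
GO
```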

If you can’t get actual execution plans with runtime stats, there are a couple of alternatives

Nearly as good: enable SQL Server 2016’s Query Store feature. It will collect runtime stats (CPU usage, duration, reads, writes) and execution plans from production. They aren’t actual plans (no actual row counts, etc.), but it’s still a lot of useful information. However, you have to be on SQL Server 2016, and Query Store is per-database, so you need to enable it in every database the procedure touches.
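
Here’s a minimal sketch of turning Query Store on and pulling runtime stats for a single procedure later. The database and procedure names are made up, and since Query Store is per-database, you’d run the query in each database the procedure touches.

```sql
-- Turn on Query Store for the database (SQL Server 2016+).
ALTER DATABASE SalesDB SET QUERY_STORE = ON;
ALTER DATABASE SalesDB SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO

-- Later: pull aggregated runtime stats and plans for one procedure.
USE SalesDB;
GO
SELECT
    q.query_id,
    p.plan_id,
    rs.count_executions,
    rs.avg_duration,          -- microseconds
    rs.avg_cpu_time,          -- microseconds
    rs.avg_logical_io_reads,
    CAST(p.query_plan AS xml) AS query_plan
FROM sys.query_store_query AS q
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
WHERE q.object_id = OBJECT_ID(N'dbo.GetOrdersByCustomer')
ORDER BY rs.avg_duration DESC;
```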

One way that sometimes works is to get the cached execution plan from memory in production, along with the query execution stats (CPU usage, duration, reads, writes). You can do this with a free script like sp_BlitzCache from Brent Ozar Unlimited, or Glenn Berry’s free diagnostic queries from SQLskills. However, the plan isn’t always in memory when you look, and these aren’t actual plans, either.
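
If you want to query the DMVs yourself instead of (or alongside) those scripts, something like this pulls the cached plan and aggregated stats for one procedure. The database and procedure names are placeholders, and you only get results if the plan is still in cache.

```sql
-- Cached plan and aggregated execution stats for one stored procedure.
-- Nothing comes back if the plan has been evicted from the plan cache.
SELECT
    OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
    ps.execution_count,
    ps.total_worker_time,     -- CPU, microseconds
    ps.total_elapsed_time,    -- duration, microseconds
    ps.total_logical_reads,
    ps.total_logical_writes,
    qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE ps.database_id = DB_ID(N'SalesDB')
  AND ps.object_id   = OBJECT_ID(N'SalesDB.dbo.GetOrdersByCustomer');
```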

Methods I would avoid

I wouldn’t spend much time looking at the “Missing Index DMVs” for this particular problem. In theory, you can record what indexes SQL Server has asked for, let things run, and then see what requests are new. But in practice, I’ve always found this information to be too incomplete and frustrating, because it doesn’t tell you how long things took, SQL Server doesn’t always ask for an index, and the requests aren’t always perfect. I find it much easier to have the execution plans and queries for reference – ideally actual plans, but at least estimated plans with runtime stats.
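
For reference, the requests live in DMVs like sys.dm_db_missing_index_details. A snapshot query looks roughly like this, but remember the caveats: it’s instance-wide, resets on restart, and says nothing about which queries were involved or how long they took.

```sql
-- Snapshot of outstanding missing index requests, highest "impact" first.
-- These reset when the instance restarts and don't tell you which
-- procedure asked, or how long the queries took.
SELECT
    DB_NAME(d.database_id) AS database_name,
    d.statement            AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_user_impact,
    s.last_user_seek
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```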

I would avoid running a trace to collect execution plans in production. Whether you use Profiler, a server-side trace, or Extended Events, collecting execution plans unfortunately slows down your SQL Server, so you’re impacting performance for the whole server and also skewing your measurement. (Vote up this Connect bug on the issue if you like traces and wish this were better. The bug is closed, but sometimes closed bugs are reopened and fixed.)