How to Tune Indexes for a Stored Procedure

You’ve got an important stored procedure that you think needs index help, but it runs in an environment with lots of other queries. How do you focus in and discover exactly which indexes need tuning for that procedure?

The best way to tune indexes in a stored procedure

The best way is to run the stored procedure yourself to generate and save an “actual” execution plan, which contains the estimates SQL Server used when it generated the plan as well as actual rowcounts, actual memory granted, etc. It will also contain a green tooltip with a “missing index request” if SQL Server thinks an index would help.
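One way to capture that actual plan from a query window is `SET STATISTICS XML ON`, which returns the plan alongside the results. A minimal sketch — the procedure and parameter names here are hypothetical placeholders:

```sql
-- Returns the actual execution plan (with runtime row counts,
-- memory grant info, and any missing index request) as an extra
-- XML resultset. Procedure and parameter are illustrative.
SET STATISTICS XML ON;
GO
EXEC dbo.GetOrdersByCustomer @CustomerId = 42;
GO
SET STATISTICS XML OFF;
GO
```

In SQL Server Management Studio, clicking “Include Actual Execution Plan” (Ctrl+M) before executing accomplishes the same thing.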

Tips on how to tune procedures with actual execution plans:

  1. If you execute the stored procedure from a free tool like SQL Sentry Plan Explorer, you can even save off the plans (which will have green missing index requests where SQL Server thinks they would help), along with runtime stats about CPU usage, reads, and duration.
  2. Runtime stats are really important and can save you tuning time. When looking at execution plans, remember that all references to “costs” are estimates. A query could have a high estimated cost, but only take 2 milliseconds of duration. Usually that’s not worth your time to tune!
  3. Most procedures are parameterized. You want to run the procedure with real-world values for those parameters. You also want to test what happens when you run it with a different set of parameters that re-uses the execution plan generated by the first set.
  4. Always look at those missing index requests as suggestions. You may be able to improve upon them, and they may not always show up when an index change is needed! Look for expensive operations in queries with a long duration and ask yourself if an index might help.
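The parameter-reuse test in tip 3 can be sketched like this, again with hypothetical object names:

```sql
-- First execution compiles and caches a plan shaped for this value.
EXEC dbo.GetOrdersByCustomer @CustomerId = 42;

-- A later execution with a different value re-uses that cached plan,
-- even if this value returns a very different number of rows.
-- Compare the actual plans from both runs.
EXEC dbo.GetOrdersByCustomer @CustomerId = 7;

-- To see how the second value would behave with its own fresh plan,
-- force a one-off compile (doesn't replace the cached plan):
EXEC dbo.GetOrdersByCustomer @CustomerId = 7 WITH RECOMPILE;
```

If the re-used plan performs much worse for the second parameter set, you’ve found a parameter sniffing problem, which changes what kind of index (or code) fix you need.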

The biggest problem is that you can’t always run a procedure against production. Sometimes the procedure modifies data, sometimes it just impacts performance too much. You can work around this by restoring a recent backup to a different environment where it’s safe to test.
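A restore to a test instance is a standard `RESTORE DATABASE` — a sketch, where every file name and path is a placeholder for your environment:

```sql
-- Restore last night's production full backup under a new name
-- on a non-production instance. Paths/names are illustrative.
RESTORE DATABASE SalesDB_Test
FROM DISK = N'\\backupshare\SalesDB_Full.bak'
WITH MOVE N'SalesDB'     TO N'T:\Data\SalesDB_Test.mdf',
     MOVE N'SalesDB_log' TO N'T:\Logs\SalesDB_Test_log.ldf',
     RECOVERY,
     STATS = 10;
```

Keep in mind that a smaller test server may produce different plans (memory and CPU counts influence optimization), so statistics and data volume matter more than matching hardware exactly.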

If you can’t get actual execution plans with runtime stats, there are a couple of alternatives

Nearly as good: enable SQL Server 2016’s Query Store feature. It will collect runtime stats (CPU usage, duration, reads, writes) and execution plans from production. They aren’t actual plans (no actual rowcounts, etc.), but it’s still a lot of useful information. However, you have to be on SQL Server 2016, and Query Store is per-database, so you need to enable it in every database the procedure touches.
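Enabling Query Store is a per-database `ALTER DATABASE` — a sketch with illustrative settings (the database name is a placeholder; repeat for each database the procedure touches):

```sql
-- Turn on Query Store and set common options. MAX_STORAGE_SIZE_MB
-- caps how much plan/stats history is kept; 1024 is just an example.
ALTER DATABASE SalesDB SET QUERY_STORE = ON;
ALTER DATABASE SalesDB SET QUERY_STORE
    (OPERATION_MODE = READ_WRITE,
     QUERY_CAPTURE_MODE = AUTO,
     MAX_STORAGE_SIZE_MB = 1024);
```

Once it’s collecting, the built-in “Top Resource Consuming Queries” reports in Management Studio make it easy to find the statements inside your procedure.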

One way that sometimes works is to get the cached execution plan from memory in production along with the query execution stats (CPU usage, duration, reads, writes). You can do this using a free script like sp_BlitzCache from Brent Ozar Unlimited, or Glenn Berry’s free scripts from SQL Skills. However, the plan isn’t always in memory when you look, and these aren’t actual plans, either.
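If you’d rather query the plan cache directly than use a script, a sketch against the procedure-level DMV looks like this (the procedure name is a placeholder; remember the plan may have aged out, and `query_plan` is the estimated, compile-time plan):

```sql
-- Cached runtime stats and plan for one procedure, if it's in cache.
-- Times are in microseconds.
SELECT
    ps.execution_count,
    ps.total_worker_time   / ps.execution_count AS avg_cpu_us,
    ps.total_elapsed_time  / ps.execution_count AS avg_duration_us,
    ps.total_logical_reads / ps.execution_count AS avg_reads,
    qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE OBJECT_NAME(ps.object_id, ps.database_id) = N'GetOrdersByCustomer'
ORDER BY ps.total_worker_time DESC;
```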

Methods I would avoid

I wouldn’t spend much time looking at the “Missing Index DMVs” for this particular problem. In theory, you can record what indexes SQL Server has asked for, let things run, and then see what requests are new. But in practice, I’ve always found this information to be too incomplete and frustrating, because it doesn’t tell you how long things took, SQL Server doesn’t always ask for an index, and the requests aren’t always perfect. I find it much easier to have the execution plans and queries for reference — ideally actual plans, but at least estimated plans with runtime stats.
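For comparison, here’s what querying those DMVs looks like — and also why it frustrates: the output is server-wide, with no tie back to specific queries or durations:

```sql
-- Index requests SQL Server has logged since the last restart.
-- No query text, no duration -- just columns and rough impact.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON g.index_group_handle = s.group_handle
ORDER BY s.avg_user_impact * s.user_seeks DESC;
```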

I would avoid running a trace to collect execution plans in production. Whether with Profiler, Server Side Trace, or Extended Events, collecting execution plans unfortunately slows down your SQL Server, so you’re impacting performance for the server, and also skewing your measurement. (Vote up this Connect bug on the issue if you like traces and wish this were better. The bug is closed, but sometimes closed bugs are reopened and fixed.)


Comments

  • Question/Comment — Wouldn’t you also want to check out the impact of index changes made?

    If I tweak an existing index, what else uses it (and how was it impacted)?

    If I create a new one, do existing procedures and queries think it is more useful? Do they perform better or worse now?

    Or am I overthinking it and missing your point and the scope of the post?


    • Oh, that’s a great point if you’re changing the order of key columns in any way.

      One of the things I LOVE about Query Store is that it gives us a way to check which other queries are using an index, with data that isn’t cleared out by memory pressure or SQL Server restarts. Checking the plan cache is definitely good, but there’s always that big dose of uncertainty about what it might be missing.

  • Yes, and also consider looking at the index usage stats along with the query plan (not on 2016, so I haven’t utilized Query Store, but am very much looking forward to it) to get an idea of what is being used… Then you just need to determine what is using them if not in the plan cache (and not on 2016, I assume).


