Should You Rebuild or Reorganize Indexes on Large Tables? (Dear SQL DBA Episode 19)

on October 13, 2016

The bigger your indexes are, the harder your index maintenance falls. Is it better to rebuild big indexes? Or should you reorganize?

If you’re short on time, scroll down: an article with all the content is below the 23 minute video.

Here’s this week’s question:

Dear SQL DBA,

Any advice for rebuilding indexes on very large tables? We’ve got a large table with over 2 billion rows. Index rebuilds consume over 300GB of transaction log space even though we’re backing up the log every 3 minutes.

To solve this problem, should we reorganize instead of rebuild?

Any issues with never rebuilding indexes – just reorganizing them?

We are using SQL Server 2014 Enterprise Edition.

Rebuilding big indexes can cause big headaches

Index rebuilds have a lot going for them, but they also have some drawbacks.

ALTER INDEX REBUILD

Pros:

  • Can use parallelism, which can reduce the runtime (Enterprise Edition)
  • ONLINE option (Enterprise Edition)
  • Defragments all levels of the index - leaf and intermediate pages
  • Also updates statistics on the table with a FULLSCAN of all the data
  • Reapplies options such as fillfactor and compression, and gives you the option to change those settings
  • Minimal logging is available under some recovery models

Cons:

  • Single threaded only in Standard Edition
  • Even the ONLINE option requires an exclusive lock on the table at the end of the operation; when performed offline, the entire table is unavailable for the duration of the operation
  • Rollbacks can be long and excruciating when a rebuild fails or is cancelled
  • Causes queries to generate new execution plans when they run again
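
For example, on Enterprise Edition a rebuild can run online, in parallel, and change settings like fillfactor in the same statement. Here’s a minimal sketch, using a hypothetical index named IX_Orders_CustomerID on Sales.Orders:

ALTER INDEX IX_Orders_CustomerID ON Sales.Orders
REBUILD WITH (
    ONLINE = ON,       -- Enterprise Edition: keep the table available for most of the rebuild
    MAXDOP = 4,        -- cap (or allow) parallelism for this operation
    FILLFACTOR = 90,   -- a chance to change settings as part of the rebuild
    SORT_IN_TEMPDB = ON
);
GO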

Rebuilding giant indexes with Availability Groups or Database Mirroring is particularly tricky

With very large indexes, rebuilds take longer, generate more log, and have a bigger performance impact while they’re running.

If you’re using high availability features like Availability Groups or database mirroring that stream the log between replicas, generating a lot of log data very quickly can create problems.

Your replicas/mirrors may fall behind. Depending on the latency between replicas, the size of the indexes being rebuilt, and other activity in the database, they may be outside your Recovery Point Objective / Recovery Time Objective for a long time.

In this situation, it’s particularly attractive to drip changes into the log more slowly. One of the ways to do this is to use REORGANIZE for those large indexes.

You’re right to consider reorganizing!

Of course, things aren’t perfect when it comes to REORGANIZE. It has some downsides as well.

ALTER INDEX REORGANIZE

Pros:

  • Tends to “trickle” changes into the log more slowly (a pro only for specific situations)
  • Fully online in every edition
  • If cancelled or killed, it just stops where it is - no giant rollback
  • Does not cause plan recompilation
  • Honors/reapplies existing settings such as fillfactor and compression

Cons:

  • Single threaded only, regardless of edition - so it’s slower
  • Defragments only the leaf level of the index
  • Does not update statistics - you have to manage that yourself
  • Does not allow you to change settings such as fillfactor and compression
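
Here’s the equivalent minimal sketch for REORGANIZE, again using the hypothetical IX_Orders_CustomerID index. Because reorganizing doesn’t touch statistics, you have to update them yourself afterward:

ALTER INDEX IX_Orders_CustomerID ON Sales.Orders
REORGANIZE;  -- always online, single threaded, leaf level only
GO

-- REORGANIZE doesn't update statistics, so handle them separately
UPDATE STATISTICS Sales.Orders IX_Orders_CustomerID;
GO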

When it comes to giant indexes, the biggest downsides are related to the whole reason you’d use it in the first place: it’s slow!

  • That single thread goes slow and steady through the leaf level of your index, and it can easily eat up your whole maintenance window
  • While you can cancel the command and it stops without a giant rollback, there’s no built-in way to run it against an index for only a set amount of time. Your options are to cancel or kill it yourself (one way to script that is sketched below).
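
One rough workaround (a sketch, not a polished solution): start the REORGANIZE in one session, then from a second session find it and kill it when your window closes. Because reorganize keeps the work it has already done, you only lose the pages it hadn’t gotten to yet. The index name here is the hypothetical IX_Orders_CustomerID from earlier:

-- Session 1: start the reorganize (safe to interrupt)
ALTER INDEX IX_Orders_CustomerID ON Sales.Orders REORGANIZE;
GO

-- Session 2: when the maintenance window closes, locate the reorganize...
SELECT r.session_id, r.start_time, r.percent_complete
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE '%REORGANIZE%'
  AND r.session_id <> @@SPID;  -- don't match this query itself
GO

-- ...and stop it, substituting the session_id you found above
KILL 123;
GO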

You may need to exclude the largest indexes from automated rebuilds and handle them manually

When REBUILDs on large indexes can’t happen because they put replicas too far behind or have too heavy a performance impact, these indexes are typically REORGANIZED manually in spurts while a DBA team member is around to babysit.

It’s a bit like watching paint dry.

Going through this experience of babysitting large index reorganizes generally causes DBAs to:

  • Start researching ways to speed up the network between their replicas and the storage under their transaction logs.
  • Become more moderate about the amount of fragmentation they allow in their indexes. It may be fine to reorganize small indexes at 5% fragmentation or more and rebuild them at 30% or more, but those thresholds just don’t work for giant indexes when you have a small maintenance window or sensitive replicas/mirrors.
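
If you want to see where a given index falls against those thresholds, sys.dm_db_index_physical_stats will tell you. A minimal sketch, checking just the Sales.Orders table in LIMITED mode to keep the check itself cheap:

SELECT i.name AS index_name,
    ps.index_type_desc,
    ps.avg_fragmentation_in_percent,
    ps.page_count
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID('Sales.Orders'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
    AND i.index_id = ps.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC;
GO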

Minimal logging is available for REBUILD - but you may not be able to use the recovery model required

It’s worth talking about the “minimal logging” pro for rebuilds. If you don’t need point-in-time recovery, can afford to lose any changes that occur between full backups, and are in the SIMPLE recovery model, you benefit from this all the time.

If you need to limit data loss and are in the FULL recovery model, you may still have to switch to SIMPLE during some outages for large data changes. And during those outages, you might want to fit in some offline index rebuilds if you have the time, because they can be faster.

In the SIMPLE and BULK LOGGED recovery models, ALTER INDEX REBUILD doesn’t have to log every row. There are a few caveats:

  • This doesn’t help you if you’re using an AG or Mirroring - the FULL recovery model is required for those
  • Under BULK LOGGED, you’ll still have large transaction log backups after a large rebuild; they just pull the changed extents from the data files

The amount logged under BULK LOGGED and SIMPLE varies depending on whether you’re doing ONLINE or OFFLINE rebuilds. Kalen Delaney has test code and sample numbers in her post, “What Gets Logged for Index Rebuild Operations?”
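
If you’re not using an AG or mirroring and you can live with losing point-in-time recovery for that window, one common pattern (shown here only as a sketch - the database name and backup path are placeholders) is to switch to BULK LOGGED just for the rebuild and switch back right away:

ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;  -- placeholder database name
GO

ALTER INDEX IX_Orders_CustomerID ON Sales.Orders REBUILD;  -- minimally logged here
GO

ALTER DATABASE SalesDB SET RECOVERY FULL;
GO

-- Take a log backup right away to restart point-in-time recovery going forward.
-- This backup will still be large: it pulls the changed extents from the data files.
BACKUP LOG SalesDB TO DISK = N'X:\Backups\SalesDB_log.trn';  -- placeholder path
GO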

Any issues with never rebuilding indexes – just reorganizing them?


Rebuilding an index fixes up the whole structure. Picture an index as a triangle, with a root page, intermediate pages, and leaf pages. The rebuild command will defragment all those intermediate pages.

Reorganizing only works on the leaf pages. It doesn’t work on the intermediate pages between the root and the leaf.

Larger indexes have more intermediate levels and pages. These pages can get empty space on them and become out of order over time as well.

Fragmentation in intermediate pages isn’t usually a problem at all. One of the more common points of confusion for new DBAs is that intermediate pages in small indexes show up as fragmented immediately after a rebuild, no matter what they do. That’s actually normal.
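
If you’re curious what the intermediate levels actually look like, you can ask sys.dm_db_index_physical_stats for them by level. A sketch in DETAILED mode (expensive on a big index, so don’t run it casually), again against Sales.Orders:

SELECT ps.index_id,
    ps.index_level,   -- 0 = leaf; higher numbers are intermediate levels up to the root
    ps.page_count,
    ps.avg_fragmentation_in_percent,
    ps.avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
    DB_ID(), OBJECT_ID('Sales.Orders'), NULL, NULL, 'DETAILED') AS ps
ORDER BY ps.index_id, ps.index_level;
GO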

If you never ever rebuild a large index, you could potentially lose some performance due to massive fragmentation in the intermediate level of the indexes– but I think it’d likely be more a difference of microseconds than milliseconds per query.

If it were me, I’d generally consider rebuilding a problem “giant” index during outage/maintenance windows every six months or a year. But that’s my inner paranoid, tinfoil-hat-wearing DBA talking.

But hey, in some systems microseconds do matter! In that case, breaking up these giant indexes into partitions is probably desirable (more on that soon).

Tips

Don’t REBUILD ALL on large tables

SQL Server allows you to run ALTER INDEX ALL REBUILD, like this:

ALTER INDEX ALL ON Sales.Orders REBUILD;
GO

The syntax is convenient, but as Books Online explains, “When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction.”

The bigger the transaction, the worse the rollback can be. You definitely want to handle the indexes individually.
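
Handling them individually just means one statement - and one transaction - per index. A sketch with made-up index names:

-- One transaction per index: a failure or cancellation only rolls back the index in flight
ALTER INDEX PK_Orders ON Sales.Orders REBUILD;
GO
ALTER INDEX IX_Orders_CustomerID ON Sales.Orders REBUILD;
GO
ALTER INDEX IX_Orders_OrderDate ON Sales.Orders REBUILD;
GO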

Consider waiting between operations

This is a little counter-intuitive when you have a short maintenance window, but it can ease some pains.

Adding a “waiting” period between index commands can reduce the impact on the instance and on Availability Group replicas or Database Mirrors. Many popular index maintenance scripts have an option for this.
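
If you’re scripting it yourself rather than using one of those maintenance solutions, the simplest version is a WAITFOR DELAY between statements. A sketch, again with hypothetical index names:

ALTER INDEX IX_Orders_CustomerID ON Sales.Orders REORGANIZE;
GO

-- Give log backups, replica redo, and the rest of the workload five minutes to catch up
WAITFOR DELAY '00:05:00';
GO

ALTER INDEX IX_Orders_OrderDate ON Sales.Orders REORGANIZE;
GO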

Long term, table partitioning can break up large chunks of data

I’m not a fan of partitioning for everything – I did a video earlier this year on Why Table Partitioning Doesn’t Speed Up Query Performance.

But table partitioning can be excellent for data management. Everything we’ve talked about so far has downsides and doesn’t really solve the problem at its root. Partitioning isn’t easy, but in the long term it can really help.

Two things to know:

  1. Table partitioning is an Enterprise feature
  2. Online partition level rebuilds were added in SQL Server 2014. Prior to that you could rebuild entire partitioned indexes online, but partition level rebuilds were offline.
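
For example, on SQL Server 2014 and later you can rebuild a single partition online. A sketch, assuming the hypothetical IX_Orders_CustomerID index is partitioned and partition 3 is the one that needs attention:

ALTER INDEX IX_Orders_CustomerID ON Sales.Orders
REBUILD PARTITION = 3
WITH (ONLINE = ON);  -- partition-level ONLINE rebuilds require SQL Server 2014 or later
GO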

By breaking up the table into smaller chunks, partitioning makes it much easier to take advantage of all the strengths of rebuilds without so many downsides. It’s worth investigating if this might be an option (but be careful about query performance).