Should You Rebuild or Reorganize Indexes on Large Tables? (Dear SQL DBA Episode 19)

The bigger your indexes are, the harder your index maintenance falls. Is it better to rebuild big indexes? Or should you reorganize?

If you’re short on time, scroll down: an article with all the content is below the 23 minute video.

Here’s this week’s question:


Any advice for rebuilding indexes on very large tables? We’ve got a large table with over 2 billion rows. Index rebuilds consume over 300GB of transaction log space even though we’re backing up the log every 3 minutes.

To solve this problem, should we reorganize instead of rebuild?

Any issues with never rebuilding indexes – just reorganizing them?

We are using SQL Server 2014 Enterprise Edition.


Increasingly Fragmented

Rebuilding big indexes can cause big headaches

Index rebuilds have a lot going for them, but they also have some drawbacks.

Pros:

  • Can use parallelism, which can reduce the runtime (Enterprise Edition)
  • ONLINE option (Enterprise Edition). Note: this requires an exclusive lock on the table at the end of the operation.
  • Defragments all levels of the index – leaf and intermediate pages
  • Also updates statistics on the table with a FULLSCAN of all the data
  • Reapplies options such as fillfactor and compression, and gives you the option to change those settings
  • Minimal logging is available under some recovery models

Cons:

  • Single threaded only in Standard Edition
  • When performed offline, the entire table is unavailable for the duration of the operation
  • Rollbacks can be long and excruciating when a rebuild fails or is cancelled
  • Causes queries to generate new execution plans when they run again
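As a concrete example, a rebuild that uses several of those options might look like this (the index, table, and settings here are all hypothetical):

```sql
-- Hypothetical example: rebuild one index online and in parallel,
-- changing fillfactor and compression in the same operation.
ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable
    REBUILD WITH (
        ONLINE = ON,            -- Enterprise Edition only
        MAXDOP = 4,             -- cap the parallelism
        FILLFACTOR = 90,
        DATA_COMPRESSION = PAGE
    );
```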

Rebuilding giant indexes with Availability Groups or Database Mirroring is particularly tricky

With very large indexes, rebuilds take longer, generate more log, and impact performance more while they’re running.

If you’re using high availability features like Availability Groups or database mirroring that stream the log between replicas, generating a lot of log data very quickly can create problems.

Your replicas/mirrors may fall behind. Depending on the latency between replicas, the size of the indexes rebuilt, and other operations in the database, they may be outside your Recovery Point Objective / Recovery Time Objective for a long time.

In this situation, it’s particularly attractive to drip changes into the log more slowly. One of the ways to do this is to use REORGANIZE for those large indexes.

You’re right to consider reorganizing!

Of course, things aren’t perfect when it comes to REORGANIZE. It has some downsides as well.

Pros:

  • Tends to “trickle” changes into the log more slowly (a pro only in specific situations)
  • Fully online in every edition
  • If cancelled or killed, it just stops where it is – no giant rollback
  • Does not cause plan recompilation
  • Honors/reapplies existing settings such as fillfactor and compression

Cons:

  • Single threaded only, regardless of edition, so it’s slower
  • Defragments only the leaf level of the index
  • Does not update statistics – you have to manage that yourself
  • Does not allow you to change settings such as fillfactor and compression
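For comparison, a reorganize is a much simpler command – and since it skips statistics, you have to follow up yourself (names are hypothetical):

```sql
-- Always online, single threaded, stops in place if cancelled.
ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable REORGANIZE;

-- REORGANIZE doesn't touch statistics, so update them separately.
UPDATE STATISTICS dbo.BigTable IX_BigTable_Col1 WITH FULLSCAN;
```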

When it comes to giant indexes, the biggest downsides are related to the whole reason you’d use it in the first place: it’s slow!

  • That single thread goes slow and steady through the leaf of your index, and it can easily take up your whole maintenance window
  • While you can cancel the command and it stops without a giant rollback, there’s no built-in way to run it against an index for only a specific amount of time. It’s cancel or kill.

You may need to exclude the largest indexes from automated rebuilds and handle them manually

When REBUILDs on large indexes can’t happen – because they put replicas too far behind or have too heavy a performance impact – these large indexes are typically REORGANIZED manually, in spurts, while a DBA team member is around to babysit.

It’s a bit like watching paint dry.

Going through this experience of babysitting large index reorganizes generally causes DBAs to:

  • Start researching ways to speed up the network between their replicas and the storage underneath their transaction logs.
  • Become more moderate about the amount of fragmentation they allow in indexes. It may be fine to reorganize small indexes when they’re 5% fragmented or more and rebuild them at 30% or more, but those thresholds just don’t work for giant indexes when you have a small maintenance window or sensitive replicas/mirrors.
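If you’re tuning those thresholds, one way to see where you stand is to query the fragmentation DMV. This is a sketch, not a prescription – the 1,000-page filter is an arbitrary assumption to skip tiny indexes:

```sql
-- Check leaf-level fragmentation and size for indexes in the current database.
-- LIMITED mode is the fastest scan; it reads only the pages above the leaf.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;
```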

Minimal logging is available for REBUILD – but you may not be able to use the recovery model required

It’s worth talking about the “minimal logging” pro for rebuilds. If you’re in the SIMPLE recovery model – meaning you don’t need point-in-time recovery and can lose any changes that occur between full backups – you benefit from this all the time.

If you need to limit data loss and are in the FULL recovery model, you may still have to switch to SIMPLE during some outages for large data changes. And during those outages, you might want to fit in some offline index rebuilds if you have the time, because they can be faster.

In the SIMPLE and BULK LOGGED recovery models, ALTER INDEX REBUILD doesn’t have to log every row. There are a few caveats:

  • This doesn’t help you if you’re using an Availability Group or Database Mirroring – the FULL recovery model is required
  • Under BULK LOGGED, you’ll still have large transaction log backups after a large rebuild; they just pull the data from the data files

The amount logged under BULK LOGGED and SIMPLE varies depending on whether you’re doing ONLINE or OFFLINE rebuilds. Kalen Delaney has test code and sample numbers in her post, “What Gets Logged for Index Rebuild Operations?”
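If you do have a maintenance window where you can give up point-in-time recovery, a minimally logged rebuild might be sketched like this. Database, index, and path names are all made up, and this isn’t valid for a database in an Availability Group or mirroring session:

```sql
-- Hypothetical sketch: minimize logging for one big offline rebuild.
ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable REBUILD;

ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- Take a log backup right away to restart the log backup chain.
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';
```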

Any issues with never rebuilding indexes – just reorganizing them?

Rebuilding an index fixes up the whole structure. Picture an index as a triangle, with a root page, intermediate pages, and leaf pages. The rebuild command will defragment all those intermediate pages.

Reorganizing only works on the leaf pages. It doesn’t work on the intermediate pages between the root and the leaf.

Larger indexes have more intermediate levels and pages. These pages can get empty space on them and become out of order over time as well.

Fragmentation in intermediate pages isn’t usually a problem at all — one of the more common confusions of new DBAs looking at fragmentation is that they see that intermediate pages in small indexes are fragmented immediately after a rebuild, no matter what they do. That’s actually normal.

If you never, ever rebuild a large index, you could potentially lose some performance due to massive fragmentation in the intermediate levels of the index – but I think it’d likely be more a difference of microseconds than milliseconds per query.

If it were me, I’d generally consider rebuilding problem “giant” indexes during outage/maintenance windows every six months or a year. But that’s my inner paranoid, tinfoil-hat-wearing DBA talking.

But hey, in some systems microseconds do matter! In that case, breaking up these giant indexes into partitions is probably desirable (more on that soon).

Don’t REBUILD ALL on large tables

SQL Server allows you to run ALTER INDEX ALL REBUILD, like this:
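For example (table name hypothetical):

```sql
ALTER INDEX ALL ON dbo.BigTable REBUILD;
```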


The syntax is convenient, but as Books Online explains, “When ALL is specified, all indexes on the table are dropped and rebuilt in a single transaction.”

The bigger the transaction, the worse the rollback can be. You definitely want to handle the indexes individually.

Consider waiting between operations

This is a little counter-intuitive when you have a short maintenance window, but it can ease some pains.

Adding a “waiting” period between index commands can reduce impact on the instance and on Availability Group replicas or Database Mirrors. Many popular index maintenance scripts have an option for this.
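A sketch of what that looks like in a hand-rolled script (the index names and the delay length are made up):

```sql
-- Rebuild indexes one at a time, pausing so log backups and
-- AG replicas / mirrors can catch up between operations.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD WITH (ONLINE = ON);

WAITFOR DELAY '00:05:00';   -- let the instance breathe for five minutes

ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (ONLINE = ON);
```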

Long term, table partitioning can break up large chunks of data

I’m not a fan of partitioning for everything — I did a video earlier this year on Why Table Partitioning Doesn’t Speed Up Query Performance.

But table partitioning can be excellent for data management. Everything we’ve talked about so far has had downsides and hasn’t really provided a deep solution for the problem– partitioning isn’t easy, but long term it can really help with these problems.

Two things to know:

  1. Table partitioning is an Enterprise feature
  2. Online partition level rebuilds were added in SQL Server 2014. Prior to that you could rebuild entire partitioned indexes online, but partition level rebuilds were offline.

By breaking up the table into smaller chunks, partitioning makes it much easier to take advantage of all the strengths of rebuilds without so many downsides. It’s worth investigating if this might be an option (but be careful about query performance).
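For instance, on SQL Server 2014 and higher you can rebuild just one partition online instead of the whole index (names and partition number are hypothetical):

```sql
-- Rebuild only partition 5 of the index, online (SQL Server 2014+).
ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable
    REBUILD PARTITION = 5 WITH (ONLINE = ON);
```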

Ask a question!

Head to, I’m always looking for good topics.

18 Comments

  • That was an awesome video. If I see no performance issues, is it a best practice to rebuild indexes daily? I am currently using Ola Hallengren’s script.

  • Great post! A topic dear to my heart as I recall some previously-frequent on-call wakeups due to drives filling up during index maintenance…

    One thing to consider with rebuilds on large indexes is data file space. While the reorg will just do its work on a sliding window of the original object, if you’re rebuilding a 300GB index you’ll need the whole original and the new structure coexisting at least temporarily while it is built, so if you’re tight on drive space, it can be difficult to do a non-partitioned rebuild.

    Another thing I’ve noticed, if you use the old recommendation from Microsoft that Ola Hallengren implements by default…the >5%, reorg, >30%, rebuild…you may find that many of your large indexes never get rebuilt anyway. *Most* really huge tables tend to be less volatile than the smaller ones and if you’re reorging every night as it trips over 5% you’re not likely to see it jump from under 5% to 30% very often. Still, I have a 100GB table in a vendor database that fragments to 99%, and 70% page free space, almost instantly…thank you, GUID clustering key. We just let it fragment at this point, not worth it.

    Also, clustered columnstores use the same syntax (ALTER INDEX … REBUILD / REORGANIZE) but they behave in completely different ways. REORGANIZE on a clustered columnstore will never deal with what we kind of understand to be the equivalent of fragmentation there…that is, the compressed rowgroups are essentially read-only and changes to the data are reflected by marking a row deleted in the deleted bitmap, and then if an update happens, the row is marked deleted and then a new row inserted into the delta store. So if you have a lot of updates or deletions on a columnstore, the only way to get your space back is to REBUILD, which is going to discard all the deleted rows at that point. REORGANIZE just launches the tuple mover which is how the delta store rows get compressed and put into a new rowgroup…but that happens every 5 minutes as I recall, so its not particularly useful to me.

    Fun stuff…as long as it doesn’t wake you up at 2am!

  • My all-time favorite SQL mentor. 🙂 Thanks for the wonderful video and for summing up all the relevant details.

  • Another option to consider is instead of rebuilding or reorganizing indexes… can you get away with just updating statistics, either with a full scan or sample size? You might still see performance gains from simply updating statistics instead of doing the painful full REBUILD or less-painful-but-still-has-some-pain REORGANIZE on the large table.

    Also, was this a typo or intentional pun (I put [brackets] around the word in question)?
    “Index rebuilds have a [log] going for them, but they also have some drawbacks”
    I’m assuming you meant “lot” going for them. Otherwise, great pun!

    • I forgot to mention that doing statistics updates (in lieu of rebuild/reorg) would likely cost you less I/O and log writes than a rebuild / reorg would cost.

      • I would say the benefits of this approach depend upon your objectives…just updating stats will maintain your statistics, but it won’t deal with index fragmentation at all. If that’s something you’re willing to accept it might be fine, and it will indeed use a lot less I/O, particularly lean on writes. However, you don’t defragment indexes in order to update statistics, that just happens to be an additional benefit of a rebuild. If your stats are stale but your index is not heavily fragmented, yes, a stats update is much better than a rebuild…

        In many cases you can accept leaving index fragmentation in place as the “least bad” of the options, but bear in mind the downsides…if you have a heavily fragmented index due to page splits, you could have (as I have on one table) 70% of every page being blank white space. So this table is three times as large as it would have been if defragmented…it requires three times as much I/O to scan, three times as much space to store in the buffer cache, etc. So you can pay a steep price in storage, memory, and I/O if you let fragmentation run completely wild.

        Just my 2c!

  • “Index rebuilds have a log going for them” …

    Noticed a typo… It seems to work though and is kind of humorous.

  • Great video Kendra.

    I have a very similar situation and your advice was really helpful.

  • I don’t see the video in your posts, only the text – any suggestions? For instance, right under “If you’re short on time, scroll down: an article with all the content is below the 23 minute video.”

    is blank space

  • Thanks Kendra. This concisely sums up and reaffirms so many of the woes which have led us to exclusively REORG our large indexes (weekends only)! I have a burning question, however: Thinking about an index reorganize that doesn’t complete, and that defragments only about 50% of the fragmented pages before we kill it at the end of our maintenance window ….. would this leave one end of the table at, say, 1-5% fragmented and the other end of the table still heavily fragmented? Would it be better to fully defragment one large index, picking up from last weekend’s unfinished reorg, before moving onto this weekend’s newly identified, heavily fragmented indexes again? Or is the fragmentation in a partially reorganized, 35% fragmented index just as random as an index that has become 35% fragmented through page splits over time?! Many thanks in advance.

    • Hi Jimmy,

      I am doing a free webcast on Nov 16 on defragmenting indexes. One of the things I wanted to cover was why I’m excited about the new resumable index rebuild operation in 2017, and compare restarting it with restarting an alter index reorganize – which sounds like it matches up with your question! I’m going to put together some demos which hopefully will shed some light on your question. So stay tuned 🙂


  • […] Do we rebuild or reorganize? That’s not something I’ll dig into in this post. In a nutshell, reorganizing an index simply redistributes data among the pages already allocated to the index while rebuilding is a more drastic operation that actually rebuilds the index. Which makes more sense depends on many factors including SQL Server edition, whether or not the index can be offline, whether or not we need to be able to interrupt the command, etc etc. A great post by Kendra Little on some of these issues can be found here. […]

  • Kendra,

    Thank you for this post. Good information. It clarified some doubts I had before starting the rebuild and reorg of terabyte-sized tables. I am wondering: if there is enough time in the maintenance window, instead of rebuilding the index and causing the log growth, would it be better to drop the existing index and build it fresh?

  • Kendra thank you for your article.
    What do you think about the stats locking caused by index reorganize? As I recall, index reorganize takes an exclusive lock on the statistics metadata, and queries whose plans start compiling during that lock end up waiting for a shared lock on the stats object. How do you troubleshoot this?

