Shrinking SQL Server Data Files - Best Practices, and Why It Sucks
I’ve gotten a few questions about shrinking SQL Server data files lately. What’s the best way to get shrink to run? And why might it fail in some cases?
Traditionally, every time you ask a DBA how to make shrinking suck less, they start ranting about how shrinking is bad and you just shouldn’t do it. Sometimes it sounds kinda angry.
What people are trying to say is that shrinking data files is generally slow, frustrating, and painful for you.
Shrinking data files sucks, and you don’t really have many ways to make it suck less
Here’s what you’re up against when it comes to shrinking:
- Shrinking can cause blocking while it runs. Here’s a post I wrote a while back with a demo script to reproduce the blocking.
- Shrinking may stop running and not tell you why.
- In one case, DBCC SHRINKFILE was stopping because it was hitting a deadlock and being declared the deadlock victim. That message didn’t show in the messages window for the session running SHRINKFILE though! Fun? (Not really.)
- Shrinking fragments your indexes - if the indexes are large and you run a rebuild command on them, you may just regrow the data files anyway.
- If shrink has to move around LOB data, it’s super slow. Paul Randal explains why here.
- Note that you might hit this even if the data you deleted doesn’t contain LOB data. If there are LOB pages closer to the end of the file, SQL will pick up those pages and move them towards the beginning of the file.
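If you do end up running a shrink, you can at least watch it. Here’s a sketch of a query against the `sys.dm_exec_requests` DMV that shows a running `DBCC SHRINKFILE` (it appears with the command name `DbccFilesCompact`), whether it’s currently blocked, and its rough progress:

```sql
-- Sketch: watch an in-flight DBCC SHRINKFILE and see whether it's blocked.
SELECT
    r.session_id,
    r.command,              -- a running shrink shows up as 'DbccFilesCompact'
    r.status,
    r.blocking_session_id,  -- non-zero means the shrink is waiting on another session
    r.wait_type,
    r.percent_complete      -- shrink reports rough progress here
FROM sys.dm_exec_requests AS r
WHERE r.command = N'DbccFilesCompact';
```

Note that `percent_complete` is only a rough indicator – LOB movement can make the last few percent crawl.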
It’s not your fault that shrinking has all this baggage. You just want to have a smaller database. But these things are why you generally want to avoid shrinking whenever possible.
Most senior DBAs proactively keep free space in the database and regularly archive data
Planning beats shrinking, every day.
Data growth and grooming should never be dictated by disk space. The data is more important than the disk space!
- Regularly clean up data that’s truly no longer needed.
- If you can have a nightly process that removes old data in small batches, rather than doing it once a quarter, that’s usually better for performance. It varies depending on when your database is in use, and whether your data removal process might block users.
- Sometimes it’s desirable for a business to quickly remove data that’s no longer needed, for legal reasons. This varies a lot by industry, though.
- Measure data growth regularly, and start to estimate how much it’s likely to grow in the future.
- Keep free space in the database to allow for data growth without file growths.
- This means proactively requesting more disk space and proactively growing the files. You can do it quarterly or once or twice a year – whatever works best for you, but set up reminders and don’t skip it.
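For the nightly cleanup idea above, a batched delete keeps each transaction small so you’re less likely to block users or bloat the log. Here’s a minimal sketch – the table, column, and retention period (`dbo.EventLog`, `EventDate`, one year) are made-up examples:

```sql
-- Hypothetical nightly cleanup: delete old rows in small batches
-- so we never hold locks for very long.
DECLARE @BatchSize int = 1000,
        @RowsDeleted int = 1;

WHILE @RowsDeleted > 0
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.EventLog
    WHERE EventDate < DATEADD(YEAR, -1, GETUTCDATE());

    SET @RowsDeleted = @@ROWCOUNT;  -- loop ends when a batch deletes nothing

    WAITFOR DELAY '00:00:01';       -- brief pause to let other work through
END;
```

Make sure there’s an index supporting the `WHERE` clause, or each batch will scan the table.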
Charting data growth and proactively managing the space ahead of time keeps you out of reactive “shrink mode”.
It also makes you look good to your management to be on top of this!
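If you want to start charting growth, one simple approach is to log file sizes on a schedule and trend them over time. A sketch using `sys.master_files` (sizes are stored in 8KB pages, so we convert to MB):

```sql
-- Sketch: capture current data and log file sizes for every database,
-- suitable for inserting into a logging table on a schedule.
SELECT
    DB_NAME(mf.database_id) AS database_name,
    mf.name                 AS logical_file_name,
    mf.type_desc,                          -- ROWS or LOG
    mf.size * 8 / 1024      AS file_size_mb -- size is in 8KB pages
FROM sys.master_files AS mf
ORDER BY database_name, mf.file_id;
```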
Sometimes you do have to shrink data files
Sure, shrinking stinks, and you avoid it when you can. But sometimes you have a one-time archive of a lot of data. You know that you aren’t going to reuse the space in the next year, and it’s enough space that shrinking will make restoring the database elsewhere easier.
In those cases:
- Consider workarounds. Sometimes it’s less work to import the remaining data into a new database during an outage, depending on how much you’re removing and how much is left.
- This could also allow you to carefully plan a nice filegroup and file layout in the new database.
- Identify low-use times when you can run the shrink.
- Use DBCC SHRINKFILE and set a specific, targeted size for the file you’re shrinking.
- Watch your backup jobs, and make sure they’re succeeding and not taking way longer than normal.
- Plan to leave empty space in your data files to allow for growth over the next year, and to allow index rebuilds to work if needed after the shrink is done. No need to over-shrink and then re-grow.
- Watch for blocking if you don’t have an outage window.
- If you need to know how long the shrink is going to take, restore a copy of the database elsewhere and test ahead of time to get an idea – but it’s still going to be a rough estimate.
- Knowing how much LOB data it’s going to have to move is really hard from the outside. Blocking makes a difference as well, as does hardware and disk speed, so it’s really tough to estimate.
- Make sure you’re running DBCC CHECKDB regularly. Because you should be doing that! (No, shrinking shouldn’t particularly cause corruption, but a little caution ain’t bad, either.)
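To make the “specific, targeted size” point concrete, here’s a sketch of a targeted shrink. The database name, logical file name, and the target size are made-up examples – pick a target that leaves room for next year’s growth plus index rebuilds:

```sql
-- Sketch: shrink one data file to a specific target size.
-- 'MyDatabase' and 'MyDatabase_Data' are hypothetical names.
USE MyDatabase;
GO
DBCC SHRINKFILE (N'MyDatabase_Data', 256000);  -- target size in MB
GO
```

Shrinking in several smaller steps toward your target (rather than one giant shrink) can also make it easier to stop and resume around busy periods.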
And maybe find a show to binge-watch on Netflix while your shrink is running. Something that just takes a couple of brain cells to watch, and which you wouldn’t mind pausing if things get funky.