Niko Neugebauer’s post I reference near the end of this video about a bug fix for partitioned columnstore in SQL Server 2016 is here.
Tomas has a question: what about partitions on tables with a columnstore index? That is a fantastic question, Tomas, and it is something we are going to get to in this section, but only in a few slides. We are going to get there, and it is actually a great point about columnstore indexing.
Where does table partitioning shine?
I do think this is an awesome feature, and I think it has a huge amount to offer us.
Loading lots of data
One of the problems we can have is: I’ve got this giant table and I need to load a lot of data into it, but I don’t get a lot of downtime. In fact, maybe I don’t get any downtime at all, and I need to put this data in the table while people are querying it. Inserting lots of data, or sometimes even a little bit of data, into a live table can cause blocking between us and the people who are reading the table. Now, there are other ways to work around this, like snapshot isolation; there are different things we can do. But we want to make this easy and simple.
If we partition the table, we have the ability to create a staging table separate from the partitioned table. We do need to create it on the same filegroup where we want the data to live in the partition. When we create the partitioning function and then apply a partitioning scheme to it, we specify where each partition lives; each partition has a home. We need to create the staging table on the same filegroup as the partition the data is going to land in. We insert the data into this staging table and create indexes on it that match our partitioned table. The users haven’t seen any of this yet; it’s just an individual table at this point.
We also create constraints on this staging table, on things like the partitioning key. These are check constraints, so SQL Server knows exactly what range of data is in this table. When we switch, we can do a simple metadata operation that says: hey, take this chunk of data in the staging table and make it part of the partitioned table, in the proper partition, which is on the same filegroup.
And those constraints matter: SQL Server will look at the constraints on the table, and at the definition of the partition function, and conclude, oh yeah, that’s the right data for this partition. It’s able to do a metadata switch, and suddenly all of this data is part of the partitioned table. This can be extremely powerful in some situations. It is true that to switch the data in, it does have to take an exclusive lock on the table while it makes that metadata change. It says: nope, this is part of the partitioned table now.
So, there can be blocking when we’re doing that switch. We got new options in SQL Server 2014 where we can say, okay, wait at low priority so that I don’t cause a big blocking chain. This can be huge for: I want to load a whole lot of data and just make it suddenly appear in the table.
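The switch-in pattern above can be sketched in T-SQL. All object names, filegroups, boundary values, and partition numbers here are hypothetical, and the WAIT_AT_LOW_PRIORITY clause requires SQL Server 2014 or higher:

```sql
-- Staging table on the same filegroup as the target partition
-- (names and schema are made up for illustration)
CREATE TABLE dbo.SalesStaging
(
    SaleDate date  NOT NULL,
    Amount   money NOT NULL
) ON [FG_Sales_2024_03];

-- Bulk load dbo.SalesStaging and build indexes matching dbo.Sales here...

-- A trusted check constraint on the partitioning key proves to SQL Server
-- that every row belongs in the target partition's range
ALTER TABLE dbo.SalesStaging WITH CHECK
    ADD CONSTRAINT CK_SalesStaging_SaleDate
    CHECK (SaleDate >= '20240301' AND SaleDate < '20240401');

-- Metadata-only switch into partition 3 of the partitioned table;
-- WAIT_AT_LOW_PRIORITY controls how the schema modification lock waits
ALTER TABLE dbo.SalesStaging
    SWITCH TO dbo.Sales PARTITION 3
    WITH (WAIT_AT_LOW_PRIORITY
          (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF));
```

Because the switch only updates metadata, it completes in an instant once the lock is granted, no matter how many rows the staging table holds.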
Similarly, I can have problems deleting a lot of data from a live table
Boy, can I have a lot of problems. Let’s say I’ve got a large table, and I need to delete a month’s worth of data, or some other large amount. Doing this against a live table can use a lot of resources, right? I’m going to have to log all those deletes, and I’m going to have to figure out how to avoid blocking. It can be a real pain point. But as long as the amount of data I want to delete is an entire partition’s worth, with a partitioned table I can say: take this partition and switch it out to an empty table on the same filegroup.
And suddenly it’s gone. I do have to get that high-level lock that says: hey, I’m modifying this; all the data in this partition has moved. I can also switch that data into another partitioned table, right? Just being able to move data like that can be very powerful. In SQL Server 2016 and higher, I even have the ability to simply truncate the data in a partition.
Very, very powerful. Very, very cool.
You can truncate multiple partitions at once, as long as you don’t have any non-aligned indexes on the table. Very, very cool.
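Here is a sketch of the switch-out and partition-truncate options (table names and partition numbers are hypothetical):

```sql
-- Switch a whole partition out to an empty archive table on the
-- same filegroup: a metadata-only operation, not a row-by-row delete
ALTER TABLE dbo.Sales
    SWITCH PARTITION 2 TO dbo.SalesArchive;

-- SQL Server 2016+: truncate one or more partitions in place,
-- as long as all indexes on the table are aligned
TRUNCATE TABLE dbo.Sales WITH (PARTITIONS (2, 3));
```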
So, that is huge for managing data, right? These are not just like, I’m running a query, this is: I want to load lots of data, I want to delete lots of data.
Partitioning can reduce your index maintenance
Similarly, maintenance. I have a table with billions of rows that takes up many, many gigabytes. The bigger an index gets, the harder maintenance gets. Rebuilding large indexes, either clustered or nonclustered, can take large memory grants, lots of disk access, and lots of CPU. So giant indexes mean longer and longer maintenance, and when people want to use our system increasingly 24/7, we’re left asking: how am I going to do this?
Well, partitioning a table can be amazing. In certain situations, partitioning a table may mean that partitions never get fragmented. If I have a fact table, for example, with new data coming in, and I’m partitioning on a date range, then in my older date partitions the data is never changing. I just have new data coming in for my fact table.
So, suddenly, I now have a situation where maybe I don’t even have to rebuild these indexes because they’re not getting fragmented ’cause they’re all on a range of dates and the data just isn’t changing. In some situations, maybe I have an active partition and I only need to rebuild data in that partition, this can absolutely be fantastic and can simplify my life.
Even if I have data changing in a lot of partitions, we have online partition-level rebuilds in recent versions of SQL Server (2014 and higher). We haven’t had this forever, but I can at least now break the operation into smaller chunks instead of one rebuild of the whole huge, giant thing.
But, I can iterate through the partitions and just deal with them on a smaller level doing smaller units of work that are less disruptive to our customers, potentially to our availability group secondaries, right? We’re putting smaller amounts of index rebuild modifications into our transaction log at once.
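A single step of that iteration might look like this (index name and partition number are hypothetical; online partition-level rebuilds require SQL Server 2014 or higher):

```sql
-- Rebuild only partition 4 of the index, online, waiting at low
-- priority for up to a minute before giving up on the lock
ALTER INDEX IX_Sales_SaleDate ON dbo.Sales
    REBUILD PARTITION = 4
    WITH (ONLINE = ON
          (WAIT_AT_LOW_PRIORITY
           (MAX_DURATION = 1 MINUTES, ABORT_AFTER_WAIT = SELF)));
```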
It can be very, very powerful for maintenance and for managing these large chunks of data.
This also applies to corruption checking, but I have to be a little bit clever
I can run DBCC CHECKTABLE in SQL Server. Let’s say I have a huge database, many terabytes of data, and CHECKDB just takes forever. I can write my own custom scripts to run CHECKTABLE, plus the few other things CHECKDB does besides checking tables. But in some databases, even running CHECKTABLE is just too much work to do at once, because even if the table is partitioned, the partitioned table may be huge.
We do not have the ability to check an individual partition. Some folks have asked for this, but it’s never happened. This is one of those things where, since we do constantly get improvements in SQL Server, I keep asking: oh, did we get that yet? Okay, I don’t think we’ve gotten that yet.
We do have a work around. Even though we don’t have check partition, we do have CHECKFILEGROUP.
I have options when I create a partitioned table. I can use one filegroup for all of my partitions if I want, or one filegroup per partition; I get to specify where to put each and every partition. I could put them all in one place or lay them out however I want. Now, I don’t always recommend one filegroup per partition, because sometimes just managing all those filegroups is a big ol’ pain. But however I lay it out, with giant tables I often don’t want to use one big filegroup for everything, even if I have a fancy SAN that does storage sharing, because I may want to use DBCC CHECKFILEGROUP.
Breaking the data up into digestible-sized chunks gives me the ability to not have to check the whole table at once. I can break up my corruption checking so that in my ever-shrinking maintenance window I am able to check for corruption, and that can be really useful.
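A minimal sketch, assuming a hypothetical filegroup name:

```sql
-- Check just one filegroup's allocation and consistency tonight;
-- rotate through the remaining filegroups on later nights
DBCC CHECKFILEGROUP (N'FG_Sales_2024_03') WITH NO_INFOMSGS;
```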
Table partitioning in these giant databases can also help with our backups
If I have a database that’s three terabytes, four terabytes, five terabytes, and now, oh, we’re up to 10 terabytes: backing up all that data can take longer and longer and longer, depending on what technologies we’re using, how we want to store our backups, and how many copies of our backups we want to keep. Taking those backups can be a huge undertaking.
Depending on the pattern of our tables, if we have a partitioned table where we’re partitioning on a key and large amounts of the table aren’t changing, and we’re spreading the partitions across different filegroups, we have the ability to make some of those filegroups read-only. Then we can move to filegroup-level backup, and for those read-only filegroups, we know the data isn’t changing there.
They’re read-only, so we can guarantee it. So I can back up those read-only filegroups, though I do have to be careful to keep those backups safe. I’m going to want to test them and make sure they don’t fall prey to disk corruption. I need to take some care, right, because even a good backup can have the disk corrupt underneath it.
But I don’t have to back up those read-only filegroups constantly, and that can make my regular backups much shorter, too. So managing data, living with these databases, and getting away from these problems can be made much easier with partitioning. It can be very, very cool and make your life better.
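The read-only filegroup pattern might look like this (the database name, filegroup name, and backup paths are all hypothetical):

```sql
-- Once a filegroup holds only closed-out partitions, make it read-only
ALTER DATABASE SalesDB MODIFY FILEGROUP [FG_Sales_2023] READ_ONLY;

-- Back up the read-only filegroup once (and re-verify it over time)
BACKUP DATABASE SalesDB
    FILEGROUP = 'FG_Sales_2023'
    TO DISK = 'X:\Backups\SalesDB_FG_Sales_2023.bak';

-- Routine backups can then cover only the read-write filegroups
BACKUP DATABASE SalesDB READ_WRITE_FILEGROUPS
    TO DISK = 'X:\Backups\SalesDB_ReadWrite.bak';
```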
And, getting back to columnstore (here comes the exception!)
We have cool new indexes in SQL Server. So far, I’ve been talking about rowstore indexes, those b-tree indexes. What if we’re using the newer columnar technology? It can be fantastic, especially for queries that do aggregates and want to scan lots of data. Well, one thing about columnstore is that it just doesn’t support seeks. It’s columnar data; it’s not organized in a nice little b-tree where we can do a seek. It’s very, very different. Columnstore does have something called rowgroup elimination.
SQL Server can look at metadata on an individual rowgroup and figure out, for certain data types (not all of them), whether that rowgroup contains anything relevant to my query, and it’s able to skip rowgroups that don’t. But still, if we have a really large columnstore index, rowgroup elimination can speed me up, but it’s not the same as being able to do a seek.
Now, with columnstore, even just to have a columnstore index you need a minimum amount of data, right? Your columnstore isn’t going to perform very well if you just have a few little rows. We’re talkin’ about big tables here.
But if I partition my columnstore index, I can get partition elimination on my partitioning key. Now, that’s not the same as a seek, but it is pretty powerful, because SQL Server can actually combine partition elimination with rowgroup elimination. It can figure out: oh, I don’t even need to look at those other rowgroups to see if they’re pertinent, because partition elimination tells me all the data I want is over here. And then within that partition, if it has multiple rowgroups, I can do rowgroup elimination on those. It is awesome.
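For instance, with a clustered columnstore index partitioned on a hypothetical SaleDate column, a predicate on the partitioning key lets SQL Server eliminate whole partitions before rowgroup elimination even starts:

```sql
-- Partition elimination: only partitions covering March 2024 are touched.
-- Rowgroup elimination then skips non-matching rowgroups inside them.
SELECT SUM(Amount) AS TotalSales
FROM dbo.Sales
WHERE SaleDate >= '20240301'
  AND SaleDate <  '20240401';
```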
I have the ability to super compress some data in my columnstore index if I want, but there is a trade off for query performance
And if I have partitioned my clustered columnstore index, I can now say: okay, on these older partitions that I just don’t use very often, I want to super-compress those to save space. It’s fine if queries against them are a little slower; we don’t run them very often. Very cool option to have. Now, you may be looking at the slide and thinking: um, wait a second here. You said table partitioning doesn’t speed up query performance, but in this case, with columnstore indexes, it does sound like partition elimination could speed up query performance, because I don’t have the option to do a seek otherwise, and I’ve sort of super-charged my rowgroup elimination by adding partition elimination on top of it.
And, yes, I do admit, in this one case, I’m like, yeah, okay, this does speed up queries.
Generally table partitioning is not something I would look at to be like, oh, my queries are slow I need to speed up my queries. But, in this case, with big columnstore indexes, yes, this can speed up your queries.
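As for the super-compression option mentioned above, it’s the COLUMNSTORE_ARCHIVE compression setting, applied per partition (index name and partition number are hypothetical):

```sql
-- Rebuild an older, rarely queried partition with archive compression:
-- smaller on disk, somewhat slower to query
ALTER INDEX CCI_Sales ON dbo.Sales
    REBUILD PARTITION = 1
    WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);
```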
Honestly, there are other reasons you would want to partition these giant columnstore indexes as well, just in terms of managing them; being able to use switching often really makes sense. I often do not want to do live inserts into my columnstore index, even though I can in many versions of SQL Server. It’s much nicer if I can load up the data for my partitioned columnstore index separately and then switch it in.
But yes, I do think there is a case to be made that in this situation, table partitioning does speed up query performance.
There’s an exception to many rules, and I think this is the exception in this case. I don’t know if it proves the rule, but I think it is kind of an exception.
Now, with complexity comes gotchas. The more complex we make our SQL Servers, the more we use these cool features, the more likely we are to hit a little edge case. Where something that should be fast, oh, we hit an issue and it isn’t fast anymore. Because, you know, the faster we’re driving in the race car and the fancier we make the race cars, the harder it gets to do things.
There is an example in this case where before 2016 some partitions were being scanned that shouldn’t be scanned in a partitioned columnstore index.
If you’re into columnstore and you haven’t been reading Niko Neugebauer’s blog, Nikoport, you should check it out.
I have a short link here; this is part 103 of his columnstore series. He has fantastic posts on columnstore, including topics like partitioning. And this is just an example of one of those things he found: oh hey, this is slower in 2014 than it should be. It’s fixed in 2016, but it’s an example of the kinds of things you may come across.
The more complex we make things, you may come across these little things where, like, oh, this isn’t quite as good as it could be. And, hopefully those get fixed for us. Sometimes we have to do a work around ourselves, and rewrite our code.