November 9, 2015 at 8:35 am
Hello
I have a very large table (600,000,000 rows) on which I am trying to rebuild each index.
Every time I try, the log file runs out of space. The log file drive is 500 GB, but it needs to be expanded.
How do I figure out how much space I need for the log to complete the operation?
Table data:
rows: 600,000,000
reserved: 1678894800 KB
data: 1583032912 KB
index_size: 95656232 KB
unused space: 205656 KB
Thanks
November 9, 2015 at 2:02 pm
If you need to rebuild all the indexes on a table, why not create a new table and port the data to it in batches to limit log file growth?
1. Rename the existing table and its named constraints.
2. Create the new table with the new schema.
3. Bulk insert from the old table to the new one. If required, break on the ID and do it in batches of 100,000* rows using a WHILE loop (* a number that suits the hardware and environment); see the sketch after the list.
4. Drop the old table (reclaim space here).
5. Create the indexes with appropriate fill factors.
6. Create the constraints (after the indexes).
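Something along these lines for step 3 (just a sketch; the table names dbo.BigTable_old / dbo.BigTable, the Id key column, and the column list are all placeholders you would swap for your own):

DECLARE @BatchSize INT = 100000;   -- tune to the hardware & environment
DECLARE @MinId BIGINT, @MaxId BIGINT, @CurrentId BIGINT;

SELECT @MinId = MIN(Id), @MaxId = MAX(Id) FROM dbo.BigTable_old;
SET @CurrentId = @MinId;

WHILE @CurrentId <= @MaxId
BEGIN
    -- copy one slice of the key range per iteration
    INSERT INTO dbo.BigTable (Id, Col1, Col2)
    SELECT Id, Col1, Col2
    FROM dbo.BigTable_old
    WHERE Id >= @CurrentId AND Id < @CurrentId + @BatchSize;

    SET @CurrentId = @CurrentId + @BatchSize;

    -- in FULL recovery, a log backup here (or every N batches) keeps the log from growing
END;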
November 9, 2015 at 3:01 pm
jdbrown - That is a very large table to try to rebuild indexes on. The index size you provided shows that all the indexes together are about 91 GB, so if your transaction log is larger than 100 GB, it should fit. You would need to make sure that, just before you do the rebuild, the tlog is backed up and empty. You could also rebuild a single index, then take a tlog backup, and repeat until all the indexes have had maintenance run on them. You should also do this when no one is using the database to make it go faster. With a table that large, you should think about partitioning the data to make maintenance and queries against the table faster and easier to manage. Good luck!
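Something like this for the one-index-at-a-time approach (the index, database, and backup path names are placeholders):

ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable REBUILD;
BACKUP LOG YourDatabase TO DISK = N'X:\Backups\YourDatabase_log.trn';
-- repeat for the next index, taking a log backup after each rebuild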
November 9, 2015 at 3:40 pm
Keep in mind sp_spaceused doesn't report the size of a clustered index as part of "index_size". If the table in question has (is) a clustered index, and you're trying to rebuild that, that's a 1.5 TB clustered index being rebuilt.
Long term, a single table that size is screaming for purging/archiving/splitting into multiple tables/partitioning; of course, what is most appropriate depends on the details.
For the short-term, if indeed there is a gigantic clustered index being rebuilt, then I'd first consider whether the index really needs to be rebuilt or not; perhaps a reorganize would be sufficient, and since it wouldn't be a single gigantic transaction, you could better manage the log growth.
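A quick sketch of that check (dbo.BigTable and the index name are placeholders); the REORGANIZE at the end only makes sense if the fragmentation numbers warrant it:

SELECT i.name, ips.index_id, ips.avg_fragmentation_in_percent, ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.BigTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id;

-- commonly cited guideline: reorganize for moderate fragmentation, rebuild above roughly 30%
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable REORGANIZE;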
Cheers!
November 9, 2015 at 5:09 pm
Jacob Wilkins (11/9/2015)
Keep in mind sp_spaceused doesn't report the size of a clustered index as part of "index_size". If the table in question has (is) a clustered index, and you're trying to rebuild that, that's a 1.5 TB clustered index being rebuilt.
Long term, a single table that size is screaming for purging/archiving/splitting into multiple tables/partitioning; of course, what is most appropriate depends on the details.
For the short-term, if indeed there is a gigantic clustered index being rebuilt, then I'd first consider whether the index really needs to be rebuilt or not; perhaps a reorganize would be sufficient, and since it wouldn't be a single gigantic transaction, you could better manage the log growth.
Cheers!
Agreed but be careful about REORGANIZE. It's fully logged no matter which recovery model is active. If the clustered index is hairy with fragmentation, then it might not help either.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 9, 2015 at 5:13 pm
have2much (11/9/2015)
If you need to rebuild all the indexes on a table, why not create a new table and port the data to it in batches to limit log file growth?
1. Rename the existing table and its named constraints.
2. Create the new table with the new schema.
3. Bulk insert from the old table to the new one. If required, break on the ID and do it in batches of 100,000* rows using a WHILE loop (* a number that suits the hardware and environment).
4. Drop the old table (reclaim space here).
5. Create the indexes with appropriate fill factors.
6. Create the constraints (after the indexes).
Just creating the clustered index on the new table is going to require about 3.2 terabytes, because the heap that step 3 creates will be preserved until the new clustered index is fully built, plus about 20% overhead for the sort.
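Roughly, reading that estimate against the sp_spaceused numbers posted above (all figures approximate):

existing heap, preserved until the new index is ready: ~1.6 TB
new clustered index, a second full copy of the data:   ~1.6 TB
                                                        -------
                                                        ~3.2 TB, plus ~20% sort/work overhead on top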
--Jeff Moden
Change is inevitable... Change for the better is not.
November 9, 2015 at 5:15 pm
jdbrown239 (11/9/2015)
Hello
I have a very large table (600,000,000 rows) on which I am trying to rebuild each index.
Every time I try, the log file runs out of space. The log file drive is 500 GB, but it needs to be expanded.
How do I figure out how much space I need for the log to complete the operation?
Table data:
rows: 600,000,000
reserved: 1678894800 KB
data: 1583032912 KB
index_size: 95656232 KB
unused space: 205656 KB
Thanks
As suggested, first make sure the indexes actually need to be rebuilt.
I'm sure that you're feeling the pain of all this. This table needs to be split up. Which version of SQL Server are you using, and which Edition is it? Also, do you have a spare 2 TB disk you could bring into play? Because, one way or another, you're going to need it for a while.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 9, 2015 at 6:22 pm
Jeff Moden (11/9/2015)
Jacob Wilkins (11/9/2015)
Keep in mind sp_spaceused doesn't report the size of a clustered index as part of "index_size". If the table in question has (is) a clustered index, and you're trying to rebuild that, that's a 1.5 TB clustered index being rebuilt.
Long term, a single table that size is screaming for purging/archiving/splitting into multiple tables/partitioning; of course, what is most appropriate depends on the details.
For the short-term, if indeed there is a gigantic clustered index being rebuilt, then I'd first consider whether the index really needs to be rebuilt or not; perhaps a reorganize would be sufficient, and since it wouldn't be a single gigantic transaction, you could better manage the log growth.
Cheers!
Agreed but be careful about REORGANIZE. It's fully logged no matter which recovery model is active. If the clustered index is hairy with fragmentation, then it might not help either.
Indeed. Reorganizing is certainly not guaranteed to be any better, especially if you're using the full recovery model and aren't taking frequent log backups.
It's just one option to consider, weighing its advantages and disadvantages. I perhaps should have been more cautionary 🙂
November 10, 2015 at 4:57 am
What version of SQL do you have?
If you have Enterprise Edition, you can create a partitioned table.
Partition it in such a way that old data won't be modified, if you can.
Then, in the future, when you need to rebuild, it should only be the latest active partition.
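The basic plumbing looks something like this (a sketch only, assuming you partition on a date column; every name here is a placeholder and the boundary values are just examples):

CREATE PARTITION FUNCTION pfBigTableByMonth (date)
AS RANGE RIGHT FOR VALUES ('2015-09-01', '2015-10-01', '2015-11-01');

CREATE PARTITION SCHEME psBigTableByMonth
AS PARTITION pfBigTableByMonth ALL TO ([PRIMARY]);

-- the table/clustered index is then built on the partition scheme
CREATE CLUSTERED INDEX CIX_BigTable
ON dbo.BigTable (SomeDateColumn)
ON psBigTableByMonth (SomeDateColumn);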
If you don't have the space, then use the standard approach of pushing the data into a new table in batches, then doing an sp_rename and dropping the old table.
You can also look at the compressibility of the old partitions and enable compression so as to save a couple of Blu-rays' worth of space, if the compression estimate gives a good result.
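The estimate itself can come from sp_estimate_data_compression_savings (the schema and table names below are placeholders):

EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'BigTable',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'PAGE';

-- if the savings look good, compression can be enabled per partition, e.g.
-- ALTER TABLE dbo.BigTable REBUILD PARTITION = 1 WITH (DATA_COMPRESSION = PAGE);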
With the table partitioned this way, I have been able to enable the Ola Hallengren IndexOptimize job without fear, because at most it will rebuild one small partition of the gigantic tables rather than trying to rebuild, in my case, 300 million rows in one solid bite, since the old partitions never fragment.
I wish I had done this more often in the past as it greatly alleviates all the maintenance nightmares.
November 11, 2015 at 10:40 am
Thanks, everyone! These are all good suggestions. I decided to drop and recreate the indexes on that large table, which took about an hour each. I do plan on looking at purging the older data.
Thanks!
jdbrown239