July 2, 2024 at 5:29 am
My online index rebuild job for a 200 GB table with 40 million rows took up to 30 hours to complete. This is a one-off activity, done because we have allocated downtime for server maintenance.
I would like to know: is there any way for me to do it faster? I had an idea of inserting all the data from the current table into a new table, then truncating the old table, which will wipe all the data and the indexes. Once this is done, I load all the data from the new table back into the old table; I know that by doing this, SQL Server will also rebuild all the indexes while the data is being loaded.
Will this be faster than just rebuilding the single index?
July 2, 2024 at 12:01 pm
A) Testing is your buddy. Don't take anyone's word for it. Test it.
B) Probably not. In fact, that's almost the opposite of what I'd want to do. Presumably (guessing here) this is the clustered index you're rebuilding? If you create a table with the clustered index and the nonclustered indexes, then do a load to that table, not only are you constantly updating the clustered index but, depending on the data, its distribution, and which column(s) the clustered key is on, you can possibly end up with the clustered index as fragmented as it gets over time naturally. PLUS, as you insert each row into the clustered index, you've also got to update the nonclustered indexes... which are also splitting, depending on the data load, not to mention slowing down all the inserts.
C) If it is the clustered index, three possible ideas come to me. First, leave it alone. Most of us, most of the time, can deal with index fragmentation just fine. Sure, broad scans and table scans will hit more pages, thus slowing down. However, point lookups aren't affected at all by a fragmented index, and most of us have a lot more point-lookup queries than scans, so performance is fine. Second, if you can't leave it alone for whatever reason, drop all the nonclustered indexes, do the index rebuild, then put the nonclustered indexes back on. Recreating them afterward means they won't be fragmenting as you do the data load. Or, third option, if you want to move the data between tables, sure, try that, but again: no nonclustered indexes till you're done, and then rebuild them.
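The second option can be sketched like this. Note the table and index names below are hypothetical placeholders, not anything from this thread, and you'd want to script out the existing index definitions before dropping anything:

```sql
-- Hypothetical names for illustration only.
-- 1. Drop the nonclustered indexes (repeat for each NCI)
DROP INDEX IX_BigTable_Customer ON dbo.BigTable;

-- 2. Rebuild the clustered index offline, with no NCIs to maintain
ALTER INDEX CIX_BigTable ON dbo.BigTable REBUILD;

-- 3. Recreate the NCIs afterward so they're built fresh, not split row by row
CREATE NONCLUSTERED INDEX IX_BigTable_Customer
    ON dbo.BigTable (CustomerID);
```

The point of the ordering is that each NCI is built once, in a single sorted pass, instead of being updated and split for every row moved during the clustered rebuild.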
Overall, I lean heavily toward not bothering to rebuild the index. I know the old advice was all about how important this is. However, modern testing (there's that word again) shows that for most of us, most of the time (weasel words are important), index fragmentation doesn't cause much pain. In fact, for most people, the performance gains they saw from index rebuilds actually came from the refreshed statistics, not the elimination of fragmentation.
Others may have better or different suggestions. And whatever they may be, test them too.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
July 2, 2024 at 1:45 pm
Don't do it online; do it offline. That should be OK since you have scheduled downtime anyway.
Otherwise, it's hard to say without more details. As stated above, you may not really need to rebuild at all. And you'd want to test how best to handle the nonclustered indexes. Hopefully you have another machine where you can test.
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
July 2, 2024 at 1:52 pm
It's faster to rebuild indexes offline.
You might also be able to get it to rebuild in parallel, which would be faster: https://learn.microsoft.com/en-us/sql/relational-databases/indexes/configure-parallel-index-operations?view=sql-server-ver16
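As a rough sketch of the offline, parallel option (object names here are hypothetical, and parallel index operations are only available in editions that support them):

```sql
-- Hypothetical index/table names; MAXDOP caps the parallelism for this rebuild only
ALTER INDEX IX_SomeIndex ON dbo.SomeTable
REBUILD WITH (MAXDOP = 8, SORT_IN_TEMPDB = ON, ONLINE = OFF);
```

SORT_IN_TEMPDB moves the intermediate sort work into tempdb, which can help when tempdb sits on faster storage than the user database.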
But I think the question needs to be asked: Why do you want to rebuild the index?
July 4, 2024 at 8:55 am
My online index rebuild job for a 200 GB table with 40 million rows took up to 30 hours to complete. This is a one-off activity, done because we have allocated downtime for server maintenance.
I would like to know: is there any way for me to do it faster? I had an idea of inserting all the data from the current table into a new table, then truncating the old table, which will wipe all the data and the indexes. Once this is done, I load all the data from the new table back into the old table; I know that by doing this, SQL Server will also rebuild all the indexes while the data is being loaded.
Will this be faster than just rebuilding the single index?
Don't do a thing, yet...
I rebuild a table that's roughly 1/3 of a terabyte, with 5 NCIs that take it up to 475 GB. It is PAGE compressed, so the rebuilds take roughly twice as long (1:00 hour for the CI without compression, 1:48 with it). It takes an additional hour for the 5 page-compressed NCIs, but the total of all that is way less (< 4 hours) than 30 hours, and I'm working with more than twice the size you are. The rebuild of the CI is actually done twice (worth it to not have ANY extra unused space when I'm done), and I'm working on making that just a one-way rebuild with no extra unused space (a trick with filegroups).
First, can you temporarily go to the BULK LOGGED Recovery Model?
If so, please post the CREATE TABLE statement for this table, including all of the indexes (including the clustered index), so I can figure things out a bit for you. It would also be helpful to know what the index sizes and page densities are.
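If the answer to the recovery model question is yes, the switch itself is a single statement. A minimal sketch of the pattern (the database name and backup paths are placeholders; be aware that a log backup containing bulk-logged operations loses point-in-time restore for that interval):

```sql
-- Placeholder database name and backup paths
BACKUP LOG YourDb TO DISK = N'X:\Backups\YourDb_pre.trn';   -- close out the log first

ALTER DATABASE YourDb SET RECOVERY BULK_LOGGED;

-- Offline rebuilds are minimally logged under BULK_LOGGED
ALTER INDEX ALL ON dbo.tblFWT REBUILD;

ALTER DATABASE YourDb SET RECOVERY FULL;
BACKUP LOG YourDb TO DISK = N'X:\Backups\YourDb_post.trn';  -- resume point-in-time coverage
```

The log backups on either side bracket the window where point-in-time recovery is unavailable, which matters a lot given the T-log pressure described later in this thread.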
So, I might be able to help. 😀
--Jeff Moden
Change is inevitable... Change for the better is not.
July 10, 2024 at 6:14 am
The quickest way to rebuild an index is to not do a rebuild. Zero time.
An index rebuild takes slightly disorganised data and organises it nicely with some free space to cope with future updates. If for example you allow 10% free space then just after a rebuild every page fetch gives you 90% what you want and 10% empty space that just wastes time.
As updates proceed then the free space gets used, so page fetch efficiency improves until you get page splits. Then it starts to decline. Page splits typically hit only a small portion of your data (in % terms).
If you do not do an index rebuild then eventually your data will find its own stable state regarding page fetch efficiency.
What can hurt more than index rebuilds is out of date statistics. At my old place we stopped doing regular traditional index rebuilds but did do daily stats updates.
What did concern us regarding indexes was extent fragmentation, so we set up automation to rebuild indexes with over 7k extent fragments. Overall this drastically reduced our index rebuild workload.
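That kind of check can be automated with sys.dm_db_index_physical_stats. A rough sketch of the 7k-fragment filter described above (scoped to the current database; SAMPLED mode trades accuracy for speed on large tables):

```sql
-- Indexes in the current database exceeding the fragment-count threshold
SELECT OBJECT_NAME(ips.object_id)            AS table_name,
       i.name                                AS index_name,
       ips.fragment_count,
       ips.avg_page_space_used_in_percent    AS page_density
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON  i.object_id = ips.object_id
  AND i.index_id  = ips.index_id
WHERE ips.fragment_count > 7000              -- the threshold used above
ORDER BY ips.fragment_count DESC;
```

A job like this can then feed only the qualifying indexes into a rebuild step, rather than rebuilding on a schedule.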
We already had database performance monitoring in place, and knew the execution times for business-critical queries before we moved to the new index maint strategy. Basically, performance remained unchanged. But we did save about 90 minutes on overnight DB maintenance.
Sure, if we had a query performance issue we sometimes scheduled an ad-hoc index rebuild to see what effect it had, but it was seldom noticeable.
So, it may sound like DB heresy, but one good way to reduce the impact of index rebuilds is to (mostly) stop doing them.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
July 10, 2024 at 7:43 am
Hi All,
First of all, thank you for taking the time to reply to this post, and sorry for the late replies, as I'm a bit tied up with other tasks. We do not have anyone who is an expert at troubleshooting database query performance. I manage the databases' daily operations myself, and I don't deep-dive into specific query troubleshooting since I don't have much knowledge of it. The databases I manage have existing index and stats maintenance jobs built with Ola Hallengren's scripts; the index job runs about once a month and the stats job weekly, and usually they cause zero issues on our databases. However, on certain databases with exceptionally large tables, the index rebuilds generate huge T-logs that our backups can't keep up with, so we exclude those tables from the monthly index maintenance jobs. We perform ad hoc maintenance only when we have a system maintenance window declared.
JasonO wrote: My online index rebuild job for a 200 GB table with 40 million rows took up to 30 hours to complete. This is a one-off activity, done because we have allocated downtime for server maintenance.
I would like to know: is there any way for me to do it faster? I had an idea of inserting all the data from the current table into a new table, then truncating the old table, which will wipe all the data and the indexes. Once this is done, I load all the data from the new table back into the old table; I know that by doing this, SQL Server will also rebuild all the indexes while the data is being loaded.
Will this be faster than just rebuilding the single index?
Don't do a thing, yet...
I rebuild a table that's roughly 1/3 of a terabyte, with 5 NCIs that take it up to 475 GB. It is PAGE compressed, so the rebuilds take roughly twice as long (1:00 hour for the CI without compression, 1:48 with it). It takes an additional hour for the 5 page-compressed NCIs, but the total of all that is way less (< 4 hours) than 30 hours, and I'm working with more than twice the size you are. The rebuild of the CI is actually done twice (worth it to not have ANY extra unused space when I'm done), and I'm working on making that just a one-way rebuild with no extra unused space (a trick with filegroups).
First, can you temporarily go to the BULK LOGGED Recovery Model?
If so, please post the CREATE TABLE statement for this table, including all of the indexes (including the clustered index), so I can figure things out a bit for you. It would also be helpful to know what the index sizes and page densities are.
So, I might be able to help. 😀
Hi Jeff,
Thanks for taking time to reply, here the create table script which I've generated.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[tblFWT](
[D_ID] [bigint] NOT NULL,
[FWT_lot_ID] [varchar](50) NULL,
[FWT_Slot_ID] [varchar](50) NOT NULL,
[FWT_X] [smallint] NOT NULL,
[FWT_Y] [smallint] NOT NULL,
[FWT_bin] [varchar](10) NULL,
[FWT_Date] [smalldatetime] NULL,
[FileID] [int] NULL,
CONSTRAINT [PK_tblFWT] PRIMARY KEY NONCLUSTERED
(
[D_ID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [RightDatePS]([FWT_Date])
WITH
(
DATA_COMPRESSION = PAGE
)
GO
ALTER TABLE [dbo].[tblFWT] SET (LOCK_ESCALATION = AUTO)
GO
July 11, 2024 at 5:39 am
Ok... good.
1. What is populating the [D_ID]? A SEQUENCE?
You have nullable columns and, because you have PAGE compression turned on, almost all of the columns (even the supposedly fixed-size SMALLINTs and the SMALLDATETIME) seem susceptible to expansive updates.
2. What is the pattern of population? Only INSERTs or INSERT followed by UPDATEs shortly after the INSERTs or ???
3. Do you have enough temporary disk space to build a copy of the table? I'm thinking the answer is a "YES" because you did an Index REBUILD but I'm trying to set you up for a slightly different way.
4. How many cores are available?
5. If the database is in the FULL Recovery Model, can it temporarily be set to BULK LOGGED? If it's not in the FULL Recovery Model, which one is it in?
6. What was the average page density just before your last rebuild?
7. Why is the index maintenance necessary??? Does the code run faster after the rebuild than it did just before the rebuild? Are you trying to recover disk space??? Or ???
Sorry for all the questions but these are most of the questions that I ask of myself when I think a monster index is going to need some maintenance.
And, I'm currently rebuilding a 289.3 GB table using the method I'm thinking of for you. It takes less than 2 hours, provided the table doesn't need access during the rebuild (no online... no reorg).
--Jeff Moden
Change is inevitable... Change for the better is not.
July 11, 2024 at 7:09 am
As the table is partitioned by a date: are records on "older" partitions ever updated/inserted/deleted? If not, any index operation should be done on recent partitions only, which would speed up the operation as well as require less space.
I would also confirm whether FWT_Date (the partition column) can indeed be NULL. If not, make it NOT NULL. If it can, does that mean it is also updated at a later stage? That would cause data movement (from partition 1 to whatever partition the date belongs to), which isn't good.
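For a partitioned heap like tblFWT, the rebuild can indeed be limited to a single partition. A sketch (the partition number below is a placeholder you'd look up first; note this per-partition form applies to the heap itself or to partition-aligned indexes, and the posted PK sits on [PRIMARY], so it is not aligned):

```sql
-- Rebuild only the partition holding recent data
-- (partition number 42 is hypothetical; find the real one via sys.partitions
--  or $PARTITION.RightDatePS(GETDATE()))
ALTER TABLE dbo.tblFWT REBUILD PARTITION = 42
    WITH (DATA_COMPRESSION = PAGE);
```

Touching one recent partition instead of all 200 GB is where the time and log savings come from.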
July 11, 2024 at 8:38 am
Ok... good.
1. What is populating the [D_ID]? A SEQUENCE?
You have nullable columns and, because you have PAGE compression turned on, almost all of the columns (even the supposedly fixed-size SMALLINTs and the SMALLDATETIME) seem susceptible to expansive updates.
2. What is the pattern of population? Only INSERTs or INSERT followed by UPDATEs shortly after the INSERTs or ???
3. Do you have enough temporary disk space to build a copy of the table? I'm thinking the answer is a "YES" because you did an Index REBUILD but I'm trying to set you up for a slightly different way.
4. How many cores are available?
5. If the database is in the FULL Recovery Model, can it temporarily be set to BULK LOGGED? If it's not in the FULL Recovery Model, which one is it in?
6. What was the average page density just before your last rebuild?
7. Why is the index maintenance necessary??? Does the code run faster after the rebuild than it did just before the rebuild? Are you trying to recover disk space??? Or ???
Sorry for all the questions but these are most of the questions that I ask of myself when I think a monster index is going to need some maintenance.
And, I'm currently rebuilding a 289.3 GB table using the method I'm thinking of for you. It takes less than 2 hours, provided the table doesn't need access during the rebuild (no online... no reorg).
Hi Jeff,
As I have no idea how this particular application behaves, I'll do my best to answer based on my observations.
August 14, 2024 at 4:03 pm
Lordy. My apologies. I just noticed that the table you posted has no Clustered Index.
I have to assume that you're not rebuilding the HEAP with Ola's Code, correct?
Also, you said you think the table suffers mostly INSERTs. Since it's a HEAP and the only nonclustered index on it is based on an "ever-increasing" column, there should be no logical fragmentation nor any "physical" fragmentation (page density issues or forwarded rows). If that's true, doing any index maintenance on this "table" (it's a HEAP... not really a table in the clustered sense, because there's no Clustered Index) was a complete waste of time. Checking it all with sys.dm_db_index_physical_stats() would have been the smart thing to do first.
That will also tell us how many forwarded rows there are, which will provide a clue as to whether it really is "mostly INSERTs".
Also be advised that if you REBUILD the "table" (HEAP), that will cause all nonclustered indexes to also rebuild, because they rely on page numbers, and those will change if there is any "defragmentation" of the HEAP.
I'll also say that while Ola's good code is world class for functionality, the way people use it is usually totally wrong (the 5/30 method, and using REORGANIZE by default instead of properly). Most of the time, and especially because of the "ignorant" use of REORGANIZE, index maintenance actually perpetuates fragmentation in a lot of tables instead of preventing it.
You also have no proof that index maintenance is actually needed especially since, by nature of the settings you're probably using for Ola's code, you're testing for logical fragmentation instead of page density.
Again, Ola's code isn't the problem... how most people use it IS the problem. I understand, though... even I made the mistake of thinking that what everyone (except Paul Randal) was calling a "Best Practice" for index maintenance actually was one, instead of the "Worst Practice" it turned out to be for me (18 Jan 2016 is burned into my brain as the day "Best Practices" killed me with massive blocking, and became the start of a whole lot of testing and new understanding).
An example of the latter: why do random GUIDs have such a fragmentation issue? It's not actually because of their random nature, which can be used to prevent ALL fragmentation, quite literally, for months. The real problem is the way people do index maintenance on them... and on a lot of their other types of indexes.
Also, measure performance on that large table you're talking about. If someone says to rebuild it, rebuild just the statistics first and see if performance improves over the next 2 days. Also, make sure you run sys.dm_db_index_physical_stats on it to get things like page density and fragment counts/sizes.
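A sketch of both checks against the posted table (DETAILED mode reads every page, so it's slow on a 200 GB table but gives complete numbers; forwarded_record_count only applies to heaps):

```sql
-- Statistics first: cheap relative to a 30-hour rebuild
UPDATE STATISTICS dbo.tblFWT WITH FULLSCAN;

-- Then look at page density, fragment counts, and forwarded rows
SELECT ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.avg_page_space_used_in_percent AS page_density,
       ips.fragment_count,
       ips.forwarded_record_count          -- heaps only; NULL for indexes
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.tblFWT'), NULL, NULL, 'DETAILED') AS ips;
```

If performance improves after the statistics update alone, that's evidence the rebuilds were never buying anything beyond the stats refresh.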
And, yeah... I mostly agree with what Ed Vassie had to say above with a touch of "It Depends" on each index... their fragmentation patterns are not all the same, especially but not limited to when "ExpAnsive Updates" occur.
--Jeff Moden
Change is inevitable... Change for the better is not.