October 14, 2008 at 9:05 am
How long does it take to rebuild indexes on a table with 110 million rows? The stats for this table are shown below:
DBCC SHOWCONTIG scanning 'CumulativeTransactionDetailFS' table...
Table: 'CumulativeTransactionDetailFS' (1705773134); index ID: 1, database ID: 5
TABLE level scan performed.
- Pages Scanned................................: 3124155
- Extents Scanned..............................: 429214
- Extent Switches..............................: 2564239
- Avg. Pages per Extent........................: 7.3
- Scan Density [Best Count:Actual Count].......: 15.23% [390520:2564240]
- Logical Scan Fragmentation ..................: 35.74%
- Extent Scan Fragmentation ...................: 86.43%
- Avg. Bytes Free per Page.....................: 2519.3
- Avg. Page Density (full).....................: 68.87%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
The table has 9 indexes defined on it. I grabbed a copy of its backup and restored it in the test environment. It took me 12 hours to rebuild just one index; the stats after indexing are:
DBCC SHOWCONTIG scanning 'CumulativeTransactionDetailFS' table...
Table: 'CumulativeTransactionDetailFS' (1705773134); index ID: 1, database ID: 5
TABLE level scan performed.
- Pages Scanned................................: 2168156
- Extents Scanned..............................: 271701
- Extent Switches..............................: 271700
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 99.75% [271020:271701]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 54.18%
- Avg. Bytes Free per Page.....................: 60.4
- Avg. Page Density (full).....................: 99.25%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
This will be done in the maintenance window, so what is the approximate time I should allocate for this, considering that I would be reindexing the entire database this table is on with something like:
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
    SELECT table_schema + '.' + table_name
    FROM information_schema.tables
    WHERE table_type = 'BASE TABLE'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC DBREINDEX(@TableName, '', 90)
    FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
Or can I skip this one table, rebuild indexes on every other table, and then reindex this one alone?
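If skipping the big table is the route taken, one way to sketch it (same cursor pattern as above; the table name comes from the SHOWCONTIG output) is to filter the large table out of the loop and rebuild it separately:

```sql
-- Sketch: same cursor as above, but excluding the large table so it can
-- be rebuilt on its own schedule. Assumes the table is owned by dbo.
DECLARE @TableName varchar(255)
DECLARE TableCursor CURSOR FOR
    SELECT table_schema + '.' + table_name
    FROM information_schema.tables
    WHERE table_type = 'BASE TABLE'
      AND table_name <> 'CumulativeTransactionDetailFS'
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    DBCC DBREINDEX(@TableName, '', 90)
    FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor

-- Then rebuild the big table alone, e.g. in a separate maintenance slot:
DBCC DBREINDEX('dbo.CumulativeTransactionDetailFS', '', 90)
```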
Thanks in Advance!!!
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
October 14, 2008 at 10:36 am
I think you've answered your own question... but in any case, I'd separate this out into two batches: run the smaller tables first, then the big tables. I'd run all of this in test to see how long it takes. How similar is your test environment to production?
_______________
bkDBA
0.175 seconds -- 10 year average margin of victory at the Daytona 500
212 Degrees
October 14, 2008 at 10:54 am
The_SQL_DBA (10/14/2008)
How long does it take to rebuild indexes on a table with 110 million rows?
How long is a piece of string?
The answer to your question depends on the server you're using, the IO subsystem, the amount of other activity in the DB, etc, etc. It's hard to predict.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 14, 2008 at 11:13 am
It took 4-5 hours to rebuild one index (clustered).
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
October 14, 2008 at 11:18 am
The_SQL_DBA (10/14/2008)
It took 4-5 hours to rebuild one index (clustered).
So test (12 hrs) is nearly three times as slow as production (4-5 hrs). What issue are you trying to address?
_______________
bkDBA
October 14, 2008 at 11:22 am
Batching the index job in two sets: does that mean I issue multiple DBCC commands?
Gail: The test server is 4x16 on 2000 SP4, prod is 8x16 on 2000 SP4. We are doing it in the maintenance window, so there would be no load on the server, unlike the load that was there in the test environment.
What is the best strategy to rebuild indexes? Do I do it for key tables with different fill factors, or for all tables with a 100 fill factor?
Thanks in Advance!!!
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
October 14, 2008 at 11:29 am
The_SQL_DBA (10/14/2008)
Batching the index job in two sets: does that mean I issue multiple DBCC commands?
Yes... just take your script above and make two copies. Set one to run on only the small tables and one on the big tables (assuming you know which tables are small/big via SHOWCONTIG).
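If you'd rather not eyeball SHOWCONTIG output for every table, a rough way to split them (a sketch using SQL 2000 system tables; the 100,000-page cutoff is an arbitrary example) is to rank tables by data-page count and feed each group to its own copy of the cursor script:

```sql
-- Sketch: classify user tables as 'big' or 'small' by data-page count.
-- indid 0 = heap, indid 1 = clustered index; each table has exactly one
-- such row in sysindexes, so no double counting.
SELECT o.name AS TableName,
       i.dpages AS DataPages,
       CASE WHEN i.dpages > 100000 THEN 'big' ELSE 'small' END AS Batch
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id AND i.indid IN (0, 1)
WHERE o.xtype = 'U'
ORDER BY i.dpages DESC
```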
_______________
bkDBA
October 14, 2008 at 11:32 am
Hi bkDBA: The issue is slowness of the system. We got rid of 5-6 years of data, and to ascertain whether we actually get a performance boost we are doing a reindex on all the tables. I hope we are heading in the right direction, or are we missing anything?
Thanks!!
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
October 14, 2008 at 11:34 am
I have used this script in the past to reindex some smaller databases:
select getdate() as TotalStart
exec sp_MSForEachTable '
print ''?''
select getdate() as starttime
DBCC DBREINDEX(''?'')
select getdate() as endtime
print ''-----------------------------------------------------------------''
'
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
October 14, 2008 at 11:37 am
It seems like you are starting in the right place. For larger databases, I would also suggest looking at the location of your mdf/ldf files, tempdb size/location, and disk configuration (RAID level, # of controllers). Perhaps your database is large enough to warrant separating out some tables into their own ndf file and drive.
_______________
bkDBA
October 14, 2008 at 11:41 am
All the mdf and ldf files are stored on separate RAID 10 drives with 50 spindles each. The backups are on a separate RAID 5 array. I am not sure if we did disk alignment, and we would not be doing that at this stage. Some of the data files are really big, around 50 GB, and there are a bunch of them stored on these drives.
Thanks!!
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
October 14, 2008 at 11:43 am
How do I set a variable fill factor? The way I see it, there are a bunch of tables that are static, and then more than a couple that are very active with DML and need a lower fill factor.
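One way to sketch a per-table fill factor (a hypothetical control table; all names and values below are examples, not anything from this database) is to drive DBCC DBREINDEX from a small mapping table via dynamic SQL:

```sql
-- Sketch: map each table to its desired fill factor, then loop.
-- Static tables get 100; DML-heavy tables get something lower.
CREATE TABLE dbo.ReindexControl (
    TableName     sysname NOT NULL,
    FillFactorPct tinyint NOT NULL
)

INSERT dbo.ReindexControl VALUES ('dbo.StaticLookupTable', 100)           -- example
INSERT dbo.ReindexControl VALUES ('dbo.CumulativeTransactionDetailFS', 80) -- example

DECLARE @TableName sysname, @ff tinyint, @sql varchar(500)
DECLARE ControlCursor CURSOR FOR
    SELECT TableName, FillFactorPct FROM dbo.ReindexControl
OPEN ControlCursor
FETCH NEXT FROM ControlCursor INTO @TableName, @ff
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Dynamic SQL keeps this compatible with DBCC's argument rules
    SET @sql = 'DBCC DBREINDEX(''' + @TableName + ''', '''', '
             + CAST(@ff AS varchar(3)) + ')'
    EXEC (@sql)
    FETCH NEXT FROM ControlCursor INTO @TableName, @ff
END
CLOSE ControlCursor
DEALLOCATE ControlCursor
```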
Thanks!!
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
October 14, 2008 at 10:55 pm
Maybe partition the table so you can manage smaller tables instead of one huge one.
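On SQL Server 2000 that would mean a partitioned view over smaller member tables rather than native table partitioning. A minimal sketch (table and column names are invented for illustration; a real CumulativeTransactionDetailFS would carry many more columns):

```sql
-- Sketch: yearly member tables with CHECK constraints on the
-- partitioning column, which must be part of the primary key.
CREATE TABLE dbo.CTD_2007 (
    TranID   int      NOT NULL,
    TranYear smallint NOT NULL CHECK (TranYear = 2007),
    PRIMARY KEY (TranID, TranYear)
)
CREATE TABLE dbo.CTD_2008 (
    TranID   int      NOT NULL,
    TranYear smallint NOT NULL CHECK (TranYear = 2008),
    PRIMARY KEY (TranID, TranYear)
)
GO
CREATE VIEW dbo.CumulativeTransactionDetail_v AS
    SELECT TranID, TranYear FROM dbo.CTD_2007
    UNION ALL
    SELECT TranID, TranYear FROM dbo.CTD_2008
GO
-- Each member table can then be reindexed independently,
-- turning one 12-hour rebuild into several short ones:
DBCC DBREINDEX('dbo.CTD_2007', '', 90)
```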
--Jeff Moden
Change is inevitable... Change for the better is not.
October 15, 2008 at 8:43 am
How do I view page splits in a table? Also, what is the desired value for extent scan fragmentation?
Thanks!!!
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
October 16, 2008 at 7:38 pm
Extent fragmentation is unreliable... read about it in the BOL entry for DBCC SHOWCONTIG.
The best thing to go for, IMHO, is a very high scan density (95%+) and low logical scan fragmentation (under 10%, and it will frequently never reach zero). On many systems, extent fragmentation will actually show up as NULL.
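On the page-splits part of the question: SQL 2000 exposes the Access Methods performance counters through a system table, so a rough check (a sketch; the counter is cumulative since server startup, so sample it twice and take the difference for a rate) looks like:

```sql
-- Sketch: read the server-wide 'Page Splits/sec' counter.
-- Note this is instance-wide, not per-table; per-table insight
-- still comes from watching fragmentation via DBCC SHOWCONTIG.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name = 'Page Splits/sec'
```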
--Jeff Moden
Change is inevitable... Change for the better is not.