October 18, 2011 at 12:18 am
I am new to being a DBA. One of my tables, which sees heavy insert and update activity, fragments very quickly. Please advise what steps I should take to resolve the issue.
Regards
October 18, 2011 at 4:33 am
You need to regularly assess the fragmentation and reorganize or rebuild the indexes.
I would also recommend updating the statistics; rebuilding indexes does update statistics, but only on the indexed columns.
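A minimal sketch of that routine, assuming a placeholder table dbo.Orders with a placeholder index IX_Orders_CustomerID (neither name comes from this thread):

-- Check fragmentation for one table
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;

-- Common rule of thumb: REORGANIZE between roughly 5% and 30% fragmentation, REBUILD above that
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REORGANIZE;
-- ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- Refresh statistics on all columns, not just the indexed ones
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;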
October 19, 2011 at 6:11 am
Also, you may want to assess which columns define the clustered index. If it's fragmenting that much, you may need a different clustering key. You also need to evaluate how the data is being modified: why is it fragmenting so much? Are you deleting excessive amounts of data?
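If you want to see what the clustered key actually is before deciding whether it needs to change, a query along these lines works (dbo.Orders is a placeholder name):

-- List the key columns of the clustered index on a table
SELECT i.name       AS index_name,
       ic.key_ordinal,
       c.name       AS column_name
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id
 AND ic.index_id  = i.index_id
JOIN sys.columns AS c
  ON c.object_id  = ic.object_id
 AND c.column_id  = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.Orders')
  AND i.type = 1               -- 1 = clustered index
ORDER BY ic.key_ordinal;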
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 19, 2011 at 6:13 am
Fill factor is also one of the next logical steps.
A 100% fill factor will definitely shoot you in the foot fast with that type of usage.
Here, in a fairly well-designed app, I can get away with 85% pretty much across the board, with very few exceptions. That's a shotgun approach, but with 30K tables and only 20 GB of data I could use that option; a sketch of applying it is below.
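For reference, the fill factor is set as a rebuild option; a sketch against the same placeholder table:

-- Rebuild every index on the table with an 85% fill factor
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (FILLFACTOR = 85);

-- Or set it per index for finer control
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD WITH (FILLFACTOR = 85);

Note that FILLFACTOR only controls how full the leaf pages are at rebuild time; pages fill back up as inserts and updates arrive.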
October 19, 2011 at 6:15 am
And of course, a script to fix the fragmentation once it has happened:
http://sqlfool.com/2010/04/index-defrag-script-v4-1
Further explanations on how I used it to figure out the fill factor:
http://www.sqlservercentral.com/Forums/Topic1126147-146-1.aspx
October 19, 2011 at 6:18 am
Ooh, right, fill factor. Where's my head. Thanks Ninja!
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 19, 2011 at 6:19 am
Grant Fritchey (10/19/2011)
Ooh, right, fill factor. Where's my head. Thanks Ninja!
I'll have to put it in a briefcase or something. Must be the first time it's happened this year :-).
Gail must've saved my arse at least 50 times this year, so I think we can let it slide ;-).
October 19, 2011 at 8:18 am
The right fill factor will definitely help, but be sure to test the value. If the fill factor is too low, e.g. 50%, the number of pages needed to store the data will increase; for example, a table that fills 10,000 pages at 100% fill needs roughly 20,000 pages at 50%. That will hurt read performance on the table, since scans will have to go through more pages.
October 19, 2011 at 8:25 am
Easy way to figure it out:
Check the current average space used per page in sys.dm_db_index_physical_stats.
Then set the fill factor 5-10 points below that figure.
In my case, the actual page space used was around 93% at a 100% fill factor.
So I dropped it to 85%, and that killed the vast majority of the page splits.
That also tells you the storage cost increase is only about 5-7%, excluding the savings from avoiding page splits.
And that tells you how much log you need to run the index rebuild, and how much extra space you need for the data files and the backups.
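A rough sketch of that check (placeholder table name again; note that avg_page_space_used_in_percent is only populated in SAMPLED or DETAILED mode, not LIMITED):

-- Compare actual page fullness to the configured fill factor
SELECT i.name AS index_name,
       i.fill_factor,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;

If avg_page_space_used_in_percent comes back around 93 with fill_factor at 0 (i.e. 100%), dropping the fill factor to 85 matches the approach described above.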