October 14, 2010 at 11:38 am
So apparently my GoogleFu is failing me at the moment. A while back I remember reading a thread where Gail mentioned that an index rebuild was only worthwhile if the index had a certain threshold of pages (my memory is telling me 25 or 1000, but I'm not certain). I'm in the process of tweaking my index maintenance strategy for one of my databases and was looking for any references to those thresholds so that I might do some testing and try to incorporate them into my maintenance processes. Any references where I could read more about this would be very helpful.
Thanks in advance.
-Luke.
October 14, 2010 at 11:45 am
Both 😀
Under 25 pages you're likely to see little to no change in fragmentation, because the first few pages of an index are allocated from mixed extents and end up scattered no matter how often you rebuild.
1000 pages is a rough guesstimate as to the point at which you're likely to see a performance improvement from rebuilding.
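Something like this (just a sketch against sys.dm_db_index_physical_stats, so SQL 2005 or later; the 1000-page cutoff is only that guesstimate, adjust it to taste) will show you which indexes in the current database are even big enough to bother with:

-- Size and fragmentation of every index big enough to be worth rebuilding.
-- Below roughly 25 pages: not worth touching at all.
-- Around 1000 pages and up: a rebuild may show a measurable gain.
SELECT  OBJECT_NAME(ips.object_id)      AS table_name,
        i.name                          AS index_name,
        ips.index_type_desc,
        ips.page_count,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
          ON  i.object_id = ips.object_id
          AND i.index_id  = ips.index_id
WHERE   ips.index_id > 0                -- skip heaps
  AND   ips.page_count >= 1000          -- ignore small indexes entirely
ORDER BY ips.avg_fragmentation_in_percent DESC;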
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 14, 2010 at 1:00 pm
Good to know that my memory still works occasionally, and I appreciate the quick response. So, to make certain I'm understanding... the < 25 pages bit is more about the fact that once such an index has been defragmented it's not going to get any better, because of the way those first pages are allocated. Add to that the fact that it's so small it'll probably stay in memory anyway, making fragmentation more or less moot. Would you say it's safe to remove any indexes with less than 25 pages from my maintenance tasks no matter how fragmented they are?
The 1000-page recommendation for when a performance improvement might be noticed... is there somewhere I can read more about it and/or find some sample code so that I can test against my particular hardware?
Thanks in advance,
-Luke.
October 14, 2010 at 1:12 pm
Luke L (10/14/2010)
Would you say it's safe to remove any indexes with less than 25 pages from my maintenance tasks no matter how fragmented they are?
Sure, and I'd set the threshold a fair bit higher than that.
The 1000-page recommendation for when a performance improvement might be noticed... is there somewhere I can read more about it and/or find some sample code so that I can test against my particular hardware?
Maybe on Paul Randal's blog. He's the one who came up with that number. It's an educated guesstimate based on his knowledge of the storage engine and how it fetches pages.
Bear in mind that fragmentation's only an issue when
1) You're doing a range scan or full scan of an index
2) The pages are not in memory
I suppose you could set up some tests by creating fragmented tables of various sizes, testing how long a select * takes when the cache is cold, rebuilding them and testing again with a cold cache. Numbers are likely to be hardware-specific.
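Very roughly, something like this (just a sketch; the table name, row count and filler size are arbitrary, and DBCC DROPCLEANBUFFERS flushes the whole buffer pool, so test server only):

-- 1. Build a table that fragments badly: random GUIDs as the clustering key
--    cause page splits all over the index.
CREATE TABLE dbo.FragTest
(
    id     uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
    filler char(400)        NOT NULL DEFAULT 'x'
);

INSERT INTO dbo.FragTest (filler)
SELECT TOP (100000) 'x'
FROM   sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

-- 2. Time a full scan against a cold cache while the index is fragmented.
--    COUNT(*) instead of SELECT * keeps client/network time out of the measurement.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.FragTest;
SET STATISTICS TIME OFF;

-- 3. Rebuild, then repeat the cold-cache scan and compare the elapsed times.
ALTER INDEX ALL ON dbo.FragTest REBUILD;
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM dbo.FragTest;
SET STATISTICS TIME OFF;

DROP TABLE dbo.FragTest;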
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 14, 2010 at 2:22 pm
Numbers are likely to be hardware-specific.
Yup, that's what I was thinking. I was just hoping to find a baseline I can play around with. Perhaps I'll do a bit of testing and finally have something to write about.
Bear in mind that fragmentation's only an issue when
1) You're doing a range scan or full scan of an index
Yeah, I'm working through a decent bit of that. The previous developer/DBA didn't have a great grasp of proper indexing, so there are tons of scans (inside cursors inside cursors...) that I'm working to optimize. There's a lot that needs fixing. Today it was the index maintenance jobs; tomorrow it's more performance issues...
Thanks for the info.
-Luke.