Index rebuild frequency

  • Our shop serves a fair number of websites from a SQL Server database. We currently rebuild indexes early on a weekend morning. As expected, tables are unavailable during portions of this rebuild, which makes some of our web sites unavailable.

    We do not need 24/7 access; however, we would like to minimize the amount of time the sites are down. I have found lots of info on index design and rebuilds, but I have not been able to find any situation-specific recommendations - for example, whether a table that has 10% of its rows deleted or inserted in a week should have its index rebuilt daily, weekly, monthly, or whatever. I'm probably missing something obvious, but if anyone could point me to reference info of this sort, I would greatly appreciate it.

  • To know when it is really necessary to rebuild my indexes, I created a script that checks the output of the DBCC SHOWCONTIG command, reads the Scan Density value, and depending on that value either defrags the index, rebuilds it, or does nothing at all.

    My db is now 100 GB and we have lots of transactions, but sometimes the biggest tables only get defragmented once a week or less.
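
    Something along these lines (a rough sketch of the idea, not the exact script - it assumes SQL Server 2000, and the 70/90 cut-offs are just illustrative):

    -- Capture DBCC SHOWCONTIG output for every table/index in the
    -- current database, then act on the Scan Density column.
    CREATE TABLE #frag (
        ObjectName     VARCHAR (255),
        ObjectId       INT,
        IndexName      VARCHAR (255),
        IndexId        INT,
        Lvl            INT,
        CountPages     INT,
        CountRows      INT,
        MinRecSize     INT,
        MaxRecSize     INT,
        AvgRecSize     INT,
        ForRecCount    INT,
        Extents        INT,
        ExtentSwitches INT,
        AvgFreeBytes   INT,
        AvgPageDensity INT,
        ScanDensity    DECIMAL,
        BestCount      INT,
        ActualCount    INT,
        LogicalFrag    DECIMAL,
        ExtentFrag     DECIMAL)

    INSERT INTO #frag
    EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

    DECLARE @tbl VARCHAR (255), @idx VARCHAR (255), @density DECIMAL

    DECLARE frag_cur CURSOR FOR
        SELECT RTRIM(ObjectName), RTRIM(IndexName), ScanDensity
        FROM #frag
        WHERE IndexId BETWEEN 1 AND 254   -- skip heaps and text pages

    OPEN frag_cur
    FETCH NEXT FROM frag_cur INTO @tbl, @idx, @density
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF @density < 70
            -- Badly fragmented: full rebuild (locks the table).
            EXEC ('DBCC DBREINDEX (' + @tbl + ', ' + @idx + ')')
        ELSE IF @density < 90
            -- Moderately fragmented: online defrag instead.
            EXEC ('DBCC INDEXDEFRAG (0, ' + @tbl + ', ' + @idx + ')')
        -- ELSE: dense enough, leave it alone.
        FETCH NEXT FROM frag_cur INTO @tbl, @idx, @density
    END
    CLOSE frag_cur
    DEALLOCATE frag_cur
    DROP TABLE #frag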

  • Thanks - I've seen 80% referenced as a sort of lower threshold for Scan Density - does that seem reasonable?


  • Agree with 'dbcc showcontig' and I think we also used 80% as a threshold.

  • I would not let it go below 80% if possible, but it might also help to segment older data into a separate table and join it back with a view, if there is a way to identify records that should no longer be modified, or at least are not modified with any frequency. That will mean shorter rebuild times (see the sketch below). Also, you don't say anything about your SQL Server version; if you are using 2000, DBCC INDEXDEFRAG does have a lot of merit, since it defragments the index online instead of locking the table.
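
    Roughly like this (just a sketch - the Orders/OrdersArchive names and the date cutoff are made up for illustration):

    -- Move rows that no longer change into an archive table, so only
    -- the small active table needs frequent index maintenance.
    SELECT * INTO OrdersArchive
    FROM Orders
    WHERE OrderDate < '20020101'

    DELETE FROM Orders
    WHERE OrderDate < '20020101'
    GO
    -- A view glues both halves back together for the web sites.
    CREATE VIEW AllOrders
    AS
    SELECT * FROM Orders
    UNION ALL
    SELECT * FROM OrdersArchive
    GO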

    As for situation-specific recommendations, it is hard to say; there are some generic guidelines, but I'm not sure what best fits you.

    I have a few tricks I use to keep data live, but they may not meet your specific needs. However, if you can post more details on how your data tables work and what your current indexes look like, it would help us a bit.

    Where do the data inserts for the clustered index mostly occur? (e.g., the index is on an autonumber column, so inserts always happen at the end of the data, or it is keyed on social security number, so inserts occur throughout the table, etc.)

    Also, if you have tables that are large and updated often, then filegroups can help out a bit (example below).
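
    For instance (a minimal sketch - the database name, file path, and index are made up):

    -- Add a filegroup on its own drive and point a hot index at it.
    ALTER DATABASE MyWebDB ADD FILEGROUP HotIndexes
    GO
    ALTER DATABASE MyWebDB
    ADD FILE (
        NAME = MyWebDB_HotIndexes,
        FILENAME = 'E:\Data\MyWebDB_HotIndexes.ndf',
        SIZE = 500MB
    ) TO FILEGROUP HotIndexes
    GO
    -- The nonclustered index now lives on the new filegroup,
    -- away from the table's data pages.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON Orders (CustomerId)
    ON HotIndexes
    GO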

    Another thing people don't think of is defragging the hard drive itself. Keep in mind that external (file system) fragmentation creates performance hits as well. The servers I have can be defragged easily with the server live, at the cost of a small performance hit while it runs.

    And the placement of files with heavy disk writes can make a difference; it helps to move things like log files or tempdb to their own drive to decrease I/O contention (see below).
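
    For example, to relocate tempdb (a sketch - the target path is an assumption about your disk layout, and the move only takes effect after a restart):

    USE master
    GO
    -- tempdev/templog are the default logical names for tempdb's files.
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'F:\TempDB\tempdb.mdf')
    GO
    ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, FILENAME = 'F:\TempDB\templog.ldf')
    GO
    -- Restart SQL Server so tempdb is recreated in the new location.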
