Index Statistics

  • Hey there. I'm working on a legacy SQL 7 database and have been using Query Analyser to build indexes and tighten SQL code throughout a rather large application. Now I'm getting to the point where I want to speed up inserts.

    I was wondering if there was a way to find out how much the indexes (mine and the legacy ones) are being hit, so I can drop old indexes I don't need anymore and my inserts and updates don't have to update so many objects. Is there a way to get hit statistics on the indexes? I'm thinking there's got to be some sort of system table query to take care of this, but my research has turned up nil.

    thanks

  • I'd recommend using Query Analyser to look at your insert execution plans, then using a product like NETiq to find out what SQL is being run on your server. That way you can see which indexes are being used.

    Somewhere along the line you'll be able to work out which indexes you can drop.
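
    If it helps, here's a rough starting point for taking inventory of what you actually have, per table, on SQL 7/2000. It only lists the indexes (the server doesn't expose per-index usage counts in these versions); the name filter for auto-created statistics is just a common heuristic:

      SELECT o.name AS table_name, i.name AS index_name, i.indid
      FROM sysindexes i
      JOIN sysobjects o ON o.id = i.id
      WHERE o.type = 'U'                        -- user tables only
        AND i.indid BETWEEN 1 AND 254           -- skip the heap (0) and text/image (255) rows
        AND i.name NOT LIKE '[_]WA[_]Sys%'      -- skip auto-created column statistics
      ORDER BY o.name, i.indid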

    MVDBA

  • Slight variation on Mike's answer...

    Use Profiler to capture the typical queries being run. Take the most frequently used ones and run them in Query Analyser with SHOWPLAN_TEXT turned on. The results should show which indexes are being hit the most.
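
    Something like this, where the SELECT is one of the captured queries (the table and column names here are just placeholders):

      SET SHOWPLAN_TEXT ON
      GO
      -- one of the queries captured by Profiler (hypothetical example)
      SELECT OrderID, OrderDate
      FROM Orders
      WHERE CustomerID = 'ALFKI'
      GO
      SET SHOWPLAN_TEXT OFF
      GO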

    You will probably want to go on more than just raw statistics, though. Knowing the application, you should know which indexes make the most sense. Mainly, I think you want to 'narrow' the indexes on your largest tables that are most time-critical for inserts.
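
    For example, replacing a wide legacy composite index with a narrower one (the index, table, and column names here are hypothetical):

      -- drop a wide legacy index and replace it with a narrower one
      DROP INDEX Orders.IX_Orders_Wide
      CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID)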

    If you don't find easy solutions, consider an OLAP solution, or batch-copy denormalized tables for reporting use only. You can load those with indexes optimized for reporting while the application still runs fast with the narrow indexes.
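
    A rough sketch of the batch-copy idea (the database, table, and column names are made up):

      -- nightly batch: rebuild a denormalized copy for reporting
      DROP TABLE Reporting.dbo.OrdersFlat

      SELECT o.OrderID, o.OrderDate, c.CompanyName
      INTO Reporting.dbo.OrdersFlat
      FROM Orders o
      JOIN Customers c ON c.CustomerID = o.CustomerID

      -- index the copy for the reports, without touching the OLTP tables
      CREATE INDEX IX_OrdersFlat_OrderDate
      ON Reporting.dbo.OrdersFlat (OrderDate)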

    Aaron

  • Yes, I agree on the latter points there. Data marts (or warehouses) should give you nice fast reports based on snapshot data, whilst freeing up your tables for faster writes.

    If you can't do that, then how about the following:

    I also take it you have placed your T-logs on separate drives; that should increase write performance. Then how about placing your indexes on a separate filegroup on a new array as well?

    These should free up the hard disk heads and improve performance.
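
    Something along these lines, assuming a new array on F: (the database name, file path, size, and index names are all made up):

      -- put the indexes on their own filegroup on the new array
      ALTER DATABASE MyAppDb ADD FILEGROUP FG_INDEXES
      GO
      ALTER DATABASE MyAppDb
      ADD FILE (NAME = MyAppDb_Idx1,
                FILENAME = 'F:\MSSQL\Data\MyAppDb_Idx1.ndf',
                SIZE = 500MB)
      TO FILEGROUP FG_INDEXES
      GO
      -- rebuild an existing index onto the new filegroup
      CREATE INDEX IX_Orders_CustomerID ON Orders (CustomerID)
      WITH DROP_EXISTING
      ON FG_INDEXES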

    MVDBA

    I wanted to follow up, thank you guys for the suggestions, and let you know what I did.

    We've upgraded to SQL 2000 and are getting a better handle on what the most frequent queries are and how to index for them. I'm still looking for a way to identify legacy indexes that are no longer used without slowing things down with profiling, etc. I know the server internally keeps stats on table and column usage to help the optimizer, and I can't believe there isn't a product out there that lets a DBA take a gander at those stats, as that would seem terribly useful in tuning situations such as this.
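
    For what it's worth, the closest thing I've found built in is the optimizer's distribution statistics, which you can at least inspect per index, though they describe data distribution and freshness rather than how often an index is actually used (the table and index names below are from my own schema, so treat them as placeholders):

      -- when the optimizer's statistics were last refreshed, per index
      SELECT name, indid, STATS_DATE(id, indid) AS last_updated
      FROM sysindexes
      WHERE id = OBJECT_ID('Orders')
        AND indid BETWEEN 1 AND 254

      -- density and histogram for one index (selectivity, not usage counts)
      DBCC SHOW_STATISTICS ('Orders', IX_Orders_CustomerID)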

    Although I agree about the data warehousing, etc., we simply don't have the time or the resources to implement such a solution in the near term. We have upgraded the box and the server software, and we now have tempdb, table space, index space, and logging space all on different disks, which always seems to help performance a great deal.

    All in all, for such an organically grown legacy project, we are faring quite well. At some point, now that all the business rules are well known (often ‘discovered’ in the process of creating an app), as well as which reports are really needed and which are not (one-offs), and with a decent ad hoc reporting mechanism in place, it would be nice to sit down and redesign the entire thing from scratch, both the app and the db, aiming in the direction of scalability. Of course, I realize I’m probably dreaming on this last bit, as everybody knows that once something is in operation, it takes an act of gawd to stop the feature creep, with the program managers saying “whaddya mean ‘code freeze’ for an entire quarter?!?!?!” But one has to dream the dream, eh?
