May 23, 2017 at 6:39 am
I'm looking for a tool that can look at all of the indexes on a database and make solid recommendations for improvement.
Specifically, where we have many single-column indexes that should be combined into multi-column indexes. I'm thinking a tool that can consume Profiler traces might be best: because of ORM queries we clear our plan cache daily, and lately the server has been restarted often enough to make the index usage stats skimpy (I do save that information to a table regularly).
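A minimal sketch of the kind of snapshot job I mean (the dbo.IndexUsageHistory table name is just an example):

-- Persist the cumulative index usage counters so they survive restarts
-- and cache clears; each run adds a timestamped snapshot.
IF OBJECT_ID('dbo.IndexUsageHistory') IS NULL
    CREATE TABLE dbo.IndexUsageHistory (
        capture_time  datetime2 NOT NULL DEFAULT SYSDATETIME(),
        database_id   int       NOT NULL,
        object_id     int       NOT NULL,
        index_id      int       NOT NULL,
        user_seeks    bigint    NOT NULL,
        user_scans    bigint    NOT NULL,
        user_lookups  bigint    NOT NULL,
        user_updates  bigint    NOT NULL
    );

INSERT INTO dbo.IndexUsageHistory
    (database_id, object_id, index_id,
     user_seeks, user_scans, user_lookups, user_updates)
SELECT database_id, object_id, index_id,
       user_seeks, user_scans, user_lookups, user_updates
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();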
I can find the long-running queries, and those with high cost/reads/CPU/scans, etc. I can find missing/duplicate indexes with things like Brent Ozar's sp_BlitzIndex procedure. But with hundreds of indexes applied over many years (mainly by development), I'm willing to pay for a tool to speed up this process. Performance is always key, but some of this might be called cleanup as much as performance boost.
I tried feeding Profiler trace files to the Database Engine Tuning Advisor and got a huge, unbelievable list of proposed changes.
May 23, 2017 at 7:33 am
I started this index analysis project after spending a number of months removing unused indexes and applying the highest-value missing indexes (mostly as identified by sp_BlitzIndex when the server had been up for at least a month).
Now I'm into more of a database-wide grind through all indexes. I started by thinking I'd take a handful of slow queries that mainly hit one or two specific tables. This often led to the discovery that the real bottlenecks were on other tables in the query (a slight wild goose chase). My partner DBA here has been talking about indexing that narrows down the number of records SQL Server has to consider as early as possible in the execution plan (along with whether to try putting columns in an INCLUDE first, and only in the key columns if absolutely necessary).
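As a rough illustration of that idea (the table and column names here are hypothetical):

-- OrderDate and CustomerID do the row filtering, so they go in the key;
-- TotalDue is only returned by the query, so it rides along in INCLUDE
-- instead of widening the key.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_CustomerID
    ON dbo.Orders (OrderDate, CustomerID)
    INCLUDE (TotalDue);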
So now, rather than just look for slow queries, I'm thinking I want to find those that have a large number of records at the "far right" end of the plan (along with big estimated/actual row variances). How would I query the plan cache (or a trace) for that?
May 23, 2017 at 7:49 am
I have yet to see a tool do this in any way that I would regard as successful. Microsoft is starting to do this with the machine learning work they're using in Azure SQL Database to automatically add and remove indexes, but it's not available outside of Azure at this time. There are a couple of tools that say they will do this; don't trust them.
As to how to query the cache, it's XML, so go to any of the example queries out there (there are a few on my blog), copy one, and adjust it to look for what you want. You'll need to compare the row counts on the high-numbered (far right) nodes with the final values in the estimated row counts at the root of the plan (you can't use actuals without running the queries everywhere). Usually, you're better off simply looking for scans; however, there are situations where you'll still see large-volume seeks getting filtered late through other operations.
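A minimal sketch of that kind of cache query (a starting point only, not a finished script; it's expensive against a big cache, so filter it down before running it on a busy production box):

-- Shred cached plans for scan operators and report each operator's
-- estimated row count. The showplan namespace declaration is required,
-- or the .nodes() path matches nothing.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (50)
    DB_NAME(st.dbid) AS database_name,
    st.text AS query_text,
    relop.value('@PhysicalOp', 'varchar(128)') AS physical_op,
    relop.value('@EstimateRows', 'float') AS operator_estimated_rows,
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY qp.query_plan.nodes(
    '//RelOp[@PhysicalOp="Index Scan"
          or @PhysicalOp="Clustered Index Scan"
          or @PhysicalOp="Table Scan"]') AS r(relop)
ORDER BY relop.value('@EstimateRows', 'float') DESC;

Compare each operator's estimated rows against the estimate at the root of the plan; a big drop between the two is the late filtering described above.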
I'm not sure I'd pursue this approach as a systematized thing, though. Instead, identify the queries with high I/O. They're likely to also be the ones doing late filtering, if that's a common problem on your system.
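For example, using the standard query stats DMV pattern (on a system that clears the cache daily, capture this just before the flush so the cumulative numbers mean something):

-- Top statements by cumulative logical reads since they entered the cache.
SELECT TOP (25)
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;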
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning