April 4, 2017 at 2:36 am
I have a long running query to improve.
I run DBCC DROPCLEANBUFFERS, then the query. The execution plan gives an estimated subtree cost of 22, the query takes 6 seconds, and the plan recommends a new index.
I add the index, then re-run DBCC DROPCLEANBUFFERS and the query. The execution plan gives an estimated subtree cost of 12, and the query takes 4 seconds.
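For anyone reproducing this, the cache-clearing and timing workflow I'm describing looks roughly like the sketch below (table and column names are placeholders; DBCC DROPCLEANBUFFERS requires sysadmin and should only be run on a test server, since it forces all subsequent queries to read from disk):

```sql
-- Write dirty pages to disk first so that DROPCLEANBUFFERS can drop them
CHECKPOINT;
-- Flush clean pages from the buffer pool to get cold-cache timings
DBCC DROPCLEANBUFFERS;

SET STATISTICS TIME ON;  -- report compile and execution times
SET STATISTICS IO ON;    -- report logical/physical reads

-- Placeholder for the long-running query under test
SELECT SomeColumn
FROM dbo.MyTable
WHERE SomeColumn IS NOT NULL;
```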
But the column it recommends an index for is mainly NULL,
so I change the index to a filtered index (WHERE column IS NOT NULL).
I re-run DBCC DROPCLEANBUFFERS and the query. The execution plan gives an estimated subtree cost of 25 and the query takes 1 second, but there is more parallelism in the plan with the filtered index.
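The change from the recommended index to the filtered one is along these lines (index, table, and column names are placeholders):

```sql
-- The plain index the optimiser recommended, on the mostly-NULL column
CREATE INDEX IX_MyTable_SomeColumn
    ON dbo.MyTable (SomeColumn);

-- Replace it with a filtered index that only covers populated rows,
-- so the index is far smaller
DROP INDEX IX_MyTable_SomeColumn ON dbo.MyTable;
CREATE INDEX IX_MyTable_SomeColumn
    ON dbo.MyTable (SomeColumn)
    WHERE SomeColumn IS NOT NULL;
```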
The behaviour is consistent. I can reverse the procedure, by dropping the index, and return to the original cost and timings, then re-add the index and get the performance gains again.
I am not sure which would be the best version to put into production.
The filtered index, which gives faster performance but a higher estimated cost and more parallelism,
or the unfiltered index, which gives a smaller performance gain at a lower estimated cost.
April 4, 2017 at 2:53 am
If you're not processor-bound, then go with the filtered index. There's no point in having lots of processors and not using them. You'll also want to consider the cost of maintaining the index: how often is the table updated (including inserts and deletes) compared with how many times your query (and any other queries that may use the index) will run?
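One way to weigh that maintenance cost against the benefit, once the index has been in place for a while, is to compare reads against writes in the usage DMV (index name here is a placeholder; note the counters reset on instance restart):

```sql
-- Reads (seeks/scans/lookups) vs writes (updates) per index since restart
SELECT  i.name,
        s.user_seeks,
        s.user_scans,
        s.user_lookups,
        s.user_updates
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON  i.object_id = s.object_id
        AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID()
  AND   i.name = N'IX_MyTable_SomeColumn';
```

If user_updates dwarfs the read columns, the index may cost more to maintain than it saves.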
John
April 4, 2017 at 3:18 am
Costs != times
Costs are unitless values used solely by the Query Optimiser to decide which plans are cheaper than others. They're not a measure of CPU usage, nor a measure of duration.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability