September 21, 2009 at 6:19 am
Hi All
I have a non-clustered composite index (on two non-key fields) in SQL 2005 and find myself needing to run UPDATE STATISTICS on this index more and more frequently to prevent queries from timing out. Each time, issuing an UPDATE STATISTICS command (with FULLSCAN specified) for this index resolves the performance problem.
At one point the query was slow again a mere 45 minutes after issuing an UPDATE STATISTICS command for this index. Upon investigation I found that only some 400 new records had been created in that 45-minute interval. The table has some 2 million records, so a mere 0.02% change in the table's data has, seemingly, caused the stats to age. I appreciate that this change does not qualify for an automatic update by SQL Server, but I find it hard to believe that the stats could be outdated and the query could perform so poorly, so soon. Especially as the execution plan is identical before and after the UPDATE STATISTICS command and the specified index is used in both cases.
Is this possible (for stats to be outdated and performance to diminish so soon) given the above, or is something else going on?
Also, sp_updatestats has a RESAMPLE option that uses the inherited sampling ratio. How can I tell whether a particular set of stats was created using FULLSCAN or whether the default sampling was applied?
My understanding is that when an index is created, the stats are sampled using a full scan. This implies that the RESAMPLE option should be used in any invocation of sp_updatestats; otherwise the default sampling (sampling just enough rows for the stats to be meaningful) is going to be applied to indexes as well. Can anybody confirm that my understanding is correct?
Many thanks
Preet
September 22, 2009 at 3:01 am
Try to automatically update the statistics - see BOL ---> Using Statistics to Improve Query Performance.
Go to the "Tables" node (for a database) in SSMS, expand the "Statistics" node, right-click a statistic and go to "Properties" ---> "Details". Scroll to the right and you can see info about "Rows" and "Rows Sampled". Details are not available for the auto-created statistics ("_WA_Sys").
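The same information is also available from T-SQL via DBCC SHOW_STATISTICS. A minimal sketch (the table and index names here are hypothetical):

```sql
-- Compare "Rows" with "Rows Sampled" in the header output.
-- If the two match, the last update was effectively a full scan.
DBCC SHOW_STATISTICS ('dbo.Customer', 'IX_Customer_PostCode_FirstName')
    WITH STAT_HEADER;

-- To force a full scan now and keep that ratio on later runs,
-- update with FULLSCAN and then use sp_updatestats with RESAMPLE:
UPDATE STATISTICS dbo.Customer IX_Customer_PostCode_FirstName WITH FULLSCAN;
EXEC sp_updatestats @resample = 'resample';
```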
September 22, 2009 at 11:32 am
I think you are more likely seeing an issue with cached execution plans not being the best plan for the query than an issue with statistics. What is a great execution plan for one set of criteria may be a poor choice for another set, based on the distribution of data. Updating statistics causes queries to be recompiled, so you would likely see better performance when the query is first run because it is getting a better execution plan for that criteria.
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
September 22, 2009 at 12:07 pm
I'd also be more inclined to believe that it's either a parameter sniffing problem, or a case of where you really need two different execution plans.
Have you tried running it with "with recompile"?
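For illustration, both forms of the recompile option look like this (a sketch only; the procedure, table, and parameter names are hypothetical):

```sql
-- Form 1: recompile the whole procedure on every execution.
CREATE PROCEDURE dbo.SearchCustomers
    @PostCode  varchar(10),
    @FirstName varchar(50)
WITH RECOMPILE
AS
BEGIN
    SELECT CustomerID, PostCode, FirstName
    FROM dbo.Customer
    WHERE PostCode LIKE @PostCode + '%'
      AND FirstName LIKE @FirstName + '%';
END;
GO

-- Form 2: recompile a single call without altering the procedure.
EXEC dbo.SearchCustomers @PostCode = 'B15', @FirstName = 'S' WITH RECOMPILE;
```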
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
September 22, 2009 at 12:27 pm
We had that issue with lots of stored procs on our report servers. Basically what is happening (from what I have seen) is that these stored procs have IF statements and, based on the conditions, execute one set of queries instead of another. This was happening in the SQL 2000 world. To solve it, we removed the IF statements and created different stored procs; the application checked the business rule and called the right stored proc. This reduced it. The old stored procs really needed different execution plans for each IF branch, so we made it easier by creating different stored procs.
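The pattern described above can be sketched as follows (all object names are hypothetical, and the branch could equally be chosen by the application rather than a wrapper proc):

```sql
-- Each branch becomes its own proc, so each gets (and keeps)
-- a cached plan suited to its own query.
CREATE PROCEDURE dbo.SearchByPostCode @PostCode varchar(10)
AS
    SELECT CustomerID FROM dbo.Customer WHERE PostCode = @PostCode;
GO
CREATE PROCEDURE dbo.SearchByFirstName @FirstName varchar(50)
AS
    SELECT CustomerID FROM dbo.Customer WHERE FirstName = @FirstName;
GO
-- A thin wrapper (or the application itself) picks the branch:
CREATE PROCEDURE dbo.SearchCustomers
    @PostCode varchar(10) = NULL, @FirstName varchar(50) = NULL
AS
    IF @PostCode IS NOT NULL
        EXEC dbo.SearchByPostCode @PostCode;
    ELSE
        EXEC dbo.SearchByFirstName @FirstName;
GO
```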
But that might not work in your case. 🙂
-Roy
September 22, 2009 at 8:14 pm
GSquared (9/22/2009)
I'd also be more inclined to believe that it's either a parameter sniffing problem, or a case of where you really need two different execution plans. Have you tried running it with "with recompile"?
Hmm, that's the same thing that I thought when I read the OP. I'm just a little behind you Gus ... 🙂
-- RBarryYoung, (302)375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
October 12, 2009 at 9:46 am
Spent some time investigating this (in test & live environments) and researching how execution plans are cached and what Parameter sniffing is, hence the delay.
The problem was, as suggested, parameter sniffing. At first I didn't think it was parameter sniffing, as the index was designed to be highly selective (a composite index on PostCode and FirstName). However, there were occasions where users would enter abbreviations for these fields, such as "B15" for postcode or "S" for first name. In the latter case, use of this index was not optimal, and a fresh execution plan would use a different index altogether.
The sproc runs many statements, using a series of IF ELSE checks to determine which to execute, so I added a query hint (OPTION (RECOMPILE)) to the end of the one that was problematic. Although this particular query is now recompiled each time, it has a minimal effect on resources and is certainly preferable to the timeouts and knock-on effects they caused.
The good thing about SQL 2005 is that the whole sproc does not need to be recompiled; it performs statement-level recompilation.
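The statement-level hint looks like this (a sketch only; the table, columns, and parameters are hypothetical stand-ins for the problematic statement):

```sql
-- Only this statement is recompiled on each execution; the rest of the
-- procedure keeps its cached plan (SQL 2005 statement-level recompile).
SELECT CustomerID, PostCode, FirstName
FROM dbo.Customer
WHERE PostCode LIKE @PostCode + '%'
  AND FirstName LIKE @FirstName + '%'
OPTION (RECOMPILE);
```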
Useful query that I found on my travels: http://blogs.techrepublic.com.com/datacenter/?p=270
Also, a variation of that query would be to use the DMV sys.dm_exec_cached_plans in conjunction with sys.dm_exec_query_plan, but it is not as informative as the first.
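A minimal sketch of that variation (the objtype filter is just an example; adjust to taste):

```sql
-- List cached plans together with their XML showplans.
-- sys.dm_exec_query_plan takes the plan_handle from the cache DMV.
SELECT cp.usecounts,
       cp.cacheobjtype,
       cp.objtype,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE cp.objtype = 'Proc';
```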
Useful MS references: http://msdn.microsoft.com/en-us/library/ms181055(SQL.90).aspx , http://technet.microsoft.com/en-gb/library/cc966425.aspx
Thanks to all who posted.
October 12, 2009 at 8:34 pm
Glad we could help, Preet, and thanks for the follow-up!
-- RBarryYoung
October 13, 2009 at 8:43 am
Preet_S (10/12/2009)
Spent some time investigating this (in test & live environments) and researching how execution plans are cached and what parameter sniffing is, hence the delay. The problem was, as suggested, parameter sniffing.
...
One more point: if the code goes through a series of IF statements and runs different queries, that proc is going to get lots of recompiles all on its own. You'd be better off breaking each individual statement out into separate procs and then calling them from the IF tree.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
November 9, 2009 at 10:35 am
RBarryYoung -
I like to give feedback to those who were good enough to spend the time to assist. It certainly helps to see whether the suggestions made resolved the problem.
Grant Fritchey -
SQL 2005 performs statement-level recompiles, and this seemed to be evident in some of the views that I used to see what had been cached. Are your comments (i.e. whole-of-sproc recompilation) more related to SQL 2000?
You are right, the sprocs that we have contain many IF statements and run queries based upon those checks. Thanks for the tip re: the IF tree.
November 9, 2009 at 11:49 am
Preet_S (11/9/2009)
...
Grant Fritchey -
SQL 2005 performs statement-level recompiles ... Are your comments (i.e. whole-of-sproc recompilation) more related to SQL 2000?
No, I was referring to statement recompiles. They're not as painful as the proc recompiles, but they're hardly free.
- Grant Fritchey
November 10, 2009 at 2:08 am
So the problem here was the usual sniffing/recompile thing. I just want to add that sometimes these problems are genuinely caused by the statistics histogram not containing any entries that cover new data added in a monotonically increasing fashion.
For example, if a table has an index on a column whose value always increments with insertions, the histogram will very quickly fail to cover the new range at all. This can lead the optimizer to estimate that only one row will be selected by a predicate that selects just the rows added since the last statistics update. On larger tables, the formula applied to auto-update statistics is quite conservative, so this scenario is actually pretty common.
2005 SP1 added a couple of trace flags which can be useful in certain cases of this type. See http://blogs.msdn.com/ianjo/archive/2006/04/24/582227.aspx for brief details. The new behaviour certainly isn't free, and can cause problems of its own, but it's worth knowing about.
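If I recall correctly, the flags in question are 2389 and 2390; a sketch of enabling them (do check the linked post before relying on the numbers):

```sql
-- Trace flags 2389/2390 (numbers from memory) make the optimizer account
-- for rows beyond the histogram's last step on ascending-key columns.
DBCC TRACEON (2389, -1);  -- columns branded as ascending
DBCC TRACEON (2390, -1);  -- columns not yet branded ascending
-- Verify which flags are active:
DBCC TRACESTATUS (2389, 2390);
```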
Paul
November 26, 2009 at 1:02 pm
Preet_S (11/9/2009)
RBarryYoung -I like to feedback to those that were good enough to spend the time to assist. It certainly helps to see if the suggestions we make resolved the problem.
...
I couldn't agree more Preet. 🙂
-- RBarryYoung