November 7, 2009 at 11:21 pm
Wow but that's a messy procedure. I'm not even going to start listing the things that raise red flags for me.
To answer the original question as best as I can without seeing multiple actual execution plans, I would just add OPTION (RECOMPILE) to the problem statement and see how you go. If that fixes the immediate issue, you can then dedicate your time to sorting out the many other problems that procedure has.
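In case it helps anyone following along, the hint goes on the statement itself. A minimal sketch (table, column, and parameter names are invented for illustration):

```sql
-- Hypothetical problem statement inside the procedure.
-- OPTION (RECOMPILE) makes the optimizer compile a fresh plan
-- on every execution, using the actual parameter values rather
-- than the values sniffed when the plan was first cached.
UPDATE d
SET    d.status = @new_status
FROM   dbo.col_detail AS d
WHERE  d.batch_id = @batch_id
OPTION (RECOMPILE);
```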
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 8, 2009 at 6:06 pm
Paul and guys ...
I cannot run that sp again ... long story short, I am not authorized. And honestly, it will generate a similar execution plan. The thing is, on Dev, we do not have the same number of rows.
My main question here is, or was: for the query plan I attached, how can I avoid or minimize the index updates? If you take the time to check the graphical representation, you will see an Index Update operator. That's the portion I want to fix.
By the way, I did not write that messy code 🙂 I am the DBA, but the recompile option is a good idea, I need to check ....
November 8, 2009 at 9:27 pm
Yes, please do try the RECOMPILE option - there's a good chance that the problem you get is down to parameter sniffing. Having up-to-date statistics will also help the optimizer choose a good plan - of course.
Unless circumstances are such that you are able to drop the nonclustered indexes before the update and re-create them afterward, there's possibly not much to be done about that aspect. Given the number of rows and indexes, a wide update plan is probably optimal.
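For reference, if that route ever becomes possible, the pattern would look something like this (index, table, and column names are placeholders, not from the actual procedure):

```sql
-- Drop the nonclustered index so the big update only has to
-- maintain the base table (and clustered index).
DROP INDEX IX_col_detail_status ON dbo.col_detail;

UPDATE dbo.col_detail
SET    status = 'PROCESSED'
WHERE  batch_id = @batch_id;

-- Recreate it afterward; the rebuild is a single scan-and-sort
-- rather than millions of individual index maintenance operations.
CREATE INDEX IX_col_detail_status ON dbo.col_detail (status);
```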
Part of the reason people would want to see an actual execution plan from a good run and one from a bad run, is to see what changed. It is more often a poor choice of plan for the read part of the update than the write part, but it is difficult to say without the details. Maybe a bad run chooses a non-parallel plan, or one with a loop join and a huge table spool...who knows? You can cut down the chances of a bad plan by ensuring that the tables concerned have up-to-date statistics, as I said, and useful indexes.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 9, 2009 at 9:22 am
I will go with the others and posit parameter sniffing/plan caching as the culprit here. OPTION (RECOMPILE) or dynamic SQL as a solution.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 9, 2009 at 3:02 pm
TheSQLGuru (11/9/2009)
I will go with the others and posit parameter sniffing/plan caching as the culprit here. OPTION (RECOMPILE) or dynamic SQL as a solution.
Good to see you back!
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 9, 2009 at 4:01 pm
Paul White (11/9/2009)
TheSQLGuru (11/9/2009)
I will go with the others and posit parameter sniffing/plan caching as the culprit here. OPTION (RECOMPILE) or dynamic SQL as a solution.
Good to see you back!
Good to be back! PASS was INCREDIBLY intense this year. Numerous public and private engagements and meetings, as well as quite a bit of time spent with Microsoft and the SQL Server Insider sessions. A plethora of very interesting and informative sessions too! Tack on an overnight travel period to return home Saturday morning (wife was on call, so I really had to get back) and it made for one very exhausted Guru. 🙂
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 9, 2009 at 5:16 pm
The recompile option will force SQL to forget the query plan and generate a new one each time the stored procedure runs. How can that improve the overall performance? Sorry, but I disagree.
Something I forgot to explain is that the stored procedure runs automatically via Tidal; I can't and should not run it manually. If the sp gets stuck or is still running after several hours (on normal days it takes 25 min or so), the developer takes the call and runs it from Management Studio.
So far no one has checked the query plan I attached, but I believe it is a statistics problem and a bad clustered index choice on the col_detail table.
November 9, 2009 at 5:58 pm
jocampo (11/9/2009)
The recompile option will force SQL to forget the query plan and generate a new one each time the stored procedure runs. How can that improve the overall performance? Sorry, but I disagree.
A plan which was optimal for one set of circumstances (parameter values) can be daft for a different set. RECOMPILE gives the optimizer a chance to produce a plan which is tailored to the specific parameter values on each call. The cost of recompiling such a simple statement is pretty darn small, especially compared to the cost of re-using a plan which is no longer appropriate.
jocampo (11/9/2009)
Something I forgot to explain is that the stored procedure runs automatically via Tidal; I can't and should not run it manually. If the sp gets stuck or is still running after several hours (on normal days it takes 25 min or so), the developer takes the call and runs it from Management Studio.
You have a test environment, yes?
jocampo (11/9/2009)
So far no one has checked the query plan I attached, but I believe it is a statistics problem and a bad clustered index choice on the col_detail table.
Wrong. I, for one, checked it - how else could I know what a mess it is? The plan looks perfectly reasonable, given the circumstances. The problem is that it is an estimated plan; the actual plan used can be different. I've explained all that already.
It is a good strong possibility that parameter sniffing (look it up!) is to blame here, but you are of course free to ignore any good advice offered for free. That might not be smart, but it is an option.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 9, 2009 at 6:27 pm
A plan which was optimal for one set of circumstances (parameter values) can be daft for a different set. RECOMPILE gives the optimizer a chance to produce a plan which is tailored to the specific parameter values on each call. The cost of recompiling such a simple statement is pretty darn small, especially compared to the cost of re-using a plan which is no longer appropriate.
It is not a simple statement, not in my opinion. If we are not dropping or altering indexes, I do not see the point of the RECOMPILE. I can still test that, but I do not believe it is the best approach. I'm open to the idea, I just disagree. I believe it is better to address the root cause and see why the query takes so long on some days and not others.
You have a test environment, yes?
Yes, we do, but we have never had that problem there. The number of rows and the workload are not the same. It is not the ideal place to reproduce the problem.
Wrong. I for one checked it - how else could I know what a mess it is? The plan looks perfectly reasonable, given the circumstances. The problem is that it is an estimated plan. The actual plan used can be different. I've explained all that already.
It is a good strong possibility that parameter sniffing (look it up!) is to blame here, but you are of course free to ignore any good advice offered for free. That might not be smart, but it is an option.
Maybe I did not explain myself correctly? I did not request the actual execution plan via Management Studio; I can't, since the sp runs via Tidal. The one I attached is from the MS-SQL cache, captured while it was running on Dev. Everything inside the MS-SQL cache should and must be actual execution plans, or not? If it is there, it means it ran or is running now. Or ... did someone change how MS-SQL works this year? :cool: .... The important point here is that that execution plan, as I said before, is NOT from production; it is from Dev ...
November 9, 2009 at 7:12 pm
Boy, where to start with this:
1)
The recompile option will force SQL to forget about the query plan and generates a new one each time the store procedure is gonna run. How that can improve the overall performance? Sorry, but I disagree.
You can disagree all you want, but you will be absolutely and indisputably wrong to do so. 🙂 Let me 'splain how this works: You have a billion-row table where 50% of the rows share a single value for some field, call it field1. Every other value of field1 is unique. field1 is indexed with a nonclustered index. You have a SELECT statement with a WHERE clause specifying field1 = oneoftheuniquevalues. So on this FIRST EXECUTION OF THE QUERY the optimizer correctly picks an index seek and bookmark lookup for its query plan. Badda-bing, badda-boom, done in ~10 IOs and a few milliseconds. That plan is CACHED. The next execution of the same SELECT has a WHERE clause with field1 = the50%ofrowsvalue. The cached plan is used. OOOPPPPSSIIIEEE!!! Index seek, bookmark lookup for 500 MILLION ROWS . . . hours and a kaJILLION IOs later you get your results. Get it now?? Yes, you will pay the compile piper for each execution and many of your queries will run a teensy bit slower due to having to compile the plan each time. But you avoid the DISASTROUS multi-hour execution scenarios. This exact same scenario is played out on these and other forums EVERY SINGLE DAY. :w00t:
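To make the scenario concrete, here is a sketch with invented names (dbo.big_table, field1, and the procedure are all hypothetical):

```sql
-- Skewed data: half the rows share one value of field1,
-- every other value is unique.
CREATE TABLE dbo.big_table
(
    id     bigint IDENTITY(1,1) PRIMARY KEY,
    field1 int NOT NULL
);
CREATE INDEX IX_big_table_field1 ON dbo.big_table (field1);
GO

CREATE PROCEDURE dbo.get_rows
    @value int
AS
    -- The first call with a selective @value caches a seek + lookup plan.
    -- A later call with the 50%-of-rows value reuses that plan and
    -- performs the lookup hundreds of millions of times.
    SELECT id, field1
    FROM   dbo.big_table
    WHERE  field1 = @value;
GO
```

Adding OPTION (RECOMPILE) to the SELECT inside such a procedure is what lets each call get a plan suited to its own @value.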
2)
Yes, we do, but we have never had that problem there. Amount of rows and workload is not the same. Not the ideal place to reproduce the problem.
then the answer to Paul's original question about your having a test environment was "NO, I do not!" 😉 Don't feel bad, almost no one truly does - which is a damn shame.
3) We cannot advise you on query execution plan problems if you give us plans from a DIFFERENT environment than the one that has the problem!! If you run a query against a table with 10M rows in production and the same query against a simple dev table (on completely different hardware) with only 1000 rows, how can that be meaningful? See number 2 above. :hehe:
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 9, 2009 at 7:26 pm
You can disagree all you want, but you will be absolutely and indisputably wrong to do so. 🙂 Let me 'splain how this works: You have a billion-row table where 50% of the rows share a single value for some field, call it field1. Every other value of field1 is unique. field1 is indexed with a nonclustered index. You have a SELECT statement with a WHERE clause specifying field1 = oneoftheuniquevalues. So on this FIRST EXECUTION OF THE QUERY the optimizer correctly picks an index seek and bookmark lookup for its query plan. Badda-bing, badda-boom, done in ~10 IOs and a few milliseconds. That plan is CACHED. The next execution of the same SELECT has a WHERE clause with field1 = the50%ofrowsvalue. The cached plan is used. OOOPPPPSSIIIEEE!!! Index seek, bookmark lookup for 500 MILLION ROWS . . . hours and a kaJILLION IOs later you get your results. Get it now?? Yes, you will pay the compile piper for each execution and many of your queries will run a teensy bit slower due to having to compile the plan each time. But you avoid the DISASTROUS multi-hour execution scenarios. This exact same scenario is played out on these and other forums EVERY SINGLE DAY. :w00t:
If the statistics problem is because of a bad clustered index choice, then instead of doing recompiles every time, is it not better to avoid the problem by changing the current index to a different one? That is my point.
then the answer to Paul's original question about your having a test environment was "NO, I do not!" 😉 Don't feel bad, almost no one truly does - which is a damn shame.
Why do you want me to say we don't have a Dev environment when we do have one? And by the way, I don't feel bad, I'm the DBA, not the developer 😉 ... and YES, WE DO HAVE A TEST environment. The app team tests their application there with sample data. It is truly a development environment. You will never have the same workload on a non-production box. If you have two, you are a magician, and instead of a Dev environment you have two exact production boxes.
November 9, 2009 at 8:46 pm
If the statistics problem is because of a bad clustered index choice, then instead of doing recompiles every time, is it not better to avoid the problem by changing the current index to a different one? That is my point.
In my entire spiel about parameter sniffing, did you see any mention at all of statistics?? And other than saying that field1 had a nonclustered index on it, I didn't get into any indexing details or comparisons either. PLEASE do yourself a favor and read everything you can find (and there is PLENTY!!) about parameter sniffing and plan cache issues until you really grok what I said in my paragraph. It is CRITICAL to understanding why some queries simply MUST be RECOMPILED or done with dynamic SQL if you want to avoid your exact issue - a query that runs fine some or most of the time and goes out to lunch at other times.
Also, I don't recall saying you don't have a DEV environment. I said you don't have a TEST environment, which from my perspective is an environment that mimics production VERY closely. This allows proper evaluation not only of code correctness, but also of its scalability, concurrency issues, performance troubleshooting, etc. None of the latter can be done on a DEV system, which universally means low-quality (these days mostly virtualized) hardware and low data volumes compared to production.
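One sketch of the dynamic SQL variant mentioned above (all object names are invented; embedding the literal means each distinct value compiles its own plan, at the cost of plan cache bloat, and it would need careful quoting for string inputs):

```sql
DECLARE @value int = 42;  -- hypothetical parameter
DECLARE @sql nvarchar(max) =
    N'SELECT id, field1 FROM dbo.big_table WHERE field1 = '
    + CAST(@value AS nvarchar(20)) + N';';

-- The literal is part of the statement text, so the optimizer
-- builds a plan for this specific value rather than reusing a
-- plan sniffed from some earlier parameter value.
EXEC (@sql);
```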
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
November 9, 2009 at 9:03 pm
jocampo (11/9/2009)
Everything inside the MS-SQL cache should and must be actual execution plans, or not? If it is there, it means it ran or is running now. Or ... did someone change how MS-SQL works this year? :cool:
Your understanding is rather less complete or correct than you might imagine.
Read and thoroughly digest This TechNet White Paper before making such bold statements. Yes, all of it.
For the purpose of the current discussion, I would encourage you to pay particular attention to the differences between query plans and execution contexts, and also to look at the section on parameter sniffing and statement-level recompilation. You might be surprised.
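As a concrete starting point, the cache can be inspected with the plan cache DMVs. Note that what comes back is the compiled query plan, with no run-time row counts, which is exactly the query-plan-versus-actual-plan distinction at issue here:

```sql
-- One row per cached compiled plan, with its reuse count.
SELECT cp.objtype,
       cp.usecounts,
       qp.query_plan   -- the compiled plan, not an 'actual' plan
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE  cp.cacheobjtype = 'Compiled Plan';
```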
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 9, 2009 at 10:00 pm
TheSQLGuru (11/9/2009)
If the statistics problem is because of a bad clustered index choice, then instead of doing recompiles every time, is it not better to avoid the problem by changing the current index to a different one? That is my point.
In my entire spiel about parameter sniffing, did you see any mention at all of statistics?? And other than saying that field1 had a nonclustered index on it, I didn't get into any indexing details or comparisons either. PLEASE do yourself a favor and read everything you can find (and there is PLENTY!!) about parameter sniffing and plan cache issues until you really grok what I said in my paragraph. It is CRITICAL to understanding why some queries simply MUST be RECOMPILED or done with dynamic SQL if you want to avoid your exact issue - a query that runs fine some or most of the time and goes out to lunch at other times.
Also, I don't recall saying you don't have a DEV environment. I said you don't have a TEST environment, which from my perspective is an environment that mimics production VERY closely. This allows proper evaluation not only of code correctness, but also of its scalability, concurrency issues, performance troubleshooting, etc. None of the latter can be done on a DEV system, which universally means low-quality (these days mostly virtualized) hardware and low data volumes compared to production.
We have two environments besides production.
I really appreciate the time you and others initially spent replying and supposedly ... helping me ... but I will consider my own thread closed as of now.
I will not use my time and energy boosting others' egos or engaging in discussions that do not help with my issue at all.
Thanks,
PS: I have never considered anyone who calls himself a guru a really knowledgeable person ... you really need more than that to be one ...
November 10, 2009 at 1:56 am
jocampo (11/9/2009)
I will not use my time and energy boosting others egos or engaging on discussions that do not help on my issue at all.
Leading horses to water springs to mind 😀
jocampo (11/9/2009)
PS: I have never considered anyone who called himself a guru a really knowledgeable person ... you really need more than that to be one ...
Perhaps his MVP tag, superior knowledge and experience would help you change your mind...?
If you knew his real name and background, you might be keener to learn from him.
I can only hope that your attitude matures over time.
Good luck, sincerely.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
Viewing 15 posts - 16 through 30 (of 33 total)