March 8, 2009 at 7:47 pm
We have a stored procedure that is periodically causing performance issues. When the procedure is flagged for recompile, it takes 3-4 minutes to compile. In Profiler I see a pattern with the EventClass and EventSubClass of:
SP:CacheHit --- 1 - Execution Context Hit
SP:CacheHit --- 2 - Compplan Hit
SP:Recompile --- 1 - Schema changed
Once this Compplan Hit subclass happens, the recompile takes forever. I also see CPU utilization on the server hit 40-50 while recompiling. This server only has one processor, but it is hyperthreaded. Do you think I'm suffering from parallelism? If so, I know I can add a MAXDOP hint to the query, but does that help serialize the recompile, or will I need to set MAXDOP at the server level? I should also mention that the procedure is join-heavy in a big way. There are also some UDFs thrown in for fun.
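(For reference, `OPTION (MAXDOP n)` is a per-statement hint, so it has to be attached to the individual statements inside the procedure, while the instance-wide cap is set through sp_configure. A sketch of both — the table names here are made up:)

```sql
-- Per-statement hint: applies only to the statement it is attached to
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o                 -- hypothetical tables
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
OPTION (MAXDOP 1);

-- Instance-wide cap: affects every query on the server
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```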
March 9, 2009 at 6:08 am
We once had a query that was joining 80 tables. It took about two minutes to recompile and, because of the nature of the data, it was recompiling about every 12-15 minutes. During the recompile, all the processes that needed that query were hung. The only way to reduce recompile time is to reduce the complexity of the procedure.
According to our local systems architect, hyper-threading is not something you should be using on a SQL Server instance. I don't know if that's your problem or not, but I thought I'd pass the recommendation along.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 9, 2009 at 6:23 am
Try it in some other environments (dev, stage, etc.) and see what it does. He's right: hyperthreading is a no-no for SQL Server, although I'm not sure that simply compiling a stored procedure would be affected by it... it certainly might be. I don't think a hyperthreaded box has been offered by, say, Dell for a couple of years now. At any rate, see what it does elsewhere to start whittling away at the problem; that might tell you whether it's the environment.
March 9, 2009 at 7:40 am
michael ham (3/8/2009)
we have a stored procedure that is periodically causing performance issues. When the procedure is flagged for recompile, it takes 3-4 minutes to compile. In Profiler I see a pattern with the EventClass and EventSubClass of:
SP:CacheHit --- 1 - Execution Context Hit
SP:CacheHit --- 2 - Compplan Hit
SP:Recompile --- 1 - Schema changed
Once this Compplan Hit subclass happens, the recompile takes forever. I also see CPU utilization on the server hit 40-50 while recompiling. This server only has one processor, but it is hyperthreaded. Do you think I'm suffering from parallelism? If so, I know I can add a MAXDOP hint to the query, but does that help serialize the recompile, or will I need to set MAXDOP at the server level? I should also mention that the procedure is join-heavy in a big way. There are also some UDFs thrown in for fun.
It's recompiling because the schema has changed. This may be due to declaring temp tables partway through the procedure. At least, that's where I've run into this issue. See if moving all temp table declarations to the beginning of the proc stops the recompilations. At least then you'll only have to take the compilation hit once.
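A rough sketch of the pattern, with made-up object names — all temp table DDL first, data access afterwards:

```sql
CREATE PROCEDURE dbo.usp_GetOrderSummary   -- hypothetical procedure
AS
BEGIN
    -- Create every temp table up front, before any data access,
    -- so no schema change is triggered partway through execution
    CREATE TABLE #Summary
    (
        OrderID int   NOT NULL,
        Total   money NOT NULL
    );

    INSERT INTO #Summary (OrderID, Total)
    SELECT OrderID, SUM(Amount)
    FROM dbo.OrderLines                    -- hypothetical table
    GROUP BY OrderID;

    SELECT OrderID, Total
    FROM #Summary;
END;
```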
March 9, 2009 at 7:54 am
80 tables! Wow. I've known about the hyper-threading issues for a while, but never had an installation of this product really suffer from it. I think that's the next route I'm going to take. Thanks for the input.
March 9, 2009 at 7:59 am
Well I don't get to pick how this product is deployed, I've got 3000 installations of it worldwide. IBM is still selling smaller servers with these processors. I've been going back through code revisions in source (this is not the first revision of this procedure in production) and there have been some undocumented changes. I'm going to start picking those apart and see what's causing this and work from there. Thanks for the input.
March 9, 2009 at 8:07 am
Dean Cochrane (3/9/2009)
It's recompiling because the schema has changed. This may be due to declaring temp tables partway through the procedure. At least, that's where I've run into this issue. See if moving all temp table declarations to the beginning of the proc stops the recompilations. At least then you'll only have to take the compilation hit once.
Well, I know it will recompile based on the table activity. I can't understand the 3-minute recompile, though. I did catch other recompile events in my trace, but only when the sequence above is encountered do we have problems. We are using a temp table, and I think we can safely move to a table variable, so I might try that and see if it helps. Thanks.
March 9, 2009 at 8:21 am
michael ham (3/9/2009)
Dean Cochrane (3/9/2009)
It's recompiling because the schema has changed. This may be due to declaring temp tables partway through the procedure. At least, that's where I've run into this issue. See if moving all temp table declarations to the beginning of the proc stops the recompilations. At least then you'll only have to take the compilation hit once.
Well, I know it will recompile based on the table activity. I can't understand the 3-minute recompile, though. I did catch other recompile events in my trace, but only when the sequence above is encountered do we have problems. We are using a temp table, and I think we can safely move to a table variable, so I might try that and see if it helps. Thanks.
Be careful with table variables; there are some limitations: you cannot index them (other than via constraints), and they do not keep statistics: http://support.microsoft.com/default.aspx?scid=kb;en-us;305977&Product=sql2k
I don't know if your temp table was indexed at all, but this could lead to severe performance issues.
I thought table variables were memory-only, but I recently found this article, and it proved me wrong. Just saying it's good to know.
Cheers,
J-F
March 9, 2009 at 8:32 am
You do need to be very careful about using table variables. But if you're dealing with very small data sets that would be scanned anyway, regardless of statistics, then table variables can help you avoid the recompiles associated with temp tables. You really do need to have very small sets of data, though; by that I mean fewer than 100 rows at the extreme top end.
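A minimal sketch of the substitution (names are made up); note that the only indexing available on a table variable is through constraints declared inline:

```sql
DECLARE @Summary TABLE
(
    OrderID int   NOT NULL PRIMARY KEY,  -- index only via a constraint
    Total   money NOT NULL
);

-- No statistics are kept on @Summary, so the optimizer assumes a tiny
-- row count -- acceptable only when the set really is tiny
INSERT INTO @Summary (OrderID, Total)
SELECT OrderID, SUM(Amount)
FROM dbo.OrderLines                      -- hypothetical table
GROUP BY OrderID;
```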
March 9, 2009 at 8:37 am
It's likely that we'd stay under 100 rows 95% of the time. I've just been doing some side-by-side tests against a previous revision of the procedure and do not see the same recompile performance problem. The joins have changed quite a bit. I'm curious now and will post the results of my findings.
March 9, 2009 at 9:40 am
OK, I found it. We added a few columns to the results and did a self join on a view three times to get the data. But that's just the entrance to the rabbit hole. The view is built on another view, and the column joined in the self joins is down in the other view (which is probably built on other views). Blech. Thanks, everyone, for your comments.
March 9, 2009 at 9:46 am
See, you might get to 80 tables too.
Glad you found it.
March 9, 2009 at 10:48 am
michael ham (3/9/2009)
OK, I found it. We added a few columns to the results and did a self join on a view three times to get the data. But that's just the entrance to the rabbit hole. The view is built on another view, and the column joined in the self joins is down in the other view (which is probably built on other views). Blech. Thanks, everyone, for your comments.
Urg... one reason I hate views. It reminds me of Microsoft Access. I had to manage an application where they had built query on top of query on top of query, in some cases going down 10 levels; then they complained to me about why their application was slow, grrr. Is there any way in SQL Server to force it so developers can't keep referencing a view within a view within a view?
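As far as I know there's no setting that forbids it outright, but you can at least detect the offenders. A sketch against SQL Server 2008's sys.sql_expression_dependencies (on 2005, sys.sql_dependencies plays a similar, if less reliable, role):

```sql
-- List views that reference other views
SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) + N'.'
        + OBJECT_NAME(d.referencing_id)  AS referencing_view,
    d.referenced_schema_name + N'.'
        + d.referenced_entity_name       AS referenced_view
FROM sys.sql_expression_dependencies AS d
JOIN sys.views AS outer_v ON outer_v.object_id = d.referencing_id
JOIN sys.views AS inner_v ON inner_v.object_id = d.referenced_id
ORDER BY referencing_view, referenced_view;
```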
Mohit.
Mohit K. Gupta, MCITP: Database Administrator (2005), My Blog, Twitter: @SQLCAN.
Microsoft FTE - SQL Server PFE
* Some time its the search that counts, not the finding...
* I didn't think so, but if I was wrong, I was wrong. I'd rather do something, and make a mistake than be frightened and be doing nothing. :smooooth:
March 9, 2009 at 11:05 am
There's nothing wrong with Views. It's how you use them that matters.
-- RBarryYoung, (302) 375-0451, blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
March 9, 2009 at 12:01 pm
Aye 😉 ... it just reminds me of my horror days from MS Access programming, heh.
I have yet to have a developer here try to build a view on top of a view... well, I had one developer, and I was able to talk him out of it o.O...
Mohit.