August 13, 2008 at 2:31 pm
Ok, I have some stored procedures that occasionally take a long time to run. It's not something I've been able to duplicate, but I've got other reports that track how long things take to process, and I'm seeing that a certain group of stored procedures that access similar data seem to have this problem sometimes.
The stored procedures in question all join several tables, and every one of those joins has the nolock hint attached. The only thing I can think of is that there are also two joins to table-valued user-defined functions. Those UDFs don't use any nolock table hints, and the tables they access could easily be in use and causing me to wait.
So is there a way I can make the UDFs not lock the tables they read from? Or do I need to bite the bullet and write my own UDFs that use nolock?
August 13, 2008 at 3:26 pm
First off, keep in mind that NOLOCK is equivalent to the READ UNCOMMITTED isolation level, so you are reading dirty data. There's a bunch of problems that can come from this; I'm not going to go into them here, but be warned. You may already be OK with that, but it's worth mentioning.
You can get the same effect as using the NOLOCK hint across the entire procedure by setting the isolation level to READ UNCOMMITTED before the query runs. Also, since you're on SQL Server 2005, you may want to look into the new optimistic isolation levels, READ COMMITTED SNAPSHOT in particular.
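Something along these lines at the top of the procedure would give every statement in it the same effect as NOLOCK (the procedure, table, column, and database names below are just placeholders, not from your system):

CREATE PROCEDURE dbo.usp_SomeReport
AS
BEGIN
    -- every statement below now reads uncommitted (dirty) data,
    -- the same effect as putting NOLOCK on each table it touches
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    SELECT t.SomeColumn
    FROM dbo.SomeTable AS t;
END;
GO

READ COMMITTED SNAPSHOT, on the other hand, is switched on for the whole database rather than per query:

-- readers see the last committed version of each row instead of blocking on writers
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;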
August 14, 2008 at 5:40 am
Rather than changing the locking mechanisms everywhere, I'd strongly suggest looking at the execution plans of the query and the indexes on your tables. Resolve the problems at the root, not through potentially dangerous manipulation of the system through locking hints.
Also, are the table-valued functions inline or multi-statement? Multi-statement functions don't carry statistics, which means the optimizer always treats them as one-row tables. If they actually return multiple rows, this can lead to profound performance issues, blocking, and deadlocks. Inline UDFs usually resolve in a manner similar to views, and you can even see this in the execution plans.
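If it helps, here's a rough sketch of the difference between the two (the function, table, and column names are invented for illustration, not taken from your system):

-- Inline TVF: a single SELECT, expanded into the calling query much like a view
CREATE FUNCTION dbo.fn_OrdersInline (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderTotal
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
);
GO

-- Multi-statement TVF: populates a table variable, carries no statistics,
-- so the optimizer always estimates it at one row
CREATE FUNCTION dbo.fn_OrdersMulti (@CustomerID int)
RETURNS @Result TABLE (OrderID int, OrderTotal money)
AS
BEGIN
    INSERT INTO @Result (OrderID, OrderTotal)
    SELECT OrderID, OrderTotal
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;

    RETURN;
END;
GO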
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 15, 2008 at 9:31 am
I realize the data will be dirty, and it's ok since the data is merely being used for a report to the user, not in any actual transactions or calculations.
The table valued functions return only one row.
I can't think of it being anything but locking issues, since the same procedure will run one time and take 15 seconds, and then the next time it runs it takes 180 seconds.
August 15, 2008 at 11:05 am
In addition to following the recommendations from Grant (which nail the problem right at its core), I would suggest you check whether CROSS APPLY or OUTER APPLY helps you there instead of joining to the function.
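Something along these lines is what I mean (the table, function, and column names here are only placeholders):

-- CROSS APPLY returns only the rows where the function produces output
SELECT p.ParentID, f.SomeValue
FROM dbo.ParentTable AS p
CROSS APPLY dbo.fn_SomeFunction(p.ParentID) AS f;

-- OUTER APPLY keeps every parent row even when the function returns nothing
SELECT p.ParentID, f.SomeValue
FROM dbo.ParentTable AS p
OUTER APPLY dbo.fn_SomeFunction(p.ParentID) AS f;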
* Noel
August 18, 2008 at 9:08 am
I definitely appreciate Grant's commentary on the dangers of these table hints, but I guess I don't see how they could be an issue in my situation. Dirty data is not a problem since I'm not using this data for anything except a report to the user.
The functions in question don't use nolock, because I know they are also used in important transactions, so nolock hints wouldn't be appropriate there; the data has to be accurate most of the time, just not in my case. Hence I wouldn't edit the functions themselves.
This is the way I'm joining the table-valued functions in my procedure; maybe I'm doing something wrong here?
OUTER APPLY
dbo.fn_Tax_Breakout(Dealworksheet.worksheet_ID) tax_break
OUTER APPLY
dbo.fn_ContractFee_Breakout(Dealworksheet.worksheet_ID) contractfee_break
I'm no SQL pro, but when a procedure runs once and takes 5 seconds, and then later on it's passed parameters that should return an even smaller result set and it takes 180 seconds, that doesn't sound like an index problem to me.
August 18, 2008 at 9:14 am
I'd have to see an execution plan for the good execution and one from the bad execution to even make a guess as to what's going on.
As to the locking, in addition to uncommitted data, you can actually get duplicate rows or have rows come up missing too. That might be important in a report.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 18, 2008 at 9:17 am
Yeah, I'd love to capture a bad execution; the problem is I can't duplicate it. It just happens to users once in a while. For instance, the procedures in question were run a total of 285 times this past weekend and all of them executed in six seconds or less, but last Thursday one took 180 seconds.
Interesting point about some rows not being returned because of a nolock hint; I didn't know that could happen.
August 19, 2008 at 6:57 pm
Do the procedures in question take input parameters that can vary widely in value, such as a date range? And if so, do the "bad execution" cases happen to have atypical values for these parameters?
If so, you may be running into an inefficient cached execution plan caused by "parameter sniffing". I've experienced it in situations like yours, where the "bad" cases should cause the proc to consider fewer rows but the runtime is far, far worse than the "good" case that considers more rows. The "bad" case was simply atypical: a smaller date range that was infrequently used in a reporting proc where the majority of runs had a larger date range.
The WITH RECOMPILE option on the proc may be the simplest way to go; give it a try and see if it makes these bad cases go away. Unless the cost of the more frequent recompilations is critical to your setup, this is an easy fix if it is indeed your problem.
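Roughly like this (the procedure, table, and column names are just placeholders):

-- recompile the whole procedure on every call so the plan is built
-- for the actual parameter values passed in
CREATE PROCEDURE dbo.usp_SalesReport
    @StartDate datetime,
    @EndDate   datetime
WITH RECOMPILE
AS
BEGIN
    SELECT OrderID, OrderTotal
    FROM dbo.Orders
    WHERE OrderDate BETWEEN @StartDate AND @EndDate;
END;
GO

On 2005 you can also recompile just the problem statement instead of the whole proc:

SELECT OrderID, OrderTotal
FROM dbo.Orders
WHERE OrderDate BETWEEN @StartDate AND @EndDate
OPTION (RECOMPILE);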
Regards,
Jacob