November 7, 2012 at 11:35 am
I have the transaction isolation level set to READ UNCOMMITTED, but I am not sure which of the two versions below actually takes effect, even though both appear to be syntactically correct?
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[my_sproc]
    @ID INT
AS
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
Or this one
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
ALTER PROCEDURE [dbo].[my_sproc]
@ID INT
November 7, 2012 at 12:05 pm
The first one will do what you're looking for. Isolation levels are set for the connection, not for the object.
HOWEVER, I have to warn you that setting isolation to read uncommitted is almost always a really, really, really bad idea. There are very, very few situations where it is the correct thing to do, and almost none of them involve repeat-run code like a stored procedure.
What read uncommitted (and the NOLOCK query hint) does is allow you to read wrong data from the tables involved: data that has changed, data that is being deleted, even (in some cases) the same data more than once. The only time to ever use any of these is when the accuracy of the data doesn't matter at all.
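To illustrate the kind of thing that can happen (the table, column, and values here are just made up for the example), try this with two sessions:
-- Session 1: change a row inside a transaction, but don't commit yet.
BEGIN TRANSACTION;
UPDATE dbo.Accounts              -- dbo.Accounts and Balance are hypothetical names
SET    Balance = Balance + 1000000
WHERE  AccountID = 42;
-- ...transaction is still open...
-- Session 2: read the same row under READ UNCOMMITTED.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT AccountID, Balance
FROM   dbo.Accounts
WHERE  AccountID = 42;           -- sees the inflated, uncommitted balance
-- Session 1: throw the change away.
ROLLBACK TRANSACTION;            -- session 2 has already reported data that never officially existed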
Make sure the business units or customers who will be using the application this stored procedure is part of understand that the data it presents to them may be wrong, potentially by an arbitrarily large amount. If they are fine with that, go ahead and use Read Uncommitted. If they ask whether you're crazy when you tell them you want the database to give them wrong data, or make threatening noises about the potentially short future of your employment, tell them you were just joking and hope they buy it.
It's your decision, of course; you know what your business users/customers need better than I do. But do take the potentially huge data flaws it will generate into account.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
November 7, 2012 at 12:12 pm
GSquared (11/7/2012)
Thanks, and I agree. What would be a better option for the isolation level? My goal is to avoid blocking.
November 7, 2012 at 12:21 pm
It depends on what end result you want.
Read Committed Snapshot Isolation will help prevent blocking, but allows "old data" to be seen by concurrent transactions. Doesn't have the "read the same row multiple times" issues that NoLock/Uncommitted can have, but reading pre-update data can be a problem in some situations. Check with business users if that's okay.
"If Bob runs a search on customers, while Joe is adding a customer and Sally is deleting one, Bob will still see the deleted one and won't see the added one, till those finish and he runs the search again. Is that acceptable?"
Usually, it's just fine. It largely parallels sequential blocking anyway. But be sure.
On the other hand, if the main load on a table is just Select statements, with infrequent data changes, Selects take Shared Locks anyway and won't block each other. So the default Read Committed works just fine for that. Selects at that level can be blocked by Insert/Update/Delete operations that lock the scope the Select is supposed to run in, but not by other Selects. That's default behavior.
Tables with a heavy load of concurrent Insert/Update/Delete operations can often get a huge speed increase by using Read Committed Snapshot Isolation. Or even just Snapshot Isolation, but the Read Committed version has some important benefits to it. (Bing/Google those, and you'll find the differences easily enough.)
The main drawback is it can add substantially to the load on tempdb. Make sure before you move ahead on RCSI that you're not going to blow up tempdb. 99% of the time, it's not a problem unless your server is already chronically low on drive space.
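If you do go the RCSI route, remember it's a database-level setting rather than something you put in the procedure. Roughly like this (the database name is just a placeholder):
-- Enable read committed snapshot isolation for the whole database.
-- WITH ROLLBACK IMMEDIATE kicks out open transactions so the change can take effect;
-- run it in a maintenance window.
ALTER DATABASE [MyAppDb] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
-- Confirm the setting (1 = RCSI is on).
SELECT name, is_read_committed_snapshot_on
FROM   sys.databases
WHERE  name = 'MyAppDb';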
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
November 7, 2012 at 12:43 pm
GSquared (11/7/2012)
If I don't use any snapshot option, shouldn't it be at the read committed isolation level anyway?
November 7, 2012 at 12:47 pm
sqldba_newbie (11/7/2012)
If I don't use any snapshot option, shouldn't it be at the read committed isolation level anyway?
Not sure what you're asking.
Read Committed is the default isolation level for SQL Server.
If you set a database to use Read Committed Snapshot Isolation, then that becomes the default. It modifies Read Committed so that it uses snapshot technology as part of the isolation mechanism. That makes it so Insert/Update/Delete operations won't block Selects on the same scope of data. Same behavior as Snapshot Isolation, for the most part, but as the default instead of on a connection-by-connection basis.
If you don't set an isolation level, it will use Read Committed, unless the database has been set for RCSI, and then it will use that instead.
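If you want to check what a given session actually ended up with, here's a quick sketch of a couple of ways:
-- Shows the effective isolation level for the current session, among other settings.
DBCC USEROPTIONS;
-- Or query it for any session (2 = read committed; under RCSI it still shows as
-- read committed, since RCSI is a database-level variation of that level).
SELECT session_id, transaction_isolation_level
FROM   sys.dm_exec_sessions
WHERE  session_id = @@SPID;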
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
November 7, 2012 at 1:17 pm
GSquared (11/7/2012)
Actually, I did some quick analysis of our environment and I don't know whether I should change the isolation level. We have file loads that occur at night (pretty much during off-peak hours), and the app itself makes very few changes during the day, I would say just 10% of the overall activity. Do you still think we need to change it? Any thoughts?
November 7, 2012 at 1:21 pm
Standard, normal, boring Read Committed usually works just fine for that kind of situation.
If you find that the few in-day updates are causing more blocks than you like, switch to RCSI. Not to Read Uncommitted.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
November 7, 2012 at 1:42 pm
GSquared (11/7/2012)
The locks are held for longer durations; how do I handle that?
November 7, 2012 at 11:49 pm
sqldba_newbie (11/7/2012)
The locks are held for longer durations; how do I handle that?
Longer duration than what?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 8, 2012 at 7:47 am
GilaMonster (11/7/2012)
The application is running slow, and I do see that the calls made by the web server are being blocked.
November 11, 2012 at 4:01 pm
The right way to fix this is to figure out why you have long-running transactions that are blocking reads. In the SQL Server world, we like to see short, quick transactions. If you are having this issue without a high level of concurrency, look into your code to see why your transactions hold locks for so long. If that doesn't solve it, I agree that read committed snapshot is the right approach. DO NOT use READ UNCOMMITTED.
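As a rough starting point for finding the blockers (adjust for your environment), something like this lists who is blocking whom right now:
-- Current requests that are waiting on someone else, with the session blocking them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS current_sql
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.blocking_session_id <> 0;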