June 17, 2009 at 7:36 am
Hi All,
I have a DB that becomes heavily loaded and locked when a lot of users are trying to insert or update records.
Every stored procedure I use for updating and inserting batches of data uses BEGIN TRANSACTION, and one of my stored procedures locks and runs for too long, so my DB blocks other users.
If I use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED every time I begin a transaction, will it solve my blocking problem?
Can SET TRANSACTION ISOLATION LEVEL be used after I begin a transaction, or must I use it before BEGIN TRANSACTION? And after BEGIN TRANSACTION, can I still change the isolation level?
Please help...
jason
June 17, 2009 at 7:48 am
Read uncommitted only affects reads, that is, SELECTs. Data changes will always take locks. Also, be aware of what read uncommitted actually does. See - http://sqlblog.com/blogs/andrew_kelly/archive/2009/04/10/how-dirty-are-your-reads.aspx
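To illustrate (a rough sketch, using a hypothetical table dbo.Orders):
-- Session 1: modifies a row inside an open transaction
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1; -- exclusive lock held until commit/rollback
-- Session 2: the SELECT is not blocked, but it reads the uncommitted ('dirty') value
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Status FROM dbo.Orders WHERE OrderID = 1;
-- An UPDATE from session 2 would still block: writes take exclusive locks regardless of isolation level
-- Session 1: if this rolls back, session 2 has read a value that never logically existed
ROLLBACK;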
I would suggest having a look at the indexes on those tables and at the queries; make sure that they are as optimal as possible. Queries that run quickly don't hold locks for long, and that may reduce blocking.
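For example (the table, column and index names here are hypothetical): an update whose WHERE clause has no supporting index has to scan, and it holds locks while it does.
-- Without an index on CustomerID, this update scans the table and locks far more rows than it changes
UPDATE dbo.Orders SET Status = 'Shipped' WHERE CustomerID = 42;
-- A supporting index lets it seek straight to the affected rows, shortening how long locks are held
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);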
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 17, 2009 at 9:30 am
Gail is 100% correct.
You could look into using snapshot isolation as a way to reduce locking issues. It increases the load on tempdb and, to some extent, on the CPU, but it should lead to quite a lot less blocking.
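As a rough sketch (the database name is a placeholder):
-- Enable the option once at the database level
ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Then each session or transaction has to opt in explicitly
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Status FROM dbo.Orders WHERE OrderID = 1; -- reads row versions from tempdb instead of taking shared locks
COMMIT;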
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 17, 2009 at 6:22 pm
Thanks, I will give it a try...
By the way,
how do I optimize tempdb if I want to use snapshot isolation?
June 17, 2009 at 9:18 pm
If you follow the ideas from Gail and Grant together, you are done.
Abhay Chaudhary
Sr.DBA (MCITP/MCTS :SQL Server 2005/2008 ,OCP 9i)
June 18, 2009 at 5:38 am
jason (6/17/2009)
Thanks, I will give it a try... By the way,
how do I optimize tempdb if I want to use snapshot isolation?
There are several white papers on optimizing tempdb available on Microsoft's MSDN web site. Search on 'optimize tempdb.' It's largely a question of separating tempdb storage from the rest of the system, making sure it's on an appropriate RAID level (if I recall correctly 0 or 10, certainly not 5), having an appropriate number of files associated (see the white papers), that sort of thing.
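A rough sketch of the file side of that (tempdev is the default logical name of tempdb's primary data file; the path and sizes are hypothetical):
-- Pre-size the existing primary data file so it doesn't rely on autogrow
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB);
-- Add equally sized data files on dedicated storage, per the white papers' guidance
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 2048MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 2048MB);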
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 19, 2009 at 7:10 am
We resolved our deadlock problems by setting the following:
ALTER DATABASE @dbname SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE @dbname SET READ_COMMITTED_SNAPSHOT ON;
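One caveat worth noting: switching READ_COMMITTED_SNAPSHOT on requires that no other connections are active in the database, so on a busy system something like the following may be needed (database name is a placeholder):
ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE; -- terminates other sessions, rolling back their open transactions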
June 19, 2009 at 6:38 pm
Will this be an issue when using READ_COMMITTED_SNAPSHOT?
And also one question:
I have 10 records in my table, and I begin a transaction and start updating one of the rows. Before I commit the transaction, another user reads a different row of data. Will it be blocked by the update?
June 19, 2009 at 8:53 pm
ALTER DATABASE @dbname SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE @dbname SET READ_COMMITTED_SNAPSHOT ON;
After I set these two options, do I still need to use SET TRANSACTION ISOLATION LEVEL SNAPSHOT in each of my statements? Or will it be used automatically?
June 19, 2009 at 10:03 pm
jason (6/19/2009)
Will this be an issue when using READ_COMMITTED_SNAPSHOT?
It depends. The row-versioning isolation levels do come with a cost; the question is whether the improvement in concurrency outweighs it. It is difficult to generalize because there are so many issues involved. A system which is experiencing excessive blocking due to concurrent read and write activity, where reads are more common than writes, and where long-running transactions are the exception, will often be a good candidate for a row-versioning isolation level.
jason (6/19/2009)
And also one question: I have 10 records in my table, and I begin a transaction and start updating one of the rows. Before I commit the transaction, another user reads a different row of data. Will it be blocked by the update?
Short answer: no.
Long answer: I think you need to do some more reading on this subject before you change anything. The point of row-level versioning is that shared locks are not (generally) taken by read operations. In your example, the update will take an update lock which is converted to an exclusive lock just before the data is changed. Since the data reader just needs a schema-stability lock, it will not block waiting to acquire an incompatible shared lock.
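To make that concrete, a rough two-session sketch (hypothetical table dbo.Accounts, with READ_COMMITTED_SNAPSHOT enabled on the database):
-- Session 1: updates one row and leaves the transaction open
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1; -- update lock converted to an exclusive lock on this row
-- Session 2: reads a different row and returns immediately, because under row
-- versioning it needs no shared lock, only schema stability
SELECT Balance FROM dbo.Accounts WHERE AccountID = 2;
-- Session 1:
COMMIT;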
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
June 19, 2009 at 10:08 pm
jason (6/19/2009)
ALTER DATABASE @dbname SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE @dbname SET READ_COMMITTED_SNAPSHOT ON;
After I set these two options, do I still need to use SET TRANSACTION ISOLATION LEVEL SNAPSHOT in each of my statements? Or will it be used automatically?
There are two different types of row-level versioning available.
READ_COMMITTED_SNAPSHOT, once enabled on the database, automatically applies to transactions running at the default READ_COMMITTED isolation level. You do not need to change anything else.
The SNAPSHOT isolation level is different, and requires you to explicitly change the isolation level to take advantage of it.
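A rough sketch of the difference (hypothetical table dbo.Orders):
-- With READ_COMMITTED_SNAPSHOT ON, a plain query at the default isolation level already uses row versions:
SELECT Status FROM dbo.Orders WHERE OrderID = 1;
-- SNAPSHOT isolation must be requested explicitly, per transaction:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Status FROM dbo.Orders WHERE OrderID = 1; -- sees the data as of the start of the transaction
COMMIT;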
See http://msdn.microsoft.com/en-us/library/cc917674.aspx and http://www.sqlservercentral.com/articles/SQL+Server+2005/62464/
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi