April 14, 2009 at 9:53 am
Our application has almost no updates to most tables - only inserts and reads. I changed each query to use "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED" instead of using WITH (NOLOCK) on each table. Is there any real difference between the two approaches?
I currently see many spids blocking their own commands and going into a suspended state, and I'm trying to find a solution. Is there a better approach than the one we currently use?
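For reference, here is a minimal sketch of how those suspended spids could be examined with the standard DMVs (nothing here is specific to your system):

    -- sessions that are suspended or blocked, and what they are waiting on
    SELECT session_id, blocking_session_id, wait_type, wait_time, status
    FROM sys.dm_exec_requests
    WHERE status = 'suspended' OR blocking_session_id <> 0;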
April 14, 2009 at 11:00 am
My understanding is that these are the same.
April 14, 2009 at 11:46 am
Processes normally lock resources and block one another as they work. If you're hitting excessive blocking and locking, then you should address the code and the structure, not reach for query and table hints. Reading uncommitted data will lead to bad data being returned - not simply data that is in the middle of being updated, but extra rows or missing rows as the data changes underneath your query. Extra values or missing values are going to have a seriously negative impact on most businesses.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 14, 2009 at 11:48 am
READ UNCOMMITTED and NOLOCK have the same effect. The only difference is in scope: SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED affects every table in every select statement on that connection until the connection closes or a different isolation level is specified, while NOLOCK applies just to the table the hint follows.
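To illustrate the scope difference (the table names below are just hypothetical placeholders):

    -- session-wide: every table in every subsequent select on this connection reads uncommitted data
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders o
    INNER JOIN dbo.Customers c ON c.CustomerID = o.CustomerID;

    -- per-table: only Orders is read without shared locks; Customers still uses the session's isolation level
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders o WITH (NOLOCK)
    INNER JOIN dbo.Customers c ON c.CustomerID = o.CustomerID;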
Maybe I'm missing something, but if you have virtually no updates, you shouldn't have any blocking. Select statements don't block other select statements, unless strange locking hints are applied. What's the cause of the blocking?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 14, 2009 at 12:29 pm
If you are seeing processes block themselves, you need to review what type of waits you are encountering. My guess would be that they are CXPACKET waits - which means the queries are using parallel plans.
To reduce the parallel plans and blocking you need to focus on fixing the queries.
You can also reduce the impact by setting the max degree of parallelism option for the server to half the number of cores. Reduce it even further if you are still seeing a lot of parallel plans and they are causing issues.
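As a quick check, the wait stats DMV will confirm whether CXPACKET is actually the dominant wait (a minimal sketch):

    -- cumulative waits since the instance last started
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'CXPACKET';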
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
April 15, 2009 at 9:38 am
Thanks for the input everyone.
April 29, 2009 at 2:25 pm
Can we set the read uncommitted isolation level as the default on the server, so that every query will use that isolation level instead of specifying it explicitly in each query?
April 29, 2009 at 2:40 pm
Asked and (partially) answered here
http://www.sqlservercentral.com/Forums/Topic707203-146-1.aspx
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 27, 2009 at 7:43 am
I was just wondering how I would use the READ UNCOMMITTED isolation level when I do a bulk/fast load using SSIS packages, as we have to import our production data (no updates, only inserts happen) into a different server. I want to use this isolation level because I see a lot of improvement in performance too, maybe because the reads don't need to wait for locks when reading data from production.
May 27, 2009 at 7:46 am
NOLOCK/READ UNCOMMITTED only apply to selects. All data modifications will ignore that hint; they have to take exclusive locks.
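In other words, in an SSIS data flow the hint could only help on the source side of the package. For example, a hypothetical source query might look like this; the bulk insert at the destination will still take its normal locks:

    -- source query only; the hint is meaningless on the insert side
    SELECT OrderID, OrderDate, Amount
    FROM dbo.Orders WITH (NOLOCK);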
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 29, 2009 at 8:03 pm
If you are seeing processes block themselves, you need to review what type of waits you are encountering. My guess would be that they are CXPACKET waits - which means the queries are using parallel plans.
To reduce the parallel plans and blocking you need to focus on fixing the queries.
You can also reduce the impact by setting the max degree of parallelism option for the server to half the number of cores. Reduce it even further if you are still seeing a lot of parallel plans and they are causing issues.
Hi, I just started monitoring a new SQL Server and see tons of CXPACKET waits - all 8 CPUs are in use.
I forget which options to uncheck in SQL Server Management Studio - there are two columns for CPU. Should I change it to 4 and see if I still get CXPACKET waits?
June 1, 2009 at 10:27 am
You can also reduce the impact by setting the max degree of parallelism option for the server to half the number of cores. Reduce it even further if you are still seeing a lot of parallel plans and they are causing issues.
How do I do this - at the database level or the server level? And yes, as you said, almost all of the queries executed show CXPACKET waits.
Here are my server settings, where I have 7 CPUs:
Cost Threshold for Parallelism = 5
Locks = 0
Max Degree of Parallelism = 0
Query Wait = -1
June 1, 2009 at 10:53 am
Cost Threshold for Parallelism = 5
This setting tells SQL Server that when a plan's estimated cost exceeds that value, a parallel plan will be considered.
Max Degree of Parallelism = 0
This property tells SQL Server how many processors to use when executing a parallel plan. A zero in this property tells SQL Server to use all available processors.
I would start by setting this property to 4, roughly half the number of processors you have on your system. Review the wait stats after making this change and see if the CXPACKET waits drop. If they don't drop enough and you don't see an improvement in performance, try changing it to 2. I would be careful with this setting and monitor performance constantly until you are seeing the results you expect.
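For what it's worth, here is a minimal sketch of making that change at the server level with sp_configure (max degree of parallelism is an advanced option, so advanced options have to be shown first):

    -- expose advanced options
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    -- cap parallel plans at 4 processors
    EXEC sp_configure 'max degree of parallelism', 4;
    RECONFIGURE;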
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
June 1, 2009 at 12:01 pm
Why do I need to limit the number of processors? I have 7 CPUs and I want the job to use as many as it requires and finish as fast as possible.
June 1, 2009 at 12:12 pm
You don't want to use all processors, for several reasons. One reason is that if you have multiple queries all using parallel plans, the system is going to take longer to process each query. Additionally, escalating to a parallel plan may not actually be the best plan for that query; those queries will take longer to process than they would with a non-parallel plan.
On some busy OLTP systems, disabling parallel processing has been shown to decrease processing time and reduce performance issues.
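Note that if only a handful of queries are the problem, parallelism can also be limited per query with a hint instead of server-wide - a minimal sketch against a hypothetical table:

    -- force this one query to run serially, leaving the server setting alone
    SELECT CustomerID, SUM(Amount) AS Total
    FROM dbo.Orders
    GROUP BY CustomerID
    OPTION (MAXDOP 1);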
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs