September 21, 2012 at 10:48 am
I have a question on Degree of Parallelism. I basically want to know the best practice or guidelines for setting the degree of parallelism. I looked at the Microsoft sites for guidance, but what I found was only confusing.
So let me start with a simple question.
If we have set the degree of parallelism to just 1, we suppress SQL Server's ability to run queries in parallel when it needs to. Will that impact heavy-processing stored procedures or SQL statements? We do see some issues, such as timeouts, in the batch process we run during the night.
We are performance tuning at the same time and doing what we can with indexes and statistics, but that still does not help.
What are the disadvantages of setting the degree of parallelism to 1?
September 21, 2012 at 11:45 pm
Hi
The right MAXDOP (maximum degree of parallelism) setting depends on your hardware configuration and on whether any other applications are running on the same server. If you allow parallelism, it introduces some overhead for small queries. It also depends on how many database instances you have and what type of operations (SELECT queries vs. INSERT/UPDATE/DELETE queries) are being executed.
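If it helps, here is a quick sketch of how to see what the instance is currently using for the two relevant settings (it only reads the standard catalog view, nothing is changed):
-- Sketch: show the current parallelism-related instance settings.
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism', 'cost threshold for parallelism');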
Regarding indexes, if it suits you, you can use a maintenance plan to rebuild indexes at suitable intervals. However, that will not always solve your performance problems.
Thanks
September 23, 2012 at 8:45 am
The disadvantage of setting Max Degree of Parallelism to 1 is that queries that would have benefited from executing with a parallel plan will not be able to, and that will likely reduce performance. In some cases it can make a previously parallelized process run orders of magnitude longer and monopolize a single CPU at 100% in the process.
Was Max Degree of Parallelism already set to 1 on your server, or are you exploring the possibility of making a change?
If you're deciding whether to set it to 1, or evaluating whether that was the right decision, consider leaving parallelism enabled at the default (i.e. set to 0) but raising the bar for a parallel plan to be chosen by increasing the Cost Threshold for Parallelism.
Also know that with a Max Degree of Parallelism of 1 you can always override the setting for specific queries you know will benefit from executing with a parallel plan by using the MAXDOP query hint.
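For example, a hypothetical query like the one below could be allowed to go parallel even with the instance-wide setting at 1 (the table and column names are placeholders, not anything from your system):
-- Hypothetical example: the instance is set to Max Degree of Parallelism = 1,
-- but this particular statement may use up to 4 schedulers.
SELECT CustomerID, SUM(TotalDue) AS TotalDue
FROM dbo.SalesOrderHeader      -- placeholder table
GROUP BY CustomerID
OPTION (MAXDOP 4);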
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
September 23, 2012 at 1:25 pm
braju (9/21/2012)
I have a question on Degree of Parallelism. I basically want to know the best practice or guidelines for setting the degree of parallelism. I looked at the Microsoft sites for guidance, but what I found was only confusing. So let me start with a simple question.
If we have set the degree of parallelism to just 1, we suppress SQL Server's ability to run queries in parallel when it needs to. Will that impact heavy-processing stored procedures or SQL statements? We do see some issues, such as timeouts, in the batch process we run during the night.
We are performance tuning at the same time and doing what we can with indexes and statistics, but that still does not help.
What are the disadvantages of setting the degree of parallelism to 1?
This is something I see quite a lot on systems I have encountered over the years.
If you're going to set MAXDOP server-wide to 1, why not just rip out all the server's CPUs bar one and save yourself some licensing costs too 😀
Rather than restricting the CPU resources, try raising the Cost Threshold for Parallelism to a higher value instead.
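Something along these lines; the value of 25 is only an illustration, so test against your own workload before settling on a number:
-- Sketch: raise the cost threshold so cheap OLTP queries stay serial
-- while expensive batch queries can still go parallel. 25 is an arbitrary example.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;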
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
September 24, 2012 at 6:18 am
Well, it was set to the default at one time, but we experienced some timeouts. Investigating the cause of the timeouts, we saw that some OLTP processing, such as search stored procedures, caused high CXPACKET values, so we decided to set it to 1. But that does not help during the batch processes at night, because the CPU spikes at 100% and makes the system unresponsive for other concurrent tasks.
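For context, this is roughly how we have been looking at the waits (a sketch against the standard DMV; the numbers accumulate from the last restart or manual clear):
-- Sketch: top waits since the last restart/clear, to put CXPACKET in context.
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;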
I'm exploring the option of setting it back to the default (0), at least during the night, from 10 PM to 6 AM.
September 24, 2012 at 6:55 am
Look closely at the Cost Threshold for Parallelism parameter, and also check for rogue queries and tune them 😉
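As a rough starting point, a sketch like the one below pulls the most CPU-hungry statements out of the plan cache and keeps only those whose cached plan contains a parallel operator. The plan cache is transient, so treat this as a hint about where to look rather than a definitive list:
-- Sketch: CPU-heavy cached statements whose plans use parallelism.
SELECT TOP (20)
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
              (CASE qs.statement_end_offset
                   WHEN -1 THEN DATALENGTH(st.text)
                   ELSE qs.statement_end_offset
               END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qp.query_plan.exist('declare namespace p="http://schemas.microsoft.com/sqlserver/2004/07/showplan"; //p:RelOp[@Parallel="1"]') = 1
ORDER BY avg_cpu_time DESC;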
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
September 24, 2012 at 7:11 am
From the responses so far, I think we should set MAXDOP back to the default of 0 and adjust the Cost Threshold for Parallelism if needed, or even add a MAXDOP query hint to specific SQL processing. At the same time, we'll fine-tune the SQL queries that might be causing the bottlenecks.
Please note that the SQL Server is in a VM environment on a 32-bit OS.
Thanks
September 24, 2012 at 7:55 am
braju (9/24/2012)
From the responses so far, I think we should set MAXDOP back to the default of 0 and adjust the Cost Threshold for Parallelism if needed, or even add a MAXDOP query hint to specific SQL processing. At the same time, we'll fine-tune the SQL queries that might be causing the bottlenecks.
That's what I said above 😉
braju (9/24/2012)
Please note that the SQL Server is in a VM environment on a 32-bit OS. Thanks
Which version and edition of the Windows OS?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
September 24, 2012 at 8:13 am
Windows OS - Win 2003 SP2 Enterprise Edition
SQL Server - 2005 Standard Edition.
Our client is planning to move to a 64-bit environment but still keep the servers in VMs.
Thanks
September 24, 2012 at 11:28 am
Perry Whittle (9/24/2012)
braju (9/24/2012)
From the responses so far, I think we should set MAXDOP back to the default of 0 and adjust the Cost Threshold for Parallelism if needed, or even add a MAXDOP query hint to specific SQL processing. At the same time, we'll fine-tune the SQL queries that might be causing the bottlenecks.
That's what I said above 😉
That's funny because I thought the same thing when I read your initial post 😀
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
September 24, 2012 at 12:56 pm
I've wondered about the optimal setting also.
We have a problem with MAXDOP=0 in our environment. We are very lightly loaded at night, and only have a few large queries each night that get parallelized. Sometimes these deadlock on themselves; the message refers to the "communication buffer," and it shows up with CXPACKET waits.
Doesn't happen all the time, but often enough to be a problem.
Here's what I've found searching the web: when the query optimizer decides to parallelize a query, it sets the number of parallel threads to the current number of available CPUs, up to MAXDOP. When a query gets parallelized with no other activity present, SQL creates a parallel worker thread for each available CPU. However, the query will also have at least one "feeder" thread and one "collector" thread (my terminology, not Microsoft's). These SQL threads are non-preemptive and not time-sliced, so apparently the communication buffer fills up and blocks the active threads that are filling it, the collector thread never gets a chance to execute and empty the buffer, and the whole query deadlocks.
Also, once this query starts, it will prevent any other queries from running until this one query finishes or gets killed because of a deadlock.
So far, we've just put a MAXDOP=1 query hint on the queries that have failed. Occasionally I've set the server to MAXDOP=1 while running a vendor-provided upgrade script that caused this problem in the past. I just hate to change a server-wide configuration setting without knowing more about the effects and testing it.
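For anyone curious, the hint itself is the only change; here is a made-up example of the shape we used (the table names are placeholders, not our real objects):
-- Hypothetical shape of the fix: force the problem statement to run serially
-- while leaving the server-wide setting alone.
INSERT INTO dbo.NightlySummary (AccountID, TotalAmount)   -- placeholder tables
SELECT AccountID, SUM(Amount)
FROM dbo.LargeFactTable
GROUP BY AccountID
OPTION (MAXDOP 1);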
So I suggest that you set MAXDOP to at most the total number of "usable CPUs" minus 2 (or to 1 if you have fewer than 3 CPUs).
I say "usable CPUs" because CPU affinity masks, NUMA, and hyper-threaded CPUs affect this, depending on the version of SQL Server and Windows. Others know and have written a lot more about this than I have.
This only happens on large queries that would definitely benefit from parallel execution, so it isn't a matter of adjusting the threshold.
Has anyone else run into this? Any advice for addressing it?
David Lathrop
DBA
WA Dept of Health
September 24, 2012 at 1:08 pm
DLathrop (9/24/2012)
I've wondered about the optimal setting also. We have a problem with MAXDOP=0 in our environment. We are very lightly loaded at night, and only have a few large queries each night that get parallelized. Sometimes these deadlock on themselves; the message refers to the "communication buffer," and it shows up with CXPACKET waits.
Doesn't happen all the time, but often enough to be a problem.
Here's what I've found searching the web: when the query optimizer decides to parallelize a query, it sets the number of parallel threads to the current number of available CPUs, up to MAXDOP. When a query gets parallelized with no other activity present, SQL creates a parallel worker thread for each available CPU. However, the query will also have at least one "feeder" thread and one "collector" thread (my terminology, not Microsoft's). These SQL threads are non-preemptive and not time-sliced, so apparently the communication buffer fills up and blocks the active threads that are filling it, the collector thread never gets a chance to execute and empty the buffer, and the whole query deadlocks.
Also, once this query starts, it will prevent any other queries from running until this one query finishes or gets killed because of a deadlock.
So far, we've just put a MAXDOP=1 query hint on the queries that have failed. Occasionally I've set the server to MAXDOP=1 while running a vendor-provided upgrade script that caused this problem in the past. I just hate to change a server-wide configuration setting without knowing more about the effects and testing it.
So I suggest that you set MAXDOP to at most the total number of "usable CPUs" minus 2 (or to 1 if you have fewer than 3 CPUs).
I say "usable CPUs" because CPU affinity masks, NUMA, and hyper-threaded CPUs affect this, depending on the version of SQL Server and Windows. Others know and have written a lot more about this than I have.
This only happens on large queries that would definitely benefit from parallel execution, so it isn't a matter of adjusting the threshold.
Has anyone else run into this? Any advice for addressing it?
I had several queries that intermittently suffered from intra-query parallelism deadlocks on SQL 2005 not long ago and used a query hint to force them to use MAXDOP 1. I could have achieved the same thing by changing the Cost Threshold to raise the bar above what the queries required. Adding the query hint resolved the issue for those specific queries. CXPACKET waits are not the sole indicator to use when deciding to change the Max Degree of Parallelism.
Some of the nightly processes went from taking a few minutes to around ten, but that was not a big deal since they were nightly processes. Adding a query hint was simpler for the customer to "test" than the other option presented, which was to rewrite the process with a 'divide and conquer' approach, so they asked me to implement the query hint. In this case I did not have to resort to changing the Max Degree of Parallelism or the Cost Threshold instance-level settings, and the impact on the system was extremely targeted, changing just the problem queries.
When I upgraded these databases to 2008 R2, I removed the query hints, and the intra-query parallelism deadlocks have not resurfaced.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
September 24, 2012 at 1:29 pm
Good responses. Thanks so much for all the input. I thought I would go ahead with setting MAXDOP = 0, the default value, and then adjust the Cost Threshold if needed, while also fine-tuning the queries, which we have been doing.
But after Dave's post suggesting setting MAXDOP to the number of usable CPUs minus 2, I'm back to square one. What I'm looking for is a good guideline that I can recommend, because unfortunately "it depends", the usual answer to most SQL Server questions, does not work with my clients.
Another important thing is that our SQL Server has only two CPUs. So the value we have set MAXDOP to is what Dave is suggesting, that is, 1.
But somehow the long-running batch process does not perform well consistently. This batch program processes 250,000+ records, so we know pretty well it is going to be a long-running process.
Thanks Again
September 24, 2012 at 1:51 pm
We get various dials and switches to tweak and adjust; it's up to you to apply them appropriately for your system:
- MAXDOP query hint
- Max Degree of Parallelism
- Cost Threshold for Parallelism
- rewrite your queries to do smaller chunks of work so as not to monopolize a set of CPUs (see the sketch after this list)
- add indexes to make processes faster, or possibly to affect execution plans so that parallelism is avoided
- hybrid approaches
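As an illustration of the "smaller chunks of work" item above (a minimal sketch; the table, flag column, and batch size are placeholders, and real code would also need error handling and probably an index on the flag):
-- Sketch of a divide-and-conquer rewrite: work through the rows in batches
-- instead of one huge statement, so no single statement holds a large parallel
-- plan (or a pile of locks) for the whole run.
DECLARE @batch int;
SET @batch = 10000;                      -- placeholder batch size

WHILE 1 = 1
BEGIN
    UPDATE TOP (@batch) dbo.BigTable     -- placeholder table
    SET Processed = 1
    WHERE Processed = 0;

    IF @@ROWCOUNT = 0 BREAK;             -- nothing left to do
END;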
You cannot find a Best Practice or even a strict Guideline because It Depends 😀
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
September 24, 2012 at 3:12 pm
I agree with opc.three.
As I said, we've just added MAXDOP=1 query hints when a query has failed at night more than once. It has only been a couple so far. It was the quick fix that was easy to test, with minimal chance of impact on anything else.
So far, the problem has only shown up on our old SQL 2005 boxes, which SQL Server sees as four-processor SMP machines. We haven't migrated any big batch queries to SQL 2008 R2 yet; it might not have the same problem, because the newer boxes have 16 processors split into two NUMA nodes and SQL Server might handle this differently. We'll see.
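If it's useful, here is a quick sketch of how one can peek at the scheduler and NUMA layout SQL Server actually sees (a read-only query against the standard DMV):
-- Sketch: count online schedulers per NUMA node.
SELECT parent_node_id, COUNT(*) AS online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
GROUP BY parent_node_id
ORDER BY parent_node_id;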
Based on my experience and what I've learned about parallelization, on a machine already set to MAXDOP=1 with just two CPUs, I'd leave it set that way. MAXDOP=0 won't be able to do much with a single query anyway. Having two different long-running queries executing simultaneously would probably be more efficient, and MAXDOP=1 will let that happen.
Bad SQL is bad SQL and should be fixed, but some batch queries are just big hairy monsters by nature. You'll probably never be able to out-guess the query optimizer, so wait and see if there's actually a problem; only break queries up or add the query hints when you need to. Normally I'd only change the server-wide MAXDOP setting if you have to add the hint to a lot of queries, or you get some really definitive, expert advice to change it.
David Lathrop
DBA
WA Dept of Health