Max. Degree of Parallelism

  • Afternoon All,

    Sometimes setting MAXDOP 1 for a query results in faster execution. Is this simply because re-assembling the various split-up streams is a costly operation, and sometimes the cost of re-assembly outweighs the benefit of splitting up the work in the first place?

    If that assumption is anywhere near true, is there a way to tell in advance whether a query is likely to be a bad parallel performer? For example, does it matter how many tables/joins are involved?

    Cheers
    Alex

  • Alex

    Yes, that's right.  Imagine you have five dishes to wash.  You'd do them yourself because it's quicker, even if you had friends sitting around doing nothing.  Now imagine you have 500 dishes.  It would be quicker to get your friends to help you now.  It's the same with parallelism.  Unfortunately there's no scientific method of determining at what point the overheads of parallelism are worth the time saving - you have to use trial and error.  A good place to start is to set maxdop somewhere between 30 and 50 - the default of 5 is far too low for modern workloads and machines.

    John

  • John Mitchell-245523 - Monday, July 17, 2017 9:30 AM

    "A good place to start is to set maxdop somewhere between 30 and 50 - the default of 5 is far too low for modern workloads and machines." - I guess you meant "Cost threshold for parallelism" not MAXDOP when referring 5,30-50?

  • Brilliant John, cheers!

  • Sreekanth B - Monday, July 17, 2017 9:47 AM

    I believe that John got confused for a moment by the terms. It's true that there's no scientific method to define the Cost threshold for parallelism. However, there are guidelines for MAXDOP settings to avoid some problems. https://support.microsoft.com/en-us/help/2806535/recommendations-and-guidelines-for-the-max-degree-of-parallelism-confi
    Both default values must be changed on every installation of SQL Server.
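
    If it helps, this is roughly how both settings can be inspected and changed - the numbers below are placeholders for illustration only, so base your own values on your cores/NUMA layout and on testing rather than on this sketch:

    -- Illustrative sketch: view and change both instance-level parallelism settings.
    -- The defaults are 5 for cost threshold and 0 (use all cores) for MAXDOP.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    EXEC sp_configure 'cost threshold for parallelism';      -- view current value
    EXEC sp_configure 'max degree of parallelism';            -- view current value

    EXEC sp_configure 'cost threshold for parallelism', 50;   -- placeholder value
    EXEC sp_configure 'max degree of parallelism', 8;          -- placeholder value
    RECONFIGURE;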

    Luis C.
    General Disclaimer:
    Are you seriously taking the advice and code from someone from the internet without testing it? Do you at least understand it? Or can it easily kill your server?

    How to post data/code on a forum to get the best help: Option 1 / Option 2
  • Hi Luis,

    I was asking whether, by looking at a query or its execution plan, one can tell in advance if any improvement would be seen by setting MAXDOP lower or to 1.

    I have some queries that definitely run faster when MAXDOP 1 is set (PER QUERY, not per server instance), and I was wondering whether I could take a more educated guess about when to specify MAXDOP rather than just running with and without the hint.
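
    Something like this, with made-up table and column names - the hint applies to just that statement, not the whole instance:

    -- Hypothetical query: OPTION (MAXDOP 1) forces a serial plan for this
    -- statement only; the instance-wide setting is left untouched.
    SELECT o.CustomerID, SUM(o.TotalDue) AS TotalSpend
    FROM dbo.Orders AS o
    GROUP BY o.CustomerID
    OPTION (MAXDOP 1);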

  • Luis Cazares - Monday, July 17, 2017 10:09 AM

    Yes, quite right - I combined the answers to two questions (one of which wasn't even asked) and ended up talking gibberish!

    alex.sqldba - Monday, July 17, 2017 11:43 AM

    Not really - trial and error, as I said.  But if you get your cost threshold right, fiddling with maxdop for individual queries should become the exception rather than the rule.

    John

  • alex.sqldba - Monday, July 17, 2017 11:43 AM

    Should I assume that the configuration for the instance is correct? That's the first step to avoid having to use MAXDOP per query.
    Have you tested with other values such as 2 or 4, as in the sketch below? Sometimes the solution is not to avoid parallelism altogether, just to limit the number of threads.
    If the problem is the number of cores used by queries, then you should lower your MAXDOP setting. If the problem is parallelism as a whole, then you should raise your Cost threshold.
    As mentioned by John, query hints should be the exception and not the rule.
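
    A rough way to test is to run the same statement with different hints and compare the elapsed times and actual plans - made-up query, purely to illustrate:

    SET STATISTICS TIME ON;   -- report CPU and elapsed time for each statement

    SELECT o.CustomerID, SUM(o.TotalDue) AS TotalSpend
    FROM dbo.Orders AS o
    GROUP BY o.CustomerID
    OPTION (MAXDOP 2);

    SELECT o.CustomerID, SUM(o.TotalDue) AS TotalSpend
    FROM dbo.Orders AS o
    GROUP BY o.CustomerID
    OPTION (MAXDOP 4);

    SET STATISTICS TIME OFF;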

    Luis C.
    The configuration for the instance is correct inasmuch as, out of 600 or so different queries, only a handful are helped by lowering the MAXDOP value.

    Which is what prompted my question about being able to identify (or have some kind of expectation) in advance rather than just waiting and seeing.

    Also it doesn't sound very good, feeding back to other developers that you just have to use trial and error.

  • alex.sqldba - Tuesday, July 18, 2017 5:57 AM

    One way to know is when a query is generating a lot of CXPACKET waits. I'm not sure there's a way to predict the problem in advance, and the behavior could also change from one environment to another.
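
    For example, the wait stats DMV gives a rough idea of how much of the instance's wait time is parallelism-related (CXCONSUMER only appears on newer builds; older versions will only show CXPACKET):

    -- Rough sketch: cumulative parallelism-related waits since the last restart.
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type IN ('CXPACKET', 'CXCONSUMER')
    ORDER BY wait_time_ms DESC;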

    Luis C.
  • alex.sqldba - Tuesday, July 18, 2017 5:57 AM

    Also it doesn't sound very good, feeding back to other developers that you just have to use trial and error.

    Indeed it doesn't.  They should already know and understand that.  And the reason that you monitor your systems for the worst performing queries is to capture such outliers.  Unfortunately there's no magic formula!

    John
