February 5, 2019 at 6:54 am
MJ-1115929 - Tuesday, February 5, 2019 12:56 AM
MJ-1115929 - Monday, February 4, 2019 12:27 AM
frederico_fonseca - Sunday, February 3, 2019 2:57 PM
Just to list the options. To use the old cardinality estimator (CE 70), the following options are available - the final result is the same insofar as it relates to the CE:
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON; - available if on 2016 SP1 or higher - this is the best option as it keeps all the other 2016 goodies
Trace Flag 9481 - less desirable if set at server level - can be used at query level
Database compatibility level lower than 120 - loses the remaining options available at higher database compatibility levels
If either of the above is set and the queries are still slow, then other things are affecting the query - MAXDOP, cost threshold, number of cores, memory (both max memory and memory availability vs the old system), index/column stats. This assumes the same volume of data.
I would also check trace flags set globally on the newer vs the old server - you never know if something was done (or not done) at this level.
Dear frederico,
Thanks for pointing out the workaround; we will try this configuration and post the results back here.
Once again, thanks.
Regards,
MJ
Dear Jeff/Frederico,
We have tried the legacy setting with the compatibility mode changed to 2016, but there was not much difference - the queries took the same amount of time. MAXDOP is set to 4 and the cost threshold is 5. What are the other areas to cover? Please advise.
Thanks,
MJ
Start by changing the threshold to 25 or 30.
--Jeff Moden
Change is inevitable... Change for the better is not.
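For anyone following along, here is a minimal T-SQL sketch of the three CE options frederico lists above. The database and table names are placeholders, and QUERYTRACEON normally requires sysadmin rights, so treat this as a starting point rather than a drop-in script:

-- Option 1 (SQL Server 2016 SP1 or higher, per the post above): database-scoped setting,
-- forces the legacy CE while keeping the newer compatibility level and its other features
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Option 2: trace flag 9481 scoped to a single statement (placeholder query)
SELECT COUNT(*)
FROM dbo.SomeLargeTable
OPTION (QUERYTRACEON 9481);

-- Option 3: drop the database compatibility level below 120 (loses the newer optimizer behaviour)
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 110;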
February 5, 2019 at 7:06 am
Jeff Moden - Tuesday, February 5, 2019 6:54 AM
Start by changing the threshold to 25 or 30.
He, he, I would start somewhere between 750 and 1000, remember that those are not relative figures but actual "cost" figures 😉
😎
The maximum value is 32767 and funny enough, even at that setting, things can still go parallel.
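For anyone trying those numbers, a minimal sketch of the change via sp_configure; the setting is instance-wide, and 30 below is just one of the suggested starting values:

-- 'cost threshold for parallelism' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- try one of the suggested starting points (25/30, or higher as Eirikur suggests) and measure
EXEC sp_configure 'cost threshold for parallelism', 30;
RECONFIGURE;

-- confirm the value now in use
EXEC sp_configure 'cost threshold for parallelism';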
February 6, 2019 at 4:07 am
Eirikur Eiriksson - Tuesday, February 5, 2019 7:06 AM
Jeff Moden - Tuesday, February 5, 2019 6:54 AM
Start by changing the threshold to 25 or 30.
He, he, I would start somewhere between 750 and 1000, remember that those are not relative figures but actual "cost" figures 😉
😎
The maximum value is 32767 and funny enough, even at that setting, things can still go parallel.
Dear Jeff and Eirikur,
We will try both 25/30 and 750/1000, check the performance, and revert back with the results here.
thanks
MJ
February 6, 2019 at 4:08 am
Jeff Moden - Tuesday, February 5, 2019 6:54 AM
MJ-1115929 - Tuesday, February 5, 2019 12:56 AM
MJ-1115929 - Monday, February 4, 2019 12:27 AM
frederico_fonseca - Sunday, February 3, 2019 2:57 PM
Just to list the options. To use the old cardinality estimator (CE 70), the following options are available - the final result is the same insofar as it relates to the CE:
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON; - available if on 2016 SP1 or higher - this is the best option as it keeps all the other 2016 goodies
Trace Flag 9481 - less desirable if set at server level - can be used at query level
Database compatibility level lower than 120 - loses the remaining options available at higher database compatibility levels
If either of the above is set and the queries are still slow, then other things are affecting the query - MAXDOP, cost threshold, number of cores, memory (both max memory and memory availability vs the old system), index/column stats. This assumes the same volume of data.
I would also check trace flags set globally on the newer vs the old server - you never know if something was done (or not done) at this level.
Dear frederico,
Thanks for pointing out the workaround; we will try this configuration and post the results back here.
Once again, thanks.
Regards,
MJ
Dear Jeff/Frederico,
We have tried the legacy setting with the compatibility mode changed to 2016, but there was not much difference - the queries took the same amount of time. MAXDOP is set to 4 and the cost threshold is 5. What are the other areas to cover? Please advise.
Thanks,
MJ
Start by changing the threshold to 25 or 30.
Sure, we will try the numbers and revert.
thanks
MJ
February 6, 2019 at 5:37 pm
Have you run sp_configure against both machines (with advanced options enabled, of course) and compared the output?
Ensure everything makes sense.
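A minimal sketch of that comparison - capture the output on each instance and diff it; sys.configurations gives the same information without needing 'show advanced options':

-- on each server, with advanced options visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure;

-- alternative: lists every option regardless of the advanced-options setting
SELECT name, value, value_in_use
FROM sys.configurations
ORDER BY name;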
February 7, 2019 at 6:46 am
MJ-1115929 - Wednesday, February 6, 2019 4:07 AM
Eirikur Eiriksson - Tuesday, February 5, 2019 7:06 AM
Jeff Moden - Tuesday, February 5, 2019 6:54 AM
Start by changing the threshold to 25 or 30.
He, he, I would start somewhere between 750 and 1000, remember that those are not relative figures but actual "cost" figures 😉
😎
The maximum value is 32767 and funny enough, even at that setting, things can still go parallel.
Dear Jeff and Eirikur,
We will try both 25/30 and 750/1000, check the performance, and revert back with the results here.
Thanks,
MJ
I've found that going that high on my systems can really put the kibosh on performance (we have a bunch of legacy code that really could use some help). During such a change, you might want to be actively watching PerfMon like I did. As with all else in SQL Server, "It Depends".
--Jeff Moden
Change is inevitable... Change for the better is not.
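If watching PerfMon directly isn't convenient, the same counters can also be sampled from inside SQL Server; a rough sketch (the per-second counters here are cumulative, so sample twice and take the difference):

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Batch Requests/sec',
                       'SQL Compilations/sec',
                       'SQL Re-Compilations/sec');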
February 7, 2019 at 7:48 am
Jeff Moden - Thursday, February 7, 2019 6:46 AM
MJ-1115929 - Wednesday, February 6, 2019 4:07 AM
Eirikur Eiriksson - Tuesday, February 5, 2019 7:06 AM
Jeff Moden - Tuesday, February 5, 2019 6:54 AM
Start by changing the threshold to 25 or 30.
He, he, I would start somewhere between 750 and 1000, remember that those are not relative figures but actual "cost" figures 😉
😎
The maximum value is 32767 and funny enough, even at that setting, things can still go parallel.
Dear Jeff and Eirikur,
We will try both 25/30 and 750/1000, check the performance, and revert back with the results here.
Thanks,
MJ
I've found that going that high on my systems can really put the kibosh on performance (we have a bunch of legacy code that really could use some help). During such a change, you might want to be actively watching PerfMon like I did. As with all else in SQL Server, "It Depends".
In my experience, finding the code's parallelism threshold is very useful; one can of course go to the execution plan and pick up the cost figure, but bear in mind that the CTFP figure will affect the optimizer. I'm not saying that CTFP should be excessive, but knowing where the cut-off is can be very helpful.
😎
Another option is simply to run the code with OPTION (MAXDOP [N]) and assess the differences between N(min) and N(max) 😉
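A rough sketch of that MAXDOP test against a placeholder query - run it with the actual execution plan enabled and note the root operator's Estimated Subtree Cost, which is the figure the optimizer weighs against the cost threshold:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- placeholder for the slow statement; force a serial plan first
SELECT COUNT(*)
FROM dbo.SomeLargeTable
OPTION (MAXDOP 1);

-- then allow parallelism up to the instance setting
SELECT COUNT(*)
FROM dbo.SomeLargeTable
OPTION (MAXDOP 4);

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;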