October 3, 2022 at 2:54 pm
Issue migrating simple SPs from COMPATIBILITY_LEVEL 120 to 150.
Execution time for the first run of any SP is close to 3 minutes.
After the first execution, repeated executions take milliseconds.
The execution plan is identical in 120 and 150. Recompile did not help.
The SPs sit on the PAGEIOLATCH_SH wait for an extended time.
Server: AWS r6i.16xlarge, io2, IOPS 4000.
Can you please let me know what I can do to eliminate the first-time run issue?
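For reference, this is roughly how I check what the session is waiting on while the SP runs (just a sketch; the session id below is only an example):
-- What a running session is currently waiting on (55 is an example session_id)
SELECT session_id, status, command, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id = 55;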
October 3, 2022 at 2:57 pm
If you switch the level back to 120, this stops happening?
'First' = first ever, or first since instance was started, or first time in current day?
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
October 3, 2022 at 3:01 pm
First time after a change to 150
October 3, 2022 at 3:02 pm
It runs fine as soon as I change back to 120.
October 3, 2022 at 3:37 pm
Are you on the latest CU (patch level)?
Have you verified that the cost threshold for parallelism setting is not too low?
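If it helps, something along these lines shows both (just a sketch):
-- Build / CU level
SELECT @@VERSION;
-- Current cost threshold for parallelism
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'cost threshold for parallelism';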
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
October 3, 2022 at 3:46 pm
I am on CU16.
The cost threshold for parallelism is set to 50.
October 3, 2022 at 5:52 pm
You're good there, so that's not part of the issue.
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
October 3, 2022 at 8:42 pm
Generally you see that kind of behavior when SQL Server has to load the data from disk into the buffer cache. Are you - by chance - restarting SQL Server between changes to the compatibility level?
If not, I wonder if this might be related to the cardinality estimator changes between the versions. Do you have any trace flags set on that instance, or any database scoped configuration settings?
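If you want to double-check those, a rough sketch:
-- Globally enabled trace flags
DBCC TRACESTATUS(-1);
-- Database scoped configuration settings (run in the database concerned)
SELECT name, value, value_for_secondary
FROM sys.database_scoped_configurations;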
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
October 4, 2022 at 12:43 am
I do not restart SQL Server between compatibility level changes. There are no server-level trace flags.
Additionally, I restored the database to a different SQL Server with mostly default configuration and had the same issue.
Of the 6 SPs tested, one has OPTION (QUERYTRACEON 9481), but that still did not help.
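For what it's worth, my understanding is that trace flag 9481 forces the legacy cardinality estimator per query; the database-level equivalent would be roughly:
-- Per-query form already in one SP: OPTION (QUERYTRACEON 9481)
-- Database-level equivalent (sketch; run in the affected database)
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;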
October 4, 2022 at 6:48 am
The execution plan is identical in 120 and 150.
You say the plans at 120 and 150 are identical, but have you verified this by saving both actual (not estimated) plans as XML and doing a file compare?
P.S. Do these SPs have parameters? If they do, you could have parameter sniffing problems, in which case do the comparison with OPTION (OPTIMIZE FOR UNKNOWN) on each query.
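For example, capturing the actual plan XML for one run could look roughly like this (the procedure and parameter names are placeholders):
-- Capture the actual execution plan as XML for a single run
SET STATISTICS XML ON;
EXEC dbo.YourProc @SomeParam = 'value';  -- placeholder procedure and parameter
SET STATISTICS XML OFF;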
October 4, 2022 at 2:04 pm
I tested with SET SHOWPLAN_ALL ON/OFF.
All SPs have parameters; 5 use a user-defined table type, and 1 takes just a varchar.
I tried using OPTION (OPTIMIZE FOR UNKNOWN), with the same result.
October 4, 2022 at 2:10 pm
Additionally, I used sp_whoisactive @get_plans = 1 to compare plans.
October 4, 2022 at 5:24 pm
I collected the actual execution plan from sp_whoisactive @get_plans = 1 during execution.
October 4, 2022 at 6:17 pm
This doesn't sound right - there has to be something else affecting the query.
If I understand what you are stating:
During that first run you have identified the wait as PAGEIOLATCH_SH, which clearly indicates that the process is waiting on a read from disk into memory. If that is truly the associated wait, then it also clearly indicates that the data used by these procedures has been flushed from the buffer pool.
I am not aware of anything that states the buffer pool is flushed when the compatibility level is changed, so I wouldn't think that is the cause. Do these procedures happen to use either a table-valued function or table variables, which could change the execution plan because of better estimates?
Unfortunately, without actually seeing any code, there really isn't much more that can be done in a forum. My guess is that there are probably several opportunities for improvement in that code, and that is where I would be focused.
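That said, if you want to confirm whether the data for those procedures really is falling out of the buffer pool, a query along these lines, run in the database concerned, gives a rough picture (only a sketch; it ignores LOB allocation units):
-- Pages currently cached in the buffer pool for this database, by object
SELECT OBJECT_NAME(p.object_id) AS object_name,
       COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
  ON au.allocation_unit_id = bd.allocation_unit_id
 AND au.type IN (1, 3)              -- in-row and row-overflow data only
JOIN sys.partitions AS p
  ON p.hobt_id = au.container_id
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY cached_pages DESC;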
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs