May 11, 2021 at 3:22 pm
CPU usage was running much higher than normal. I identified the top query, and removing its bad execution plan from the cache resolved the problem. However, my understanding is that the issue can recur. I suspect it's a parameter sniffing issue, and since it's a vendor product I don't have much room to tune the query. To resolve this permanently, I was thinking of forcing the good plan for the query in Query Store rather than scheduling a job to clear the plan from the cache. Do you agree with that approach? Any other thoughts? Thanks in advance!
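For context, the plan forcing I have in mind would look something like this (the query_id and plan_id values are placeholders I'd look up first):

-- Find the query and its plans in Query Store:
SELECT qsq.query_id, qsp.plan_id, qst.query_sql_text
FROM sys.query_store_query AS qsq
JOIN sys.query_store_plan AS qsp
  ON qsp.query_id = qsq.query_id
JOIN sys.query_store_query_text AS qst
  ON qst.query_text_id = qsq.query_text_id
WHERE qst.query_sql_text LIKE N'%<identifying text from the query>%';

-- Force the known-good plan:
EXEC sys.sp_query_store_force_plan @query_id = 1234, @plan_id = 5678;

-- And to back it out later if needed:
-- EXEC sys.sp_query_store_unforce_plan @query_id = 1234, @plan_id = 5678;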
May 11, 2021 at 5:55 pm
No... at least not yet. Have you done the most needed thing that's usually a problem with 3rd party databases? That "most needed" thing is to update the statistics that need it with a FULL SCAN.
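If it helps, the basic form looks like this (the table and statistic names are just examples):

-- Update every statistic on one table with a full scan:
UPDATE STATISTICS dbo.SomeTable WITH FULLSCAN;

-- Or just one statistic on that table:
UPDATE STATISTICS dbo.SomeTable IX_SomeTable_SomeColumn WITH FULLSCAN;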
--Jeff Moden
Change is inevitable... Change for the better is not.
May 11, 2021 at 10:02 pm
So would you recommend running UPDATE STATISTICS with FULLSCAN for the whole database weekly instead of a normal statistics update? One thing I noticed when I compared both execution plans is that the cost of the sort operator is high in the bad plan.
May 11, 2021 at 10:22 pm
Alternatively, you can experiment with OPTION (RECOMPILE) on individual queries within the SP, or WITH RECOMPILE for the entire SP. In a recent case, skipping cached execution plans and recompiling with one of these options helped us get the SP to run faster and consume fewer resources overall, while none of the three saved plans was good enough on its own.
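For example (the procedure, table, and parameter names below are made up for illustration):

-- Statement level: throw away the cached plan for just this query each run.
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = @CustomerID
OPTION (RECOMPILE);

-- Procedure level: compile a fresh plan for the whole SP on every call.
ALTER PROCEDURE dbo.usp_GetOrders
    @CustomerID int
WITH RECOMPILE
AS
BEGIN
    SELECT o.OrderID, o.OrderDate
    FROM dbo.Orders AS o
    WHERE o.CustomerID = @CustomerID;
END;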
Likes to play Chess
May 12, 2021 at 4:05 pm
If it is related to a parameter sniffing problem (which it sounds like to me, but I also would not be surprised if I was WAY off on this), adding the query hint OPTIMIZE FOR UNKNOWN may help too. Or, if you know a "known good" value to optimize for, putting that value in should help.
I find that if a RECOMPILE hint is required to make it run nicely, OPTIMIZE FOR UNKNOWN gives the same benefit without the added overhead of recompiling the query with each run.
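Roughly like this (the table, parameter, and the value 42 are placeholders):

-- Plan for "average" density instead of the sniffed parameter value:
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = @CustomerID
OPTION (OPTIMIZE FOR UNKNOWN);

-- Or plan for a specific known-good value:
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID = 42));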
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
May 12, 2021 at 6:47 pm
So would you recommend running UPDATE STATISTICS with FULLSCAN for the whole database weekly instead of a normal statistics update? One thing I noticed when I compared both execution plans is that the cost of the sort operator is high in the bad plan.
Those are actually two separate subjects, with a bit of overlap between them at the sort operator you speak of.
For my databases, I've adopted the "If it moves, shoot it" philosophy for statistics updates. In other words, if the RowModCntr is not zero and the stats haven't been updated in the last 23 hours, I update the individual stats with a FULL SCAN. I do that every night on my most active databases and once a week on those that are less active. The reason I've adopted such an aggressive policy is that almost all of the tables use an IDENTITY column as the Clustered Index (OK in many cases but not in all), and queries that use such "ever-increasing" indexes can really benefit from fresh stats.
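The detection side of that policy looks roughly like this sketch against sys.dm_db_stats_properties (the 23-hour window matches what I described above):

-- Find statistics that have been modified since their last update:
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       sp.last_updated,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE sp.modification_counter > 0
  AND (sp.last_updated IS NULL
       OR sp.last_updated < DATEADD(HOUR, -23, SYSDATETIME()))
  AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1;

-- For each row returned, something like:
-- UPDATE STATISTICS dbo.SomeTable SomeStatName WITH FULLSCAN;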
As for the sort operator, I'd definitely be looking at that to see if I could eliminate it somehow OR find a good execution plan and "lock it in" (although that's not my first choice).
My best advice would be to rewrite the offending query so it doesn't have such a problem and then hound the 3rd party vendor to let you incorporate it.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 12, 2021 at 7:01 pm
So would you recommend running UPDATE STATISTICS with FULLSCAN for the whole database weekly instead of a normal statistics update? One thing I noticed when I compared both execution plans is that the cost of the sort operator is high in the bad plan.
It depends - on a lot of factors. What may be needed is a statistics update with a full scan on one or more of the tables in that query every day. It may require more frequent updates throughout the day if the stats are selected for auto-update - which uses a sampled rate.
It might be okay to update once a week - again, it depends.
You also have to look at index rebuild operations - how often are those tables rebuilt? Do you use reorganize for the index - and if so, do you also update statistics after the reorganize? Note: there are many reasons not to use reorganize - rebuild is much better.
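A quick way to see where a database stands is something like this (the 10% / 1000-page thresholds are just common rules of thumb, not gospel):

SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       i.name                           AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000;  -- small indexes usually aren't worth touching

-- REBUILD refreshes that index's statistics with a full scan as a side effect;
-- after a REORGANIZE you still need an explicit UPDATE STATISTICS.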
Much more analysis will need to be done...
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 12, 2021 at 7:11 pm
For those recommending modifying the code - this is 3rd party vendor code, and modifying that code without the vendor's permission could invalidate the contract. Some vendors will allow customizations - some will not - so make sure you understand what can and cannot be done with that vendor before making any changes.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 12, 2021 at 7:46 pm
For those recommending modifying the code - this is 3rd party vendor code, and modifying that code without the vendor's permission could invalidate the contract. Some vendors will allow customizations - some will not - so make sure you understand what can and cannot be done with that vendor before making any changes.
For those saying that, I recognize that issue. The answer is always "NO" unless you ask. Like I said, rewrite the code (a copy of the code, for sure!) and test it and then hound the vendor. At the very least, though, someone needs to get on the blower with the vendor and make them aware of the problem and task them to fix it.
Heh... of course, we all just put up with such things from MS, don't we? 😉
The other thing is, has the OP followed any of the suggestions provided about rebuilding stats and maybe doing a little thoughtful index checking/maintenance? Like I said before, that's one thing that most 3rd party vendors won't take care of. They just assume the DBA will do that stuff with the rest of the server.
--Jeff Moden
Change is inevitable... Change for the better is not.
May 12, 2021 at 8:06 pm
For those saying that, I recognize that issue. The answer is always "NO" unless you ask. Like I said, rewrite the code (a copy of the code, for sure!) and test it and then hound the vendor. At the very least, though, someone needs to get on the blower with the vendor and make them aware of the problem and task them to fix it.
Never hurts to ask - and it is always a good idea to develop a very good working relationship with the vendor just for these kinds of issues. And of course, have a copy of the code to prove the change works - if you can. I have had some vendors state - and outline in the contract - that creating anything in 'their' database was not allowed - at all.
The other thing is, has the OP followed any of the suggestions provided about rebuilding stats and maybe doing a little thoughtful index checking/maintenance? Like I said before, that's one thing that most 3rd party vendors won't take care of. They just assume the DBA will do that stuff with the rest of the server.
Definitely need to review - if the vendor has not supplied that functionality as part of their solution, you'll need to set it up yourself. They often don't include any maintenance - including backups or integrity checks - and just assume the customer has someone available to set that up.
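As a bare-minimum starting point, something like the following scheduled through Agent jobs (the database name and backup path are placeholders; a proper maintenance framework is the better long-term answer):

-- Nightly full backup with a page checksum validation:
BACKUP DATABASE VendorDB
TO DISK = N'X:\SQLBackups\VendorDB_Full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Regular integrity check:
DBCC CHECKDB (VendorDB) WITH NO_INFOMSGS;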
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 12, 2021 at 8:49 pm
For my databases, I've adopted the "If it moves, shoot it" philosophy for statistics updates. In other words, if the RowModCntr is not zero and the stats haven't been updated in the last 23 hours, I update the individual stats with a FULL SCAN. I do that every night on my most active databases and once a week on those that are less active.
That's a nice luxury to have. I can't afford the hit to the buffer cache to fully scan all my active, large tables every night (billions and billions of rows). That would definitely blow out the existing cache and massively degrade the PLE, which may or may not be critical, depending on your box, but usually will be, since most shops are vastly more I/O bound than CPU bound.
Edit: And, with further thought, there's no chance such a process could finish fully scanning all my updated tables within one night. We have hundreds of clients, many of whom individually have billions and billions of rows.
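On tables that size, a sampled update is about the only practical option, e.g. (the table name and sample rate are illustrative):

UPDATE STATISTICS dbo.HugeTable WITH SAMPLE 5 PERCENT;

-- On SQL Server 2016 SP1 CU4 and later you can also pin the sample rate
-- so subsequent auto-updates reuse it:
-- UPDATE STATISTICS dbo.HugeTable WITH SAMPLE 5 PERCENT, PERSIST_SAMPLE_PERCENT = ON;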
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".