November 22, 2019 at 9:24 pm
We have a data import & processing job that often runs longer than expected, often delaying the time at which users can begin working.
I've determined that the CPU runs around 80% for the 1-2 hours the process runs (90 minutes is the SLA), but rarely hits 100%.
I've found some queries that use DMV data to determine whether CPU pressure exists, and the results are negative. However, the DMVs these are based on are cumulative since the last time the SQL Server service started, and CPU runs below 30% most of the day. (This is also an Availability Group that fails over once a month, so each instance is mostly idle during alternating months apart from AG replication.)
Is 80% CPU during a time-critical process, by itself, justification to add more cores, or is there more homework to do?
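(For anyone wanting to get around the cumulative-DMV problem: you can snapshot the wait-stats DMV just before the import and diff it afterwards, so the numbers cover only that window. A rough sketch - the temp-table name is just for illustration:

```sql
-- Snapshot wait stats immediately before the import starts
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
INTO #waits_before
FROM sys.dm_os_wait_stats;

-- ...let the import run, then diff to see waits for this window only.
-- A high signal_wait share relative to total wait time suggests CPU pressure.
SELECT w.wait_type,
       w.waiting_tasks_count  - b.waiting_tasks_count  AS tasks_delta,
       w.wait_time_ms         - b.wait_time_ms         AS wait_ms_delta,
       w.signal_wait_time_ms  - b.signal_wait_time_ms  AS signal_wait_ms_delta
FROM sys.dm_os_wait_stats AS w
JOIN #waits_before AS b ON b.wait_type = w.wait_type
WHERE w.wait_time_ms - b.wait_time_ms > 0
ORDER BY wait_ms_delta DESC;
```

Same idea works for most cumulative DMVs: two snapshots and a delta.)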
November 22, 2019 at 10:37 pm
From my own experience, if your CPU is at 80% during ETL/processing then the CPU is not even being used as fully as it should/could be - I would look elsewhere.
Some of my ETLs sit at a straight 95% CPU for 1-2 hours with disk near max IO.
As a starting point, collect perfmon stats for IO, CPU and network - these can be saved and analysed afterwards to see where things were maxed out, assuming anything is maxed at all.
And then I would start looking at which queries in your process are taking the longest and see whether they can, individually, be improved.
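A quick way to find those longest/most expensive statements is the plan-cache DMVs - note these are also cumulative since the plans were cached, so run this shortly after the import window. A sketch:

```sql
-- Top cached statements by total CPU time
SELECT TOP (20)
       qs.total_worker_time / 1000  AS total_cpu_ms,
       qs.total_elapsed_time / 1000 AS total_elapsed_ms,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
               WHEN -1 THEN DATALENGTH(st.text)
               ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Swap the ORDER BY to total_elapsed_time or total_logical_reads depending on whether you're chasing CPU, duration or IO.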
November 23, 2019 at 12:14 pm
Start collecting the perfmon data, and in parallel run something like a server-side trace, sp_WhoIsActive, or Extended Events. You can then compare the code that was running when you saw high CPU or IOPS.
Query tuning or rewriting is needed much more often than adding CPU.
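For example, sp_WhoIsActive (Adam Machanic's free procedure) run every few minutes during the import window will show exactly which statements are active and what they're waiting on. These parameters come from its documented options:

```sql
-- Run during the import window; capture output to a table if you
-- want to review it afterwards (@destination_table option)
EXEC dbo.sp_WhoIsActive
     @get_plans = 1,           -- include query plans
     @get_additional_info = 1, -- extra session/request detail
     @find_block_leaders = 1;  -- surface any blocking chains
```
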
Muthukkumaran Kaliyamoorthy
https://www.sqlserverblogforum.com/