January 12, 2021 at 1:52 pm
How can we calculate the memory and CPU utilisation of SQL Server for the last 3 months?
January 12, 2021 at 4:54 pm
I think that depends on the tools you have and what you mean by memory and CPU utilization. I expect you are looking for an average load, but looking at a 3 month period may not give you what you want.
If you strictly have SQL Server and no monitoring tools in place, I expect you won't find much if anything that can be helpful. But if you are doing regular monitoring of your servers, you should have a rough idea of what the CPU and memory looks like on any given day.
I generally only monitor mine for spikes and whether or not they are expected. If my CPU is running at 60% most of the time but spikes to 90% at certain times, that means that most of the time 40% of my CPU is being wasted. But on the other hand, if it is usually at 90% and sometimes drops to 60%, then I would say it is resourced well. An example where this may happen is if you have scheduled downtime daily (the company operates 9-5). In this case, I would expect the CPU to be low during downtime and high during company uptime. If you are grabbing an average for the day and have a scenario like this, you may think your CPU is underutilized, as from 5 PM until 9 AM the CPU usage will be low.
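If you just want a quick look at how CPU has trended recently without any monitoring tool in place, one well-known option (a spot check, not a 3-month answer) is the scheduler monitor ring buffer. It only keeps roughly the last 256 one-minute samples (about 4 hours), so anything longer means persisting the output somewhere.

-- Recent CPU utilization from the scheduler monitor ring buffer.
-- Each row is roughly a one-minute sample; only about the last 256
-- samples (~4 hours) are retained, so this is a spot check, not history.
DECLARE @ts_now BIGINT = (SELECT ms_ticks FROM sys.dm_os_sys_info);

SELECT TOP (256)
       DATEADD(ms, -1 * (@ts_now - rb.[timestamp]), GETDATE()) AS sample_time,
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int')
           AS sql_cpu_pct,     -- CPU used by this SQL Server instance
       rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')
           AS system_idle_pct  -- idle CPU; the remainder is other processes
FROM (SELECT [timestamp], CONVERT(XML, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE N'%<SystemHealth>%') AS rb
ORDER BY sample_time DESC;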
As for memory, I want that to be utilized to at LEAST 90% as often as possible. More memory used means less memory wasted. If it is sitting at 50% utilized for any length of time, I need to look into re-allocating the memory to other SQL instances on the same machine. The reason I keep it around 90% rather than 100% (the exact number depends on how much total memory there is) is that I try to keep at least 4 GB free for the OS and other processes, sometimes more for scheduled tasks or intermittent applications that are heavy RAM users such as SSIS or SSRS.
So the answer to your question of how to calculate it is: it depends. Are you looking for the CPU average for the last 3 months? If so, you'd need to look at how often the metrics are being collected and do some math on them. And take into account downtime (or don't... depending on what you want to see).
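To turn that into a 3-month average, the samples have to be stored somewhere first. As a rough sketch only (dbo.CpuSamples is a made-up table; assume a SQL Agent job has been inserting the ring-buffer output above into it), the math itself is just an AVG with whatever time filter you care about:

-- Hypothetical table: dbo.CpuSamples(sample_time datetime2, sql_cpu_pct int),
-- populated on a schedule by an Agent job running the ring-buffer query above.
-- Average SQL Server CPU over the last 3 months, business hours only.
SELECT AVG(sql_cpu_pct * 1.0) AS avg_sql_cpu_pct
FROM dbo.CpuSamples
WHERE sample_time >= DATEADD(MONTH, -3, GETDATE())
  AND DATEPART(HOUR, sample_time) BETWEEN 9 AND 16   -- roughly the 9-5 window
  AND DATEPART(WEEKDAY, sample_time) NOT IN (1, 7);  -- skip weekends (assumes default DATEFIRST)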
Metrics on memory are much easier because memory shouldn't be fluctuating much once SQL has been running past a peak time. SQL doesn't like to give memory back to the OS once it has it, which is why configuring your MAX MEMORY value is very important.
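For what it's worth, you can see how close SQL Server is to its memory target and set that cap with standard commands. The 28672 MB below is only an example value, not a recommendation; the right number depends on total RAM and what else runs on the box.

-- How much memory SQL Server currently holds vs. what it wants.
SELECT RTRIM(counter_name) AS counter_name, cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Total Server Memory (KB)', N'Target Server Memory (KB)');

-- Cap max server memory (value is in MB). 28672 MB (~28 GB) is just an
-- example for a 32 GB box that needs ~4 GB left for the OS and other processes.
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 28672;
RECONFIGURE;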
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
January 13, 2021 at 1:56 pm
The only way you can tell this is if you have a monitoring system set up, either one you build yourself using Perfmon counters, or a 3rd party tool. There's nothing built into SQL Server that automatically captures this kind of information. Additional options include setting up Query Store to capture query metrics on a given database, or Extended Events to capture query metrics, or other metrics, that will show CPU usage. After setting these up, you could then make that determination as you want, per Brian's answer above.

Another option, not one I'm crazy about mind you, is to use the system counters DMV to query the data, say, once an hour or something, put that away into a table, and then use it to aggregate over time (a rough sketch of that approach is below). Again, not my preferred method, but some people balk at Perfmon. Best bet, 3rd party tool. They've done all the heavy lifting for you.
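If you do go the counters-DMV route, a bare-bones version might look something like this. The table name and the counters chosen are only for illustration; schedule the INSERT from an Agent job (hourly, say) and aggregate from the table later.

-- Hypothetical history table for sys.dm_os_performance_counters snapshots.
CREATE TABLE dbo.PerfCounterHistory
(
    capture_time  DATETIME2(0)  NOT NULL DEFAULT SYSDATETIME(),
    object_name   NVARCHAR(128) NOT NULL,
    counter_name  NVARCHAR(128) NOT NULL,
    cntr_value    BIGINT        NOT NULL
);

-- Run this on a schedule (e.g. an hourly SQL Agent job).
-- Note: "per second" counters in this DMV are cumulative since restart;
-- diff consecutive rows to turn them into a rate.
INSERT INTO dbo.PerfCounterHistory (object_name, counter_name, cntr_value)
SELECT RTRIM([object_name]), RTRIM(counter_name), cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Total Server Memory (KB)',
                       N'Target Server Memory (KB)',
                       N'Batch Requests/sec');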
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 18, 2021 at 7:42 am
Thanks for the replies.