Hello,
"SQL Server Windows NT - 64 bit" in Task Manager shows CPU high at 56,961 MB, and the overall server CPU is now at 95%. Kindly advise the solution.
October 5, 2022 at 10:07 am
The memory is high, not CPU.
CPU is only at 1.2%
What you are seeing is completely normal for SQL Server; it is very memory intensive.
Just verify you have set the SQL Server setting 'max server memory' accordingly, and if you have, there is nothing to worry about.
I would also recommend reading the memory-management sections of this e-book:
https://www.sqlservercentral.com/books/troubleshooting-sql-server-a-guide-for-the-accidental-dba
October 5, 2022 at 11:07 am
Hello,
Thank you. I will check the book.
I found max server memory ==> 2147483647 MB.
For now, shall I preallocate the SQL transaction log? Are there options to decrease that memory?
The transaction log and the memory setting have no correlation to each other.
You should treat each of them as different things to look at.
OK, so max server memory is at 2PB; that's the default setting, so yes, I would recommend you change this to a suitable value.
Generally the rule of thumb is to leave 10% to the OS, but this is a generalised rule; you may prefer to leave more, or less.
So I guess you have 64GB RAM in that machine; leaving 10%, that's 6.4GB RAM to leave to the OS. I like to always round up, so 7GB to the OS.
64GB-7GB = 57GB, so set max server memory to 58368MB
Having SQL use 95% of the memory is perfectly healthy in SQL Server world, do not worry about it using so much RAM if you have set the correct level of RAM you want it to use in the settings like above.
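For reference, here is how that could be applied with sp_configure (a sketch using the 58368MB figure above; 'max server memory (MB)' is an advanced option, so it has to be made visible first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- cap SQL Server's memory manager at 57GB, leaving the rest to the OS
EXEC sp_configure 'max server memory (MB)', 58368;
RECONFIGURE;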
October 5, 2022 at 12:09 pm
Hello ,
Thank you so much, really appreciate it. One last question: so there is no need to restart the service?
October 5, 2022 at 12:13 pm
No, max server memory is a dynamic setting, so you can change it without restarting the service.
If you look in the sys.configurations view, there is an is_dynamic column: if that is 0 the setting needs a restart; if it is 1 it can be applied online with a RECONFIGURE command.
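You can check it yourself with something like this (a quick sketch):

SELECT name, value_in_use, is_dynamic
FROM sys.configurations
WHERE name = 'max server memory (MB)';
-- is_dynamic = 1 means RECONFIGURE applies the change without a service restart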
October 5, 2022 at 12:18 pm
MAXDOP should also be changed, as well as the CTFP value, as they are likely on the defaults (0 and 5 respectively).
DOP depends on how many CPUs you have; CTFP I normally change to 50 and adjust depending on the workload on the server.
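Changing CTFP follows the same sp_configure pattern shown earlier (a sketch using the 50 starting point; it is also an advanced option):

EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
-- queries with an estimated cost below 50 will no longer get parallel plans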
October 5, 2022 at 12:55 pm
Just so we're clear, CTFP = Cost Threshold for Parallelism.
And I agree. Changing it from the default value is a must. Rather than just picking an arbitrary value (common practice is anywhere from 30-50), you can use data to determine a good threshold, as sketched below.
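One data-driven approach (a sketch; the attribute name comes from the showplan XML schema) is to pull the estimated subtree costs out of the plan cache and see where your expensive queries actually land:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (50)
       qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS estimated_cost,
       cp.usecounts,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE cp.cacheobjtype = 'Compiled Plan'
ORDER BY estimated_cost DESC;
-- set CTFP just above the cost of queries that should stay serial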
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 5, 2022 at 2:01 pm
Hello ,
Would you please recommend a value for MAXDOP?
There are 2 NUMA nodes: one contains 8 CPUs, the other 24.
October 5, 2022 at 2:27 pm
Microsoft's recommendations are pretty good, and, I think, based on Jonathan Kehayias' guidance.
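Whichever value you settle on, applying it server-wide is the same sp_configure pattern as before (a sketch; 8 here is only an example value, per the guidance discussed below to stay at 8 or lower):

EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
-- 0 (the default) lets SQL Server use all schedulers for a parallel query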
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 5, 2022 at 5:29 pm
I've found that the "sweet spot" for the 32 core machine at one of the companies I do work for is a MAXDOP of 4. Their biggest problem is that the code they've written... is "performance and resource" challenged and would use all of the CPUs if you let it.
I used to run with the rules of thumb that no one gets more than 1/4 of the machine and no one gets more than 8 CPUs. We needed to temporarily drop to a MAXDOP of 4 for about a week and, in the category of "strange but true", most of the night-time large batch runs ran twice as fast. It turned out to be the splitting and regathering of streams. That's also yet another proof (to me) that "throwing hardware at a performance problem" usually doesn't work. "Performance is in the code... or not", but don't count on hardware saving your keester.
Again, that's just been my observation. And, yes, there have been places where a larger MAXDOP does help a lot (Index Rebuilds, for example... I'm sure there are others) but, for a general setting, I use MAXDOP 4. YMMV.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 5, 2022 at 7:10 pm
Microsoft's recommendations are pretty good, and, I think, based on Jonathan Kehayias' guidance.
So basically - keep it at 8 or lower unless you can specifically identify a scenario where increasing beyond that value is warranted.
With that said, it also depends on the nature of the system. For an OLTP system I would start at no more than 2 and only increase it after identifying that the workload would benefit from allowing more CPUs for parallel processes.
You can override the value for specific databases and/or specific processes. For example, you could increase MAXDOP for integrity checks, index rebuilds, backups, etc.
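For example (a sketch; the table name is a placeholder):

-- per-database override (SQL Server 2016 and later)
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 2;

-- per-statement override for a maintenance task
ALTER INDEX ALL ON dbo.SomeLargeTable REBUILD WITH (MAXDOP = 8);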
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
November 5, 2023 at 1:36 pm
Hello,
"SQL Server Windows NT - 64 bit" in Task Manager shows memory high at 10 GB, and the overall server memory is now at 95%. Kindly advise the solution.
This is almost a duplicate of the original question. The answers provided apply here as well.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning