February 13, 2009 at 10:47 am
We have a machine with four dual-core processors (i.e., eight cores in total), running two database engine instances.
We are thinking of adjusting the processor affinity settings so that each instance is restricted to half of the available CPUs (four cores, i.e., two of the physical processors).
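For what it's worth, on 2005/2008 I believe the relevant setting is the 'affinity mask' bitmask set per instance via sp_configure; something like this sketch, assuming CPUs 0-3 go to the first instance and CPUs 4-7 to the second:

-- Run on instance 1: bind it to CPUs 0-3 (bitmask 00001111 = 15)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 15;
RECONFIGURE;

-- Run on instance 2: bind it to CPUs 4-7 (bitmask 11110000 = 240)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 240;
RECONFIGURE;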
Is this generally good practice on servers hosting multiple SQL instances, and how would we gauge whether the change improved CPU utilization on the server?
I have read about signal waits in the past as they relate to CPU pressure and the sys.dm_os_wait_stats view. Should I just compare the signal waits from that view before and after the change to determine whether restricting CPU affinity per instance was a good decision?
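In other words, I was picturing something like this, assuming the cumulative wait stats are cleared at the start of each test window so the before/after numbers are comparable:

-- Reset the cumulative wait stats at the start of a test window
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);

-- ...let a representative workload run, then:
SELECT
    SUM(signal_wait_time_ms)                 AS signal_waits_ms,
    SUM(wait_time_ms)                        AS total_waits_ms,
    CAST(100.0 * SUM(signal_wait_time_ms)
         / NULLIF(SUM(wait_time_ms), 0)
         AS DECIMAL(5, 2))                   AS signal_wait_pct
FROM sys.dm_os_wait_stats;

My assumption is that a lower signal-wait percentage after the change would mean less time spent waiting for a scheduler, i.e., less CPU pressure.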
Any feedback on how to go about this would be appreciated.
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
March 2, 2009 at 2:05 am
One thing you can look for with sp_who is suspended processes. If you see a lot of suspended processes, restricting affinity could be a solution. It may or may not make a difference, depending on how much throughput each instance has.
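If you'd rather count them than eyeball the sp_who output, sys.dm_exec_requests exposes the same status column; a rough check (session_id > 50 is just a common heuristic for skipping system sessions):

-- Requests waiting on a resource show as 'suspended'; requests
-- waiting for CPU time show as 'runnable'. Lots of either can
-- point at scheduler contention.
SELECT status, COUNT(*) AS request_count
FROM sys.dm_exec_requests
WHERE session_id > 50
GROUP BY status;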
"Who then will explain the explanation? Who then will explain the explanation?" Lord Byron