July 9, 2012 at 4:58 am
I pulled some stats from PerfMon and did some analysis - the server has a quad-core, hyperthreaded (2x) CPU presenting a total of 8 logical processors. I took an hour's worth of good data and had a look, and noticed that P6 and P7 are using proportionally more (+20%) processor time than the other cores. Is a 'wonky' distribution like that worth worrying about? All processor values are within good thresholds.
The reason I ask is that I'm looking into changing the CPU affinity mask setting (and the corresponding I/O affinity mask setting) to improve SQL Server's performance. The version is 2005 Standard on Server 2003.
Microsoft's article here -> http://msdn.microsoft.com/en-us/library/ms189629.aspx states that:
To carry out multitasking, Microsoft Windows 2000 and Windows Server 2003 sometimes move process threads among different processors. Although efficient from an operating system point of view, this activity can reduce Microsoft SQL Server performance under heavy system loads, as each processor cache is repeatedly reloaded with data. Assigning processors to specific threads can improve performance under these conditions by eliminating processor reloads; such an association between a thread and a processor is called processor affinity.
However, it doesn't give any further details or recommended reading from someone who has actually tested this. Does anyone have any tips, or links to further information, about improving performance this way?
Thanks
---
Note to developers: Want to get the best help? Click here https://www.sqlservercentral.com/articles/forum-etiquette-how-to-post-datacode-on-a-forum-to-get-the-best-help (Jeff Moden)
My blog: http://uksqldba.blogspot.com
Visit http://www.DerekColley.co.uk to find out more about me.
July 9, 2012 at 5:48 am
Just this side of never.
They're two settings that probably shouldn't be changed in the majority of cases. If you force affinity, you take away SQL Server's ability to move schedulers between processors. With default affinity, if there are 8 cores there are 8 schedulers (plus a couple of hidden ones), but the schedulers aren't bound to the cores. So if SQL Server goes to run a thread on scheduler 1 and CPU 1 is maxed out by something outside SQL Server, it has the option and the ability to put that scheduler onto CPU 2.
Set affinity and you remove that option: scheduler 1 is bound to CPU 1 and has to run there.
I suppose if you have something outside of SQL that always hammers a specific CPU you might want to unset that processor for SQL, but I generally wouldn't suggest it. If you want to limit the CPU that a SQL instance uses on a server, use Windows System Resource Manager instead (http://technet.microsoft.com/en-us/library/cc755056.aspx)
IO affinity - http://blogs.msdn.com/b/psssql/archive/2010/11/19/how-it-works-io-affinity-mask-should-i-use-it.aspx
You will have to test carefully and thoroughly on your hardware with your workload to see if there's any benefit.
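For reference, these are the standard sp_configure options under discussion; a minimal sketch of how to inspect them on SQL Server 2005 and see the scheduler-to-CPU mapping (the bitmask value at the end is only an example - leaving both masks at the default of 0 is what's recommended above):

```sql
-- Show the current affinity settings ('affinity mask' is an advanced option).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask';      -- 0 = default: SQL Server/Windows manage CPU placement
EXEC sp_configure 'affinity I/O mask';  -- 0 = default

-- See which schedulers exist and which CPUs they are currently mapped to.
SELECT scheduler_id, cpu_id, status, is_online
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';

-- Example only: bind SQL Server to CPUs 0-3 (bitmask 0x0F = 15).
-- As discussed above, this removes the schedulers' freedom to move,
-- so it is generally best left at the default of 0.
-- EXEC sp_configure 'affinity mask', 15;
-- RECONFIGURE;
```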
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 9, 2012 at 6:32 am
Thanks Gail. I've taken a 4-hour data set now, and although there is an uneven distribution of processor use, it doesn't vary by more than 10 percentage points from top to bottom. Furthermore, the average and median use is hovering around 40%, so there doesn't seem to be a pressing need to change anything at the moment.
Thank you for the detailed answer and the link.
---
Note to developers: Want to get the best help? Click here https://www.sqlservercentral.com/articles/forum-etiquette-how-to-post-datacode-on-a-forum-to-get-the-best-help (Jeff Moden)
My blog: http://uksqldba.blogspot.com
Visit http://www.DerekColley.co.uk to find out more about me.
July 10, 2012 at 9:31 am
I will guess that you are on VERY old hardware given your OS and SQL Server versions. The best thing you can do in that case is to move to newer hardware and newer versions of the OS and SQL Server.
Also, have you tuned everything up nicely? That is actually the first thing to be done, and in my experience I have NEVER yet been to a client (in 15 years of consulting) that didn't have a BUNCH of low-hanging fruit to SIGNIFICANTLY improve performance/scalability/concurrency.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
July 10, 2012 at 11:18 am
I'm in the process of tuning up and fixing the obvious deficiencies - I've only been here a couple of weeks so far. There's no shortage of targets 🙂 And I believe new hardware is on the way.
---
Note to developers: Want to get the best help? Click here https://www.sqlservercentral.com/articles/forum-etiquette-how-to-post-datacode-on-a-forum-to-get-the-best-help (Jeff Moden)
My blog: http://uksqldba.blogspot.com
Visit http://www.DerekColley.co.uk to find out more about me.
July 7, 2015 at 4:19 pm
I know this is an old post but...
I have a scenario where setting processor affinity might make sense - please tell me if I am wrong. I am setting up SQL Express on a virtual machine, which means I can only use 1 vCPU to run SQL. My virtual machine's sole purpose is to run SQL for 10 users, and I was thinking that if I gave the virtual machine 2 vCPUs and set SQL to use the second vCPU, it would perform better than a virtual machine with only one vCPU. My understanding with virtual machines is that, since all of the VM's resources are virtual and the hypervisor is dividing up its resources to the VM, CPU affinity and NUMA issues would not occur. Wouldn't it be better to have the OS using vCPU1 as its starting point and SQL Express run on the second vCPU? The machine currently fulfilling the role this new VM is going to replace is about 10 years old, running SQL 2005 Standard on a dual-core processor, is also a domain controller and a file server, and works without any issues. I await your advice.
July 8, 2015 at 1:41 am
preston 34832 (7/7/2015)
Wouldn't it be better to have the OS using vCPU1 as its starting point and SQL Express run on the second vCPU?
No, because the OS isn't bound to CPU0 any longer and, iirc, in more modern versions prefers to use the highest CPU to avoid conflicting with old apps.
Leave the CPU affinity at default, let SQL and Windows manage the CPUs.
p.s. In future please post new questions in a new thread.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 8, 2015 at 12:33 pm
OK, thanks for the info.
July 8, 2015 at 7:31 pm
derek.colley (7/10/2012)
I'm in the process of tuning up and fixing the obvious deficiencies - I've only been here a couple of weeks so far. There's no shortage of targets 🙂 And I believe new hardware is on the way.
Don't forget to read the second envelope when the new hardware doesn't help so much. :-P ;-)
--Jeff Moden
Change is inevitable... Change for the better is not.
July 15, 2015 at 12:10 pm
We are standing up 2 new servers, active/passive, on top of NetApp (SAN). We were going to dedicate 2 blades of the new server to SQL, but the NetApp guy has convinced the VM guy that affinity is the way to go, with SQL Server given preference. (It is SQL 2012 now.)
I am not convinced, and all the resources I have read (like this thread) give me the impression: NEVER do this.
So I want to make sure this recommendation hasn't changed (and if it has, what it changed to), or to get a good argument to push back with.
Thanks in advance ;-)
July 15, 2015 at 3:07 pm
swoozie (7/15/2015)
We are standing up 2 new servers, active/passive, on top of NetApp (SAN). We were going to dedicate 2 blades of the new server to SQL, but the NetApp guy has convinced the VM guy that affinity is the way to go, with SQL Server given preference. (It is SQL 2012 now.)
I am not convinced, and all the resources I have read (like this thread) give me the impression: NEVER do this.
So I want to make sure this recommendation hasn't changed (and if it has, what it changed to), or to get a good argument to push back with.
Thanks in advance ;-)
It's going to be really tough to convince such experts, so let the machines do the talking for you. Take a performance baseline of the old systems and, when the new ones are stood up, take an identical baseline of the new systems. Compare and make a decision from there. I don't know if it's possible for them to be right, but only the machines will know for sure.
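One way to capture such a CPU baseline from inside SQL Server itself (a sketch only, assuming SQL Server 2008 or later for this ring-buffer XML shape; PerfMon counters or a monitoring tool work just as well) is to read the scheduler-monitor ring buffer, which holds roughly one sample per minute:

```sql
-- CPU utilisation samples recorded by SQLOS (about one per minute).
-- sql_cpu_pct = CPU used by this SQL Server instance; other_pct = everything else on the box.
SELECT
    DATEADD(ms, rb.[timestamp] - osi.ms_ticks, GETDATE()) AS sample_time,
    rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
    100
      - rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')
      - rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS other_pct
FROM (
    SELECT [timestamp], CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = 'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE '%<SystemHealth>%'
) AS rb
CROSS JOIN sys.dm_os_sys_info AS osi
ORDER BY sample_time;
```

Run the same capture over comparable workload windows on the old and new systems and the comparison largely makes itself.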
--Jeff Moden
Change is inevitable... Change for the better is not.
July 15, 2015 at 3:24 pm
Jeff Moden (7/8/2015)
derek.colley (7/10/2012)
I'm in the process of tuning up and fixing the obvious deficiencies - I've only been here a couple of weeks so far. There's no shortage of targets 🙂 And I believe new hardware is on the way.
Don't forget to read the second envelope when the new hardware doesn't help so much. :-P ;-)
I know I've read about the 3 envelopes, but for the life of me I can't remember it.
July 16, 2015 at 6:35 am
I have no original system to compare to, unfortunately. What we are standing up is completely new. Short of hitting the guy over the head with a lead pipe to stop the use of affinity, I am not sure how to sell the other option.
:crying:
July 16, 2015 at 8:50 am
swoozie (7/16/2015)
I have no original system to compare to, unfortunately. What we are standing up is completely new. Short of hitting the guy over the head with a lead pipe to stop the use of affinity, I am not sure how to sell the other option. :crying:
Since I've not messed with such settings, I don't know for sure, but it seems to me that they could be changed on the fly. If his argument holds any water at all, he should be more than happy to fire up the boxes without the changes, do some demonstrable testing to satisfy you, and then add the new settings and run the same demonstrable tests. If you pose such a challenge as education and proof rather than as a challenge to his knowledge, he might participate. If he doesn't, then his argument for the settings changes doesn't hold any water because he hasn't proven what he says. Also remember that supposed "Best Practices" fall into several levels...
1. Absolute bunk that enough people have quoted as a "best practice" that it has become one, just because a million uninformed people are doing it.
2. Conditional, in that for general systems it may be a "best practice" but not for every instance. In other words, it's a guideline to help protect those who might not know the deeper details.
3. Documented Microsoft recommendations. These are normally great recommendations, but MS doesn't know everything either. They should be treated as you would item #2 above.
4. Demonstrably proven. These are normally the best, but with a strong warning: they can and sometimes do fall into one of the other 3 categories above because the original hypothesis or method of proof is bunk. Just because someone proves something with code or measurements doesn't mean the code or measurement is actually correct.
So, both you and the other guy need to do the experiment in a spirit of cooperation, where the only goal is to prove the given hypothesis one way or the other. Both of you should be very happy to do such a thing because you'll both be given the biggest gift you could ever hope for... knowledge of what is actually true instead of hearsay or conjecture. If the other guy says he's proven it time and again, tell him he should be good at performing such proofs and to please prove it one more time.
Remember that it's not about who is right here. It's a matter of what is right for this instance, and no egos should come into play because you're both trying to do what is best for the instance.
Sorry for the lecture on what you probably already know. I'm just trying to encourage both you and him to do the demonstrable testing so that you both know for sure.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 3, 2015 at 12:41 pm
I was having significant performance issues on a new server with large recordset processing. After months of Microsoft support not finding the issue, I unchecked “Automatically set processor affinity mask for all processors” and checked the individual CPUs. I had also already changed MAXDOP to 0 (it was even faster than 1 or 3 on a 4-CPU instance). Results are blazing fast! So the bug must be that the “Automatically set processor affinity mask for all processors” box does not actually use all the CPUs; it must only see one of them. Manually selecting all the CPUs was required. This was true for both the SQL 2008 R2 and SQL 2012 instances on the 2 separate servers I tried it on.
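For reference, the settings described above can also be scripted rather than set through the SSMS checkboxes; a sketch using the standard commands for SQL 2008 R2 and SQL 2012, shown as an illustration rather than a recommendation either way:

```sql
-- On SQL Server 2008 R2 and later, processor affinity can be set with
-- ALTER SERVER CONFIGURATION instead of the deprecated 'affinity mask' option.

-- Equivalent of the default "automatically set" behaviour:
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = AUTO;

-- Equivalent of manually ticking CPUs 0 through 3:
-- ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 3;

-- MAXDOP = 0 (as mentioned above) lets the optimizer use all available CPUs
-- for a parallel plan.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 0;
RECONFIGURE;
```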