November 29, 2005 at 6:23 pm
David Poole, SQL Server guru and author here at SQLServerCentral.com forwarded me this hyperthreading article on performance degradation. He was wondering if I'd experienced issues and in general what I thought.
Here's my experience.
I have no idea. When hyperthreading processors first came out, it sounded cool and we were anxious to give it a try. Working at a large corporation, however, meant that we couldn't just go buy a new server to test with, but since we upgraded fairly regularly, we'd be sure to get one eventually. Eventually we did get one; it went into production after minimal testing (none of it comparing hyperthreading versus non-hyperthreading for the SQL guys), and it worked, so we ignored it.
About that time we also got a new server at SQLServerCentral.com, and purchased a hyperthreading one. This was our db server, and the move to separate our IIS from SQL went on. But since it was a new server and we couldn't really spend a bunch of time and effort testing switched-on versus switched-off performance (we had lives, families, kids, etc.), we just turned it on and didn't worry about it. We also have never really stressed the SQL box, so we're not sure if it matters.
Yesterday's QOD was on hyperthreading, and the reference was Slava Oks's blog entry where he laid out a theory of why hyperthreading can perform worse. It made sense to me, and since he's one of the guys who knows how SQL Server works internally, I figured I could believe him.
I think this is one of those really difficult things to test because high loads are hard to simulate. That's assuming, of course, you can get approval, time, money, etc. to even spare a box and generate some type of workload. Simulation software, I think, often does a poor job of reproducing the way your server actually runs under load. Often the high-load conditions I've seen on production servers came from places and areas we didn't expect, and hadn't tested as thoroughly as the ones that performed great. It's also a pain to run Profiler constantly and hope to actually capture a high load that stresses your box in daily business.
The bottom line is testing is really hard to do well and if your server runs well, fine. If not, you could try turning off hyperthreading (AS THE ONLY CHANGE) and see if it helps and let us know here. But if it's a chronic problem, you should just buy a new server.
A bigger server 🙂
Steve Jones
PS, The Ryli Raffle is still underway. Grab a ticket for US$20 and help a friend of mine.
November 30, 2005 at 4:16 am
I've recently disabled hyperthreading on a new DW server (HP DL380).
The reduction in performance with HT enabled was impacting the business quite severely.
In comparison, the old DW server (dual PIII 933MHz) running the same SQL query outperformed the new DW server (dual Xeon 3.0GHz).
November 30, 2005 at 8:07 am
Recently I have been reading that Intel is scrapping hyperthreading processors due to the performance degradation and returning to processors based on the old Pentium III architecture. They were finding that the new Pentium M's (based on the PIII) were performing better than the Pentium 4's with hyperthreading. They scrapped their Pentium 4 roadmap, and starting early next year they are coming out with a new series of processors. I am sure this will also affect the Xeons.
November 30, 2005 at 11:31 am
Based on the recent discussion that cropped up regarding HT, we decided to disable it on one of our dual Xeon processor Dell boxes. In test scenarios, we saw a very minor performance increase, but the more interesting thing was that before the change, the CPU usage tended to never max out anywhere near 100% unless under a very heavy load. Now, for the same kind of load, the CPU seems much more likely to reach close to 100% usage.
We tested this with fairly complicated queries (lots of inner and left joins and conditions) involving ~15 million rows, and saw a modest performance increase of about 8-10%.
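For anyone repeating this kind of before/after test, it's worth confirming what SQL Server itself sees once the BIOS change is made. This is a minimal sketch assuming SQL Server 2005, whose sys.dm_os_sys_info DMV exposes the logical CPU count and the logical-to-physical ratio:

```sql
-- Quick sanity check (SQL Server 2005): is SQL Server still seeing
-- extra logical CPUs from hyperthreading after the BIOS change?
SELECT
    cpu_count,                                       -- logical CPUs visible to SQL Server
    hyperthread_ratio,                               -- logical CPUs per physical package
    cpu_count / hyperthread_ratio AS physical_cpus   -- estimated physical packages
FROM sys.dm_os_sys_info;
```

Note that hyperthread_ratio counts all logical CPUs per package, so on multi-core chips it doesn't distinguish HT from extra cores; on the single-core Xeons discussed here a ratio of 2 with HT disabled would mean the BIOS change didn't take.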
I think it could be that the overhead of virtualizing a single physical processor into two doesn't give you any actual performance gain. It's kind of like saying you take a V-8 engine, split it into two four-cylinder engines, and somehow it's faster. However, the two smaller engines would let you run two separate cars, whereas the single V-8 only powers one car. So HT allows two threads to run simultaneously (each slower than on a dedicated processor) without blocking each other. Without HT, you could run those two threads on a single processor, but if one thread took up too much CPU, the other would sleep until enough resources were free.
Just my take on it, interesting discussion for sure!
November 30, 2005 at 2:19 pm
Wow... seems like suddenly everyone is talking about this potential problem. I read an article about this earlier this week, and the symptoms described an ETL process we've been having trouble with. Sometimes the INSERT...SELECT query runs in minutes; other times it runs for hours and CPU utilization grows to 100%. This is a query that heavily uses parallelism. Another interesting thing is that all the processors sometimes end up in lockstep, with utilization growing and shrinking the same way on every one.
It's a BIOS setting on our DELL PowerEdge servers, but we're going to take one down and disable it. I'll let you know how that goes...
cl
Signature is NULL
November 30, 2005 at 9:13 pm
It seems like a few of you have had interesting results. I may try it here if I remember one early Sunday morning and reboot the server.
And if I feel like driving to the colo
Also, we'd love some real test results if someone wants to write them up.
December 1, 2005 at 7:46 am
It's an interesting issue and I probably have some extreme views on HT < grin >
I run an 8-way cluster (16 with HT) on W2003, also with 32GB RAM. So do I see problems with HT? Yes - from poor SQL!!
I have spent some time eliminating "issues" with HT (the CXPACKET waits, etc.). I have some poor SQL queries and I have some views from hell; I also have apps that use only embedded SQL and generally perform SELECT * - mostly with simple WHERE clauses, so selections can be huge. What I've found is that nearly all the poor queries can be optimised, and in a few very extreme situations I have added maxdop hints. In all the years I've been using HT servers I've never considered turning it off in either OLTP or DSS applications. What I've always found is that the SQL can be better optimised - a whole new ballgame and not part of this thread.
My suggestion is that queries should be analysed to find out why parallelism is giving problems, and steps taken to resolve it. I have some queries which run better with parallelism - lots of testing done. Over the years I've tended to become more cynical about blanket statements on subjects such as HT. I'm sure we're going to see multiple cores replace HT, just as we're going to see 64 bit replace 32 bit, which in turn replaced 16 bit; that doesn't mean the technology is flawed. In general terms, I believe the recommendation on an HT server is to set the number of processors available for parallelism to the number of physical procs (not the mask). I have seen some attempts by our developers to (unsuccessfully) prove HT was giving problems to their code - I was able to show that the queries could be better optimised.
It doesn't always work - sometimes that 10-table join with 200 WHERE clauses using OR, NOT and IN with 40 subselects just can't be better written (well, by me anyway!), and then invoking the maxdop hint is good.
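For readers who haven't used the hint, the approach described above looks something like this. The table and column names are made up for illustration; the point is the OPTION clause, which caps parallelism for this one statement instead of disabling HT server-wide:

```sql
-- Illustrative only: forcing a serial plan for one badly-behaved
-- parallel query, leaving the rest of the server untouched.
SELECT o.OrderID, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20051101'
OPTION (MAXDOP 1);   -- this query only; server setting is unchanged
```

Values above 1 are also valid (e.g. MAXDOP 2 to limit the query to two schedulers rather than serializing it entirely).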
Do I want to take away all the advantages of parallel index rebuilds, dbcc, backups and such - absolutely not!
my two pennyth !!
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
December 2, 2005 at 11:12 am
In the interest of keeping this thread going...
Colin, since the issue posted in the blog deals with worker threads such as the lazy writer, wouldn't this also affect index builds? Given that, according to BOL, you cannot set the maxdop hint on index builds, the next option you presented was setting the max degree of parallelism server option. According to the articles, the problem is a fundamental flaw in the design of the processor. All that being said, I agree that you can use the software to try to mask the flaw in the hardware, but wouldn't it be simpler to just turn off HT?
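For reference, the server-wide fallback mentioned above is an sp_configure setting. A minimal sketch, assuming two physical processors (adjust the value to your own physical CPU count, per the earlier recommendation):

```sql
-- Server-wide cap on parallelism, covering operations that can't take
-- a per-query hint (such as index builds on SQL Server 2000).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 2;  -- number of physical procs (assumed 2 here)
RECONFIGURE;
```

Unlike a BIOS change, this takes effect without a reboot, so it's an easy experiment to back out.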
Ok, all that being said, I also have to wonder, since the first place I saw this issue was on Cnet, whether Cnet is biased against Intel for some reason. Cnet also posted an article on dual-core AMD vs. Intel, where Intel did not win a single benchmark test!
Basically, I'd like to keep an open mind on this, but everything I'm seeing seems to be in favor of disabling hyper-threading.
December 2, 2005 at 1:40 pm
"net also posted an article on dual core amd vs. Intel, where intel did not win a single benchmark test!"
Ummm... this doesn't mean they're biased, right? Could it be, gasp!, that AMDs are getting better than Intels? The answer seems to be yes, mainly due to power consumption. Here's an interesting article on Tom's Hardware.
Signature is NULL