July 22, 2010 at 8:30 am
We're looking to buy new server hardware next month and I have to configure it properly. For an OLTP system running SQL Server 2005 on Windows Server 2003 x64, would we benefit more from more cores or from a higher CPU frequency?
The server will process approximately 20 million commands/transactions a day, most of which don't involve large data sets; mostly SELECTs, with around 1,500 connections at a time.
At first I was thinking of going with the new six-core CPUs, under the theory that more cores will allow more threads to run at once. Now I'm thinking that since it's an older OS and version of SQL Server, we may benefit more from the higher clock speed, but we rarely see CPU utilization go above 50% on that server.
The CPUs are going to be the new Xeon 5600s, but I'm not sure whether to buy the quad-core low-power parts or the six-core SKUs.
July 22, 2010 at 8:38 am
Server 2003 x64 and SQL Server 2005 should be able to use all those cores just fine. I think you will benefit more from the parallel processing of multiple cores.
July 22, 2010 at 11:23 am
You also have to think about licensing costs, which may or may not be at the core level.
--Jeff Moden
Change is inevitable... Change for the better is not.
August 10, 2010 at 8:50 pm
jvanderberg (7/22/2010)
Server 2003 x64 and SQL Server 2005 should be able to use all those cores just fine. I think you will benefit more from the parallel processing of multiple cores.
Depends on the edition of SQL Server. See http://msdn.microsoft.com/en-us/library/ms143760(SQL.90).aspx for the official specifications.
For an OLTP workload, parallelism will be a very rare animal. Many DBAs disable parallelism on this type of workload at the server level (though this can be overridden at the query level using the MAXDOP query hint).
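For reference, a minimal sketch of both approaches (the syntax is valid on SQL Server 2005; the table, columns, and MAXDOP value below are purely illustrative, not recommendations):
-- Disable parallelism instance-wide (MAXDOP 1 = no parallel plans)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- Override at the query level for a statement that genuinely benefits from parallelism
SELECT CustomerID, COUNT(*) AS order_count   -- hypothetical table and columns
FROM dbo.Orders
GROUP BY CustomerID
OPTION (MAXDOP 4);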
As far as the original question is concerned, all things being equal I would go with the higher number of cores too - SQL edition issues permitting - but for different reasons.
If the current SQL Server licence is per-processor, that's per physical CPU (socket), not per core, by the way.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
August 11, 2010 at 7:23 am
As I understand it, most DBAs disable parallelism for single queries because there are concurrency issues that Microsoft has not truly resolved. I was thinking more along the lines of 4 queries (or 400) running across 4 processors, not 1 query running across all 4.
Please correct me if I'm off base here.
August 11, 2010 at 11:47 am
We're on per-server/CAL licensing and we do have parallelism disabled. I played with it on another server, but it's something you have to set just right. It's hard to replicate the workload in testing, and no one is going to let us test in production and risk performance issues.
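For anyone tuning the same thing, the other instance-level setting that usually goes hand in hand with MAXDOP is the cost threshold; a rough sketch, with the value of 25 purely an illustrative starting point rather than a recommendation:
-- Only queries whose estimated cost exceeds this threshold are considered for a parallel plan
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 25;   -- illustrative value only
RECONFIGURE;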
August 12, 2010 at 2:02 pm
I don't think it matters very much...the CPU (assuming you've got any Xeon made within the last 5 years) is rarely the bottleneck on such systems.
Take a look at the configurations used to set SPEC speed records, and you can find single-core single CPU systems with 64GB+ of RAM and, like, 172 spindles before the CPU becomes the limiting factor.
My own preference is higher clock speed, because not everything can be parallelized. More importantly, look to your disk subsystem: use RAID 10 (so called, I mean 1+0 of course) because everything else punishes your Write performance. For OLTP it's about Write performance, so don't listen to those who tell you RAID-5 is the way to go! And get lots and lots of drives, and properly balance the Channels you put them on. And don't skimp on RAM...it's almost certain to be more important than any choice you make on the CPU.
August 13, 2010 at 12:27 pm
I agree, licensing will get you every time. :w00t:
August 13, 2010 at 12:48 pm
dan.nutley (8/12/2010)
I don't think it matters very much...the CPU (assuming you've got any Xeon made within the last 5 years) is rarely the bottleneck on such systems. Take a look at the configurations used to set SPEC speed records, and you can find single-core single CPU systems with 64GB+ of RAM and, like, 172 spindles before the CPU becomes the limiting factor.
It very much depends on the workload. Many systems have enough RAM to ensure that a very large fraction of the database's working set is in memory, so most I/O is logical rather than physical. TPC benchmarks are deliberately designed to stress physical I/O.
My own preference is higher clock speed, because not everything can be parallelized.
But SQL Server will run one scheduler per core with many worker threads per core, so while a single batch might not use parallelism, many batches can run simultaneously on separate worker threads on each core. I would much rather have two 2GHz cores than a single 3GHz CPU for that reason.
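A quick way to see the one-scheduler-per-CPU layout and current worker activity is the schedulers DMV (available in SQL Server 2005); a rough sketch, not a tuning script:
SELECT scheduler_id,
       cpu_id,
       current_tasks_count,
       runnable_tasks_count,      -- tasks that are ready to run but waiting for CPU time
       current_workers_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';  -- exclude internal/hidden schedulers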
More importantly, look to your disk subsystem: use RAID 10 (so called, I mean 1+0 of course) because everything else punishes your Write performance. For OLTP it's about Write performance, so don't listen to those who tell you RAID-5 is the way to go! And get lots and lots of drives, and properly balance the Channels you put them on.
OLTP is generally much heavier on read than write. Write caches (particularly large ones associated with SANs) can make the difference in write performance between RAID levels hard to measure. Many RAID solutions exist (RAID 5, 6, 50, 60, etc.) and each has a valid use case based on price/performance.
And don't skimp on RAM...it's almost certain to be more important than any choice you make on the CPU.
And yet more RAM means more CPU usage and less physical I/O, since there is a greater chance of finding the page already in the buffer pool.
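One rough way to gauge whether memory (rather than CPU) is the limiting factor is the Page Life Expectancy counter; a sketch using the performance counters DMV, assuming the default Buffer Manager object name:
SELECT [object_name], counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page life expectancy%'
  AND [object_name] LIKE '%Buffer Manager%';
-- Persistently low values suggest the buffer pool is churning and more RAM would help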
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
August 13, 2010 at 3:28 pm
Paul,
Yes, certainly the specifics of the workload dictate how much CPU you need. Your points are all valid, but mine still stands: rarely is the CPU the bottleneck. Also, for OLTP workloads it is often write performance that is the bottleneck (whether or not more reads than writes are performed), and the other RAID configurations you mention are not really appropriate; RAID 10 is.
If you want to test whether that's true, look at the configurations for TPC benchmarks where either a DB manufacturer or a hardware manufacturer is trying for bragging rights by setting the fastest benchmark. If you want to win, you use RAID 10. At least, I've never seen a configuration that didn't use it set a new record. Have you?
As for RAM, there are far too many people who, when they want a fast DB server, buy a quad-CPU server (with however many cores each) and then configure it with 4 GB of RAM. Their processors loaf along at 5 or 10% load and they get bad performance overall, because they are waiting on their million-times-slower disk subsystem instead of having what they need in RAM a good portion of the time. They'd be far better off with one CPU and 16 GB of RAM (unless their whole database is smaller than that).
That said, for this particular application, it can't actually be determined without benchmark testing which of his two choices of processor would be best. My caution to him is that if his OLTP workload has significant disk I/O going on, especially write ops, then his choice of processor is not likely to matter, because neither one of them is ever likely to hit 60% utilization. But again, without an actual test, that's just an educated guess.
August 13, 2010 at 3:44 pm
I think we'll just have to agree to disagree on this one 🙂
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
September 30, 2010 at 4:27 am
The thing is, if they don't even use 50% of the current CPU, something else is the bottleneck, and they won't see any improvement from more cores or a faster CPU.
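One rough way to confirm where the time is actually going is the wait statistics DMV (available in SQL Server 2005); a sketch, excluding a short, non-exhaustive list of benign waits:
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms        -- time spent waiting for CPU after the resource became available
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
                        'SQLTRACE_BUFFER_FLUSH', 'WAITFOR')
ORDER BY wait_time_ms DESC;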
October 1, 2010 at 6:30 am
Sometimes vendors allow you to bring a server in for testing. They'll loan you a server for a month or so. Why not see if you can get one of each and run your own performance testing?
Steve
October 1, 2010 at 7:18 am
dan.nutley (8/13/2010)
...for OLTP workloads it is often Write performance that is the bottleneck
I've found that, as with all other applications, it's usually not the hardware that causes the poor write performance... rather, it's almost always a mix of poor table/index design and performance-challenged code.
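For what it's worth, one quick sanity check on the design side is to look for indexes that are written far more often than they are read; a sketch using sys.dm_db_index_usage_stats (available in SQL Server 2005), run in the database of interest:
SELECT OBJECT_NAME(s.[object_id]) AS table_name,
       i.name AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates AS writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.[object_id] = s.[object_id] AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC;
-- Indexes with many writes and almost no reads are candidates to drop, which helps write performance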
--Jeff Moden
Change is inevitable... Change for the better is not.
October 1, 2010 at 1:06 pm
dan.nutley (8/12/2010)
I don't think it matters very much...the CPU (assuming you've got any Xeon made within the last 5 years) is rarely the bottleneck on such systems.
I agreed with you... until I worked with rebuilding multi-gigabyte PAGE-compressed indexes on SQL Server 2008 R2 Enterprise, which were CPU-bound on a quad-socket, quad-core-per-socket (16-core) machine built two years ago, even when all 16 cores were in use... and the disk is a mere 5-disk RAID 5 shared out to everything, OS included (Resource Monitor shows 200-300 MB/s disk-to-disk transfers).
With PAGE compression, at least, some operations have become CPU bound on reasonable machines.
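For reference, the kind of statement involved (SQL Server 2008+ syntax; the table and index names are hypothetical, and capping MAXDOP is just one way to keep a rebuild from saturating every core):
ALTER INDEX IX_Orders_OrderDate            -- hypothetical index
ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE,
              MAXDOP = 4,                  -- limit how many cores the rebuild can use
              SORT_IN_TEMPDB = ON);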