more cores vs higher CPU speed

  • We're looking to buy new server hardware next month, and I have to configure it properly. For an OLTP system running SQL Server 2005 on Windows Server 2003 x64, would we benefit more from more cores or from a higher CPU frequency?

    The server will process approximately 20 million commands/transactions a day, most of which aren't large data sets (mostly SELECTs), with around 1,500 connections at a time.

    At first I was thinking of going with the new six-core CPUs, on the theory that more cores will allow more threads to run at once. Now I'm thinking that since it's an older OS and version of SQL Server we may benefit more from the higher speed, but we rarely see CPU utilization go above 50% on that server.

    The CPUs are going to be the new Xeon 5600s, but I'm not sure whether to buy the low-power quad-core parts or the six-core SKUs.

  • Server 2003 x64 and SQL Server 2005 should be able to use all those cores just fine. I think you will benefit more from the parallel processing of multiple cores.

  • You also have to think about licensing costs, which may or may not be at the core level.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • jvanderberg (7/22/2010)


    Server 2003 x64 and SQL Server 2005 should be able to use all those cores just fine. I think you will benefit more from the parallel processing of multiple cores.

    Depends on the edition of SQL Server. See http://msdn.microsoft.com/en-us/library/ms143760(SQL.90).aspx for the official specifications.

    For an OLTP workload, parallelism will be a very rare animal. Many DBAs disable parallelism on this type of workload at the server level (though this can be overridden at the query level using the MAXDOP query hint).
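
    For reference, the server-level setting and the per-query override look like this on SQL Server 2005 (a sketch; `dbo.OrderDetail` is a hypothetical table):

    ```sql
    -- Disable parallelism at the server level (an advanced option)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;

    -- Override for a single query that does benefit from parallelism
    SELECT SUM(od.Quantity)
    FROM dbo.OrderDetail AS od   -- hypothetical table, for illustration only
    OPTION (MAXDOP 4);
    ```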

    As far as the original question is concerned, all things being equal I would go with the higher number of cores too - SQL edition issues permitting - but for different reasons.

    If the current SQL Server licence is per-processor, that's per physical CPU, not per core by the way.

    Paul

  • As I understand it, most DBAs disable parallelism for single queries because there are concurrency issues that Microsoft has not truly resolved. I was thinking more along the lines of 4 queries (or 400) running across 4 processors, not 1 query running across all 4.

    Please correct me if I'm off base here.

  • We're on per-server/CAL licensing, and we do have parallelism disabled. I played with it on another server, but it's something you have to set right. It's hard to replicate the workload in testing, and no one is going to let us test in production with performance issues.

  • I don't think it matters very much...the CPU (assuming you've got any Xeon made within the last 5 years) is rarely the bottleneck on such systems.

    Take a look at the configurations used to set SPEC speed records, and you can find single-CPU, single-core systems with 64GB+ of RAM and something like 172 spindles before the CPU becomes the limiting factor.

    My own preference is higher clock speed, because not everything can be parallelized. More importantly, look to your disk subsystem: use RAID 10 (so called; I mean RAID 1+0, of course) because everything else punishes your write performance. For OLTP it's about write performance, so don't listen to those who tell you RAID 5 is the way to go! And get lots and lots of drives, and balance them properly across the channels you put them on. And don't skimp on RAM... it's almost certain to be more important than any choice you make on the CPU.
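
    If you want to measure whether the disk subsystem really is the bottleneck before buying anything, SQL Server 2005's sys.dm_io_virtual_file_stats function reports cumulative I/O stalls per database file. A sketch:

    ```sql
    -- Average read/write latency per database file since instance startup
    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY avg_write_ms DESC;
    ```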

  • I agree, licensing will get you every time. :w00t:

  • dan.nutley (8/12/2010)


    I don't think it matters very much...the CPU (assuming you've got any Xeon made within the last 5 years) is rarely the bottleneck on such systems. Take a look at the configurations used to set SPEC speed records, and you can find single-core single CPU systems with 64GB+ of RAM and, like, 172 spindles before the CPU becomes the limiting factor.

    It very much depends on the workload. Many systems have enough RAM to ensure that a very large fraction of the database's working set is in memory, so most I/O is logical rather than physical. TPC benchmarks are deliberately designed to stress physical I/O.

    My own preference is higher clock speed, because not everything can be parallelized.

    But SQL Server will run one scheduler per core with many worker threads per core, so while a single batch might not use parallelism, many batches can run simultaneously on separate worker threads on each core. I would much rather have two 2GHz cores than a single 3GHz CPU for that reason.
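
    That scheduler-per-core arrangement is visible directly in the sys.dm_os_schedulers DMV on SQL Server 2005; for example (scheduler_id values of 255 and above are hidden internal schedulers):

    ```sql
    -- One visible scheduler per CPU core, each with its own pool of workers
    SELECT scheduler_id,
           cpu_id,
           current_tasks_count,
           runnable_tasks_count,
           current_workers_count
    FROM sys.dm_os_schedulers
    WHERE scheduler_id < 255;
    ```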

    More importantly, look to your disk subsystem: use RAID 10 (so called, I mean 1+0 of course) because everything else punishes your Write performance. For OLTP it's about Write performance, so don't listen to those who tell you RAID-5 is the way to go! And get lots and lots of drives, and properly balance the Channels you put them on.

    OLTP is generally much heavier on read than write. Write-caches (particularly large ones associated with SANs) can make the difference between RAID levels' write performance hard to measure. Many RAID solutions exist (RAID 5, 6, 50, 60 etc) and each has a valid use case based on price/performance.

    And don't skimp on RAM...it's almost certain to be more important than any choice you make on the CPU.

    And yet more RAM means more CPU usage and less physical I/O, since there is a greater chance of finding a page already in the buffer pool.
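
    On SQL Server 2005, how well the buffer pool is doing can be watched through the Page life expectancy counter, exposed via sys.dm_os_performance_counters (a sketch; the counter reports how many seconds a page is expected to survive in cache):

    ```sql
    -- Expected lifetime, in seconds, of a page in the buffer pool
    SELECT object_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy';
    ```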

  • Paul,

    Yes, certainly the specifics of the workload dictate how much CPU you need. Your points are all valid, but mine still stands: rarely is the CPU the bottleneck. Also, for OLTP workloads it is often Write performance that is the bottleneck (whether or not there are more Reads than Writes performed) and the other RAID configurations you mention are actually not appropriate--RAID 10 is.

    If you want to test whether that's true, look at the configurations for TPC benchmarks where either a DB manufacturer or a hardware manufacturer is trying for bragging rights by setting the fastest benchmark. If you want to win, you use RAID 10. At least, I've never seen one set a new record without it. Have you?

    As for RAM, there are far too many people who, when they want a fast DB server, buy a quad-CPU server (with however many cores each) and then configure it with 4GB of RAM. Their processors loaf along at 5 or 10% load and they get bad performance overall... because they are waiting on their million-times-slower disk subsystem instead of having what they need in RAM a good portion of the time. They'd be way better off with one CPU and 16GB of RAM (unless their whole database is smaller than that).
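
    A quick way to see whether a server is spending its time waiting on disk rather than on CPU is the aggregate wait statistics. A sketch against SQL Server 2005's sys.dm_os_wait_stats (heavy PAGEIOLATCH_* waits point at physical reads; SOS_SCHEDULER_YIELD points at CPU pressure):

    ```sql
    -- Top waits since the last restart: heavy PAGEIOLATCH_* waits
    -- implicate the disk/RAM side; SOS_SCHEDULER_YIELD the CPU side
    SELECT TOP (10)
           wait_type,
           wait_time_ms,
           waiting_tasks_count
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;
    ```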

    That said, for this particular application it can't be determined without benchmark testing which of his two choices of processor would be best. And my caution to him is that if his OLTP workload has significant disk I/O going on, especially if it is write ops, then his choice of processor is not likely to matter, because neither one of them is ever likely to hit 60% utilization. But again, without an actual test, that's just an educated guess.

  • I think we'll just have to agree to disagree on this one 🙂

  • The thing is, if they don't even use 50% of the current CPU, something else is the bottleneck, and they won't see any increase from more cores or a faster CPU.

  • Sometimes vendors allow you to bring a server in for testing. They'll loan you a server for a month or so. Why not see if you can get one of each and run your own performance testing?

    Steve

  • dan.nutley (8/13/2010)


    ...for OLTP workloads it is often Write performance that is the bottleneck

    I've found that, as with all other applications, it's usually not the hardware that causes the poor write performance... rather, it's almost always a mix of poor table/index design and performance challenged code.

    --Jeff Moden



  • dan.nutley (8/12/2010)


    I don't think it matters very much...the CPU (assuming you've got any Xeon made within the last 5 years) is rarely the bottleneck on such systems.

    I agreed with you... until I worked with rebuilding multi-gigabyte PAGE-compressed indexes on SQL Server 2008 R2 Enterprise, which were CPU bound on a quad-socket, quad-core-per-socket (16-core) machine built two years ago, even with all 16 cores in use... and the disk is a mere 5-disk RAID 5 shared out to everything, OS included (Resource Monitor shows 200-300 MB/s disk-to-disk transfers).

    With PAGE compression, at least, some operations have become CPU bound on reasonable machines.
