September 17, 2014 at 7:16 pm
For installing SQL Server 2008 on a physical server, how should a server administrator budget for CPUs?
The database will have a "normal" amount of activity and processing. There will be some ETL processing and OLAP cube creation. It won't be too out of the ordinary.
1) # of CPUs. By choosing slower CPUs, I could go with a larger number of them. This seems like a bad idea because of licensing costs.
2) I could go for the processors with the highest clock speed (GHz). This makes sense because I don't think a big cache or a fast cache would be of great benefit, and it keeps the licensing costs down too.
3) CPUs with big caches. Maybe for OLTP activity it could help.
4) CPUs with fast caches. Maybe for OLTP activity low-latency caches could help more than having a big cache.
5) A balanced combination of the above, the same way you would provision a general-use server.
I know memory is important; maybe a discussion like this is inseparable from a RAM budget. I just want to know whether cache latency and cache size matter for SQL Server 2008 performance. Can someone give me some guidance on picking appropriate processors under financial constraints?
September 17, 2014 at 7:52 pm
Heh... I don't know what "normal" is anymore when it comes to things like this. With that in mind...
If it's a new box that will be built, start with 4 cores (the faster, the better, but be wary of the "fastest"... it could be unnecessary cost). Give one to the operating system and 3 to SQL Server (it's not 2012, so you can probably get away with that). Make sure there's an extra socket or two so you can upgrade if you need to.
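If you decide to actually enforce that core split rather than leave scheduling to Windows, the processor affinity setting is the knob. A minimal sketch, assuming a 4-core box where CPU 0 is left to the OS; the mask value of 14 is just my worked example for that layout, not a recommendation:

```sql
-- Hypothetical 4-core box: leave CPU 0 to the OS, give CPUs 1-3 to SQL Server.
-- Bits 1, 2, and 3 set => mask = 2 + 4 + 8 = 14.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 14;
RECONFIGURE;
```

Note that the default (affinity mask = 0, letting Windows schedule everything) is fine on most systems; only pin schedulers if you have a measured reason to.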
External cache is always nice, but a "normal" system should be good to go without it, especially if you bought decent CPUs. Like memory, cache is important (because it IS a form of memory).
Don't skimp on RAM. Buy as much as you can and make sure there's room to add more. If you want a "starter" set, start with at least 64GB (8 will go to the OS and 56 to SQL Server). It would be better to go with 128GB if the budget allows but, like I said, make sure the box can take more without having to swap out the first set of memory, if you can.
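The 8/56 split above maps directly to the instance's max server memory setting, so the buffer pool doesn't starve the OS. A minimal sketch, assuming that 64GB starter box (57344 is just 56 * 1024 MB):

```sql
-- Cap SQL Server at 56 GB, leaving roughly 8 GB for the OS and other processes.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 57344;
RECONFIGURE;
```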
I don't know what you have in mind for disks and controllers. A decent fiber-attached SAN would probably be the best.
The real key is going to be what you put into it for code. Crap code will drag even the best of systems down. Adopt coding standards and make sure that everyone understands that peer reviews (for GUI and T-SQL code) are going to happen and will be enforced. Let no one promote their own code to production. Get management buy-in that doing it right the first time will save 8 times the work and expense in troubleshooting and rework. Then monitor for code that becomes a performance problem, because scalability might have been missed when the code was developed.
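As one concrete example of the kind of thing a peer review should catch: wrapping a function around an indexed column quietly turns an index seek into a full scan, and that's exactly the sort of code that works fine in test and falls over at scale. The table and column below are hypothetical; assume an index exists on SomeDate:

```sql
-- Non-SARGable: the function on the column forces a scan of every row.
SELECT OrderID FROM dbo.Orders WHERE YEAR(SomeDate) = 2014;

-- SARGable rewrite: the column is left bare, so an index on SomeDate can be seeked.
SELECT OrderID FROM dbo.Orders
WHERE SomeDate >= '20140101' AND SomeDate < '20150101';
```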
--Jeff Moden
Change is inevitable... Change for the better is not.
September 17, 2014 at 8:21 pm
I generally read most anything Glenn Berry writes on hardware specific to SQL Server. His latest article on sqlperformance.com is a good read on processors.
You can also get his book for free from RedGate's store.
Shawn Melton
Twitter: @wsmelton
Blog: wsmelton.github.com
Github: wsmelton
September 17, 2014 at 9:47 pm
So for processors, I want good CPUs with fast, small caches? It sounds like a large cache on each processor wouldn't help much.
64 GB of RAM seems like way more than I would need. Does bus speed matter for RAM?
If I don't go with a SAN, I think I want to do this with 15k RPM hard drives:
RAID 5 for MDFs (databases)
RAID 5 for LDFs (TLogging) (not as many drives here as the first RAID 5)
RAID 0 for TempDB (one disk)
September 18, 2014 at 2:39 am
I'll second the recommendation to read Glenn's stuff.
Processor caches are there to reduce the number of times a CPU sits idle waiting for data from memory. Small caches plus a slow bus will likely mean the CPUs wait more often on main memory. Is that a problem? Well, it depends on the workload. If your workload is bottlenecked on disk, the CPUs will be idle anyway.
RAID 0 for TempDB? You want to be in a situation where a single drive failure takes the SQL instance offline until it's replaced? And a single disk isn't RAID at all; RAID stands for 'Redundant Array of Independent/Inexpensive Disks'.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 18, 2014 at 2:53 pm
Maintainer (9/17/2014)
For processors, I want good CPUs with fast, small caches? A large cache on each processor probably wouldn't help.
64 GB of RAM seems like way more than I would need. Does bus speed matter for RAM?
If I don't go with a SAN, I think I want to do this with 15k RPM hard drives:
RAID 5 for MDFs (databases)
RAID 5 for LDFs (TLogging) (not as many drives here as the first RAID 5)
RAID 0 for TempDB (one disk)
I also have to second the recommendation to visit Glenn Berry's site. I thought you were looking for a 100,000-foot opinion, not a design spec. 😛
And the phrases "fast" and "small cache" are normally an oxymoron when it comes to CPUs. :hehe:
--Jeff Moden
Change is inevitable... Change for the better is not.