April 26, 2012 at 8:43 am
Hi,
I was just wondering what your opinion would be on setting a baseline RAM standard for SQL Server 2008 if I were to opt for a maximum of 12 CPU cores.
Is there a default sweet spot or minimum amount of RAM, beyond the operating system requirements, that I should aim to put in with the 12 cores?
i.e. 16 GB / 24 GB / 48 GB etc.?
Many Thanks
April 26, 2012 at 9:02 am
32 or 64 bit?
MVDBA
April 26, 2012 at 9:03 am
64 bit.
April 26, 2012 at 9:07 am
In general, 4 GB of RAM for each CPU.
April 26, 2012 at 9:08 am
Is that written anywhere...?
April 26, 2012 at 9:18 am
If I said that it was going to be an OLTP database server, would that make a huge difference?
April 26, 2012 at 9:19 am
Yes - for each quad-core CPU you should allocate 4 GB.
April 26, 2012 at 9:22 am
nathanr 81822 (4/26/2012)
64 bit.
In that case it all depends on how big and how volatile your data is.
The bulk of the memory will be used for the buffer cache, but if your data changes very quickly then it won't stay in cache for long and your disks will take a hammering. You will need 4 GB for the OS, and I would say most 12-core systems have enough CPU power to run enough queries and processes to chew up 8 GB easily - therefore your minimum is 12 GB.
Remember also that the first core of your last CPU is used primarily for networking (so in a 4-CPU box with 4 cores per chip it would be core 13), so factor that into any calculations.
It's probably also a good idea to go for a server with hot-swap RAM - that way you can minimize the downtime if you need more (or if a block of RAM goes bad).
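If you want a rough idea of where that buffer cache is actually going once the box is live, a standard check against the buffer descriptors DMV shows cached pages per database (numbers here are purely illustrative):

```sql
-- Rough view of buffer cache usage per database (each data page is 8 KB)
SELECT DB_NAME(database_id) AS db_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```

If one database dominates and its pages are churning quickly, that's a hint that more RAM (or faster disks) would help.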
MVDBA
April 26, 2012 at 9:27 am
Not sure I follow any of this advice. CPU and memory are entirely different resources, and the amount you'll need of each before you hit contention will depend on your workload. It doesn't necessarily follow that the more CPUs you have, the more memory you'll need - it depends more on database sizes and workloads.
When you're talking about sweet spots, what about the price/capacity sweet spot?
If you want to arrive at an accurate figure, you have to put in the hard work and analyse the database sizes, number of users, sample workloads and expected performance from your user base.
April 26, 2012 at 9:34 am
michael vessey (4/26/2012)
nathanr 81822 (4/26/2012)
64 bit.
In that case it all depends on how big and how volatile your data is.
The bulk of the memory will be used for the buffer cache, but if your data changes very quickly then it won't stay in cache for long and your disks will take a hammering. You will need 4 GB for the OS, and I would say most 12-core systems have enough CPU power to run enough queries and processes to chew up 8 GB easily - therefore your minimum is 12 GB.
Remember also that the first core of your last CPU is used primarily for networking (so in a 4-CPU box with 4 cores per chip it would be core 13), so factor that into any calculations.
It's probably also a good idea to go for a server with hot-swap RAM - that way you can minimize the downtime if you need more (or if a block of RAM goes bad).
Thanks for your answer Mike! Great info. I'd still like to find some official documentation benchmarking OLTP workloads, but I will most definitely bump up the RAM instead of just having 8 GB - I have 2 CPUs (2 x 6-core), with a minimum of 4 GB of RAM per CPU plus 4 GB minimum for the OS. I can always cap the RAM SQL Server uses, leaving 4 GB for the OS, instead of having to live within the boundaries of how SQL Server decides how much memory it needs.
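i.e. something along these lines - the 12288 MB value is just an example for a hypothetical 16 GB box, not a recommendation:

```sql
-- Leave roughly 4 GB for the OS on an illustrative 16 GB server
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;  -- cap SQL Server at 12 GB
RECONFIGURE;
```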
Thanks