March 10, 2011 at 9:11 pm
Comments posted to this topic are about the item Max Memory
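For anyone unfamiliar with the item under discussion: 'max server memory' is the sp_configure option that caps how much memory SQL Server will take for itself. A minimal sketch of checking and setting it follows; the 8192 MB figure is purely illustrative, not a recommendation.

-- Show advanced options so 'max server memory (MB)' is visible
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Inspect the current value
EXEC sp_configure 'max server memory (MB)';

-- Cap SQL Server at 8 GB (example value only; size to your own workload)
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;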
March 10, 2011 at 9:21 pm
We host a few 64GB RAM systems (physical clusters), but I know some banks in South Africa run even more...
The standard on the virtual machines is usually 32GB, which is proving to be enough at this stage... :w00t:
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
This thing is addressing problems that don't exist. It's solutionism at its worst. We are dumbing down machines that are inherently superior. - Gilfoyle
March 11, 2011 at 1:19 am
We run 64GB on our two clusters. It's the most the server can take anyway.
March 11, 2011 at 1:52 am
Our biggest system currently installed is a SQL 2008 R2 cluster with 98 GB of RAM per node and 12 CPU cores.
March 11, 2011 at 2:07 am
To be honest, I think my jaw dropped when Steve said his *laptop* has 16GB of RAM in it! I can only dream of having hardware like that available; we upgraded our primary database server last year to one with 8GB of RAM... :w00t:
March 11, 2011 at 2:33 am
We run several servers; for our main OLTP cluster, however, each node is 12-core (2 sockets x 6 cores) running 128GB RAM. This was a hardware refresh from 4x 2-core running 32GB RAM.
We did see a noticeable drop in read I/O (unsurprisingly), which pleased our SAN admin!
Regards,
Phil
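A drop in read I/O after a RAM upgrade like Phil's generally shows up as a higher Page Life Expectancy. One quick way to check it from T-SQL (SQL 2005 and later; the LIKE on object_name is there because named instances prefix the counter object with the instance name):

SELECT object_name, counter_name, cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';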
March 11, 2011 at 3:26 am
Our two new servers have 140GB of memory each.
March 11, 2011 at 3:48 am
In our company, we're using a 4-node setup with 4x 8-core CPUs and 128GB RAM as of now, since we used G7 blade servers, which are upgradable to 1TB of memory. :-D
March 11, 2011 at 3:50 am
this is turning into a brag blog 😀
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
This thing is addressing problems that don't exist. It's solutionism at its worst. We are dumbing down machines that are inherently superior. - Gilfoyle
March 11, 2011 at 4:17 am
My previous company was in the process of building and testing some new servers for risk analysis (insurance market); they were 128GB RAM and either 4x 8-core or 8x 4-core CPUs.
My current place, well, let's not go there... massive system, constant archiving and deletion, 800GB+ prod DB... 16GB of RAM, 4x 2-core CPUs, and an old SAN that is in the process of being replaced.
March 11, 2011 at 5:12 am
Currently, the company I am at has everything running on VMs. The ESX servers all have 32 GB of RAM each, and SQL is given anywhere from 4 GB to 16 GB depending on the size of the app.
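When SQL Server lives on a VM like this, it can be worth confirming how much memory the guest actually exposes to the instance. On SQL 2008 and later, one way to sanity-check it is:

SELECT total_physical_memory_kb / 1024 AS total_physical_mb,
       available_physical_memory_kb / 1024 AS available_physical_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;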
March 11, 2011 at 5:37 am
We have 512GB running on a clustered DL980 (8x 8-core) with SQL 2008 R2. The database is over 500GB, with thousands of connections, processing payroll and HR for 700,000+ employees.
March 11, 2011 at 6:19 am
Hi.
SQL Server 2008 @ 128GB RAM, 32 cores
March 11, 2011 at 6:22 am
My desktop (at 8GB) has as much memory as, or more than, any of the servers at my current client. The most I've ever seen was at one of the banks: 64GB, about three years ago.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 11, 2011 at 6:33 am
Philip Barry (3/11/2011)
We run several servers; for our main OLTP cluster, however, each node is 12-core (2 sockets x 6 cores) running 128GB RAM. This was a hardware refresh from 4x 2-core running 32GB RAM. We did see a noticeable drop in read I/O (unsurprisingly), which pleased our SAN admin!
I can remember bragging about 128K! 😛