Max Memory

  • We use VM Servers for everything, but we were having some performance issues with our last SQL Server running as a VM.

    We ended up purchasing a physical server last fall and the IT guys got a great deal on the RAM from Dell

    (kudos to them for pushing the envelope!).

    During this upgrade, we moved to:

    Windows Server 2008 R2 (64 bit).

    Intel Xeon (R) CPU @ 2.67 GHz and 2.66 GHz (2 processors)

    (with 8 nodes between them)

    and

    SQL Server 2008 R2

    The ERP is the heart of our system (only REAL db we have with multi-user setup).

    On the old server it was around 25 GB.

    Since the db was moved to the new server, it's now 127 GB.

    (Note - there was no additional data added; a quick way to check whether that growth is real data or just pre-allocated file space is sketched after this post.)

    On average, there are about 35 users hitting this db each day.

    With the new SQL Server, it's barely being used.

    I've watched the nodes get parked often through Resource Monitor.

    But overall, I would say that we've only implemented about 65% of the entire ERP system.

    In summary,

    I would say that we made a great leap forward with this last machine.

    No real performance issues after the upgrade.
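
    Regarding that 25 GB to 127 GB jump with no new data: one way to see whether the 127 GB is actual data or just pre-grown file space is to compare allocated file size with space actually used. A minimal T-SQL sketch (the database name ERP_DB is a placeholder):

        -- Overall size vs. unallocated space for the current database
        USE ERP_DB;
        GO
        EXEC sp_spaceused;
        GO
        -- Per-file breakdown: size on disk vs. pages actually in use
        -- (size and FILEPROPERTY results are in 8 KB pages, hence / 128 for MB)
        SELECT  name,
                size / 128                            AS size_mb,
                FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
        FROM    sys.database_files;

    If used_mb is far below size_mb, the growth is mostly empty space from autogrow or a generous restore rather than new data.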

  • Coming in late.

    I can't win the memory contest, but my bragging server is a 32-core system handling 3000+ connections and at peak over 600k page lookups per second. No, those aren't all table scans either 😀 It's a 54GB db on a 64GB server and disk I/O is nearly nil (the counter query sketched below shows where a number like that comes from).

    But I'm jealous of that 16GB laptop.
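
    For anyone curious how to read that kind of figure on their own box, the Buffer Manager counters are exposed through sys.dm_os_performance_counters. A minimal sketch (counter names as they appear in SQL Server 2005 and later; 'Page lookups/sec' is cumulative since startup, so sample it twice and divide by the elapsed seconds):

        -- Buffer Manager counters: page lookups and page life expectancy
        SELECT  counter_name, cntr_value
        FROM    sys.dm_os_performance_counters
        WHERE   object_name LIKE '%Buffer Manager%'
          AND   counter_name IN ('Page lookups/sec', 'Page life expectancy');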

  • For our BI app, we have 64 GB quad servers; we prefer to scale out.

    Our largest cube is about 300 GB.

  • We are currently building a new server with Windows Server 2008 which I believe will have 64 CPUs and 132 GB of memory. I think we maxed out. There will be 14 LUNs attached: 1 drive for the SQL system databases, a separate one for tempdb, 4 more for the various mdf and ndf files, and another 4 for the various ldf files. The SQL reporting databases have their own LUN (Reporting Services itself runs on a separate server). There is about 1.5 TB of space altogether. This server will be clustered. We start next week as soon as our network guys have finished racking the beast (the tempdb piece of that layout is sketched below).

    Francis
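
    As a rough illustration of the tempdb part of a layout like that, here is a hedged T-SQL sketch; the T:\ drive letter and paths are assumptions, not Francis's actual configuration:

        -- Point tempdb at its dedicated LUN (assumed mount point T:\);
        -- the change takes effect the next time the instance restarts.
        ALTER DATABASE tempdb
            MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb\tempdb.mdf');
        ALTER DATABASE tempdb
            MODIFY FILE (NAME = templog, FILENAME = 'T:\tempdb\templog.ldf');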

  • We are currently building out a new beast box to upgrade our primary database server.

    Replacing a 4-way IA-2 + 56 GB RAM with a 4x X7550 (32 cores / 64 threads) and 192 GB RAM.

  • Well, our server is nothing all that exciting based on this list, but it's 72 GB. Of course, it's used by maybe 10-20 users, and the memory is there entirely to process a set of files that come in once a month and have to be processed in under 24 hours. The rest of the month it doesn't do a whole lot. Other than that 1 day, it's a reporting server :).

    Kenneth

    Kenneth Fisher
    I was once offered a wizard's hat but it got in the way of my dunce cap.
    --------------------------------------------------------------------------------
    For better, quicker answers on T-SQL questions, click on the following... http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better answers on performance questions, click on the following... http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
    Link to my Blog Post --> www.SQLStudies.com

  • Our biggest is a SQL Server 2005 database server with 32 GB RAM, 8 cores, and 200+ GB of data.

  • I have to agree about jaw dropping over 16 gigs in a laptop..

    Most laptops have a max of 8 GB.

    That must have been quite a laptop to start with!

  • aaron-403220 (3/11/2011)


    I have to agree about jaw dropping over 16 gigs in a laptop..

    Most laptops have a max of 8 GB.

    That must have been quite a laptop to start with!

    I have a 17-inch HP that tops out at 24 GB (8 GB x 3), but I've only got 6 GB in it.

    Aaron Hall
    IT Infrastructure Consultant

    Nothing is more confounding than a DBA that uses bureaucracy as a means to inflate his power. Ever try to get an index added to a government-run SQL Server and you'll know what I mean.

  • Here is our biggest SQL Server

    (we will most probably max it out within the next 12-15 months)

    Active/Passive cluster

    Windows 2003 R2

    SQL 2005 Enterprise SP3 CU8 (moving to SP4 CU2)

    64 CPUs

    256 GB RAM

    2 x 3 GB fiber connections to two different dedicated SANs

    Each cluster node is a 4-node IBM NUMA box

    This houses a 2- and 3-tiered OLTP application

    Main database 1 TB

    2 secondary databases of 0.3 TB and 0.4 TB

    600+ user connections 24x7x365

    We are getting an HP DL-980 G7 box to benchmark on as well

    8 octo-core CPUs, hyper-threaded to make up 128 logical CPUs

    1 TB (yes, a terabyte) of RAM

    8 x 8 GB fiber connections to a huge new SAN

    Regards,
    Rudy Komacsar
    Senior Database Administrator
    "Ave Caesar! - Morituri te salutamus."

  • OLAP - 32 cores, 128 GB, Windows 2008 R2 / SQL 2008, 12 TB of page-compressed data

    OLTP - 24 cores, 128 GB, Windows 2008 R2 / SQL 2008 R2, 1 TB of page-compressed data
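
    For anyone who hasn't used page compression, it is enabled per table or index, and the engine will estimate the savings before you commit to a rebuild. A minimal T-SQL sketch (dbo.FactSales is a hypothetical table name):

        -- Estimate the space savings first (hypothetical table name)
        EXEC sp_estimate_data_compression_savings
             @schema_name      = 'dbo',
             @object_name      = 'FactSales',
             @index_id         = NULL,
             @partition_number = NULL,
             @data_compression = 'PAGE';

        -- Rebuild the table (and its clustered index) with page compression
        ALTER TABLE dbo.FactSales REBUILD WITH (DATA_COMPRESSION = PAGE);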

  • Our biggest are a couple of SAP BW systems (Production, Quality), with 64 GB each.

  • OCTom (3/11/2011)


    Philip Barry (3/11/2011)


    We run several servers, however, for our main OLTP cluster each node is 12 core (2 sockets x 6 core) running 128GB RAM. This was a hardware refresh from 4 x 2 core running 32 GB RAM.

    We did see a noticeable drop in I/O for reads (unsurprisingly) which pleased our SAN admin!

    I can remember bragging about 128K! 😛

    Heh... my first desktop computer had 8K and I had some serious bragging rights because I had twice the memory everyone else did.

    As a side bar, I remember when I bought my first Giga-Byte hard drive. It cost $1,100 and the SCSI adapter cost another $142.

    Things have gotten almost stupid with storage greed. BAA-HAA!!! I remember that we used only 2 digit years to save "huge" amounts of disk space and now we have storage "pigs" like XML. 😛

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • We are migrating to a 2-node active-passive cluster on Win 2008/R2 with SQL Server 2008 Enterprise, with 64 GB of RAM on HX5 blades. The use is data pump for HIPAA collectors, with a mix of 1500 users querying the server. Currently we're running this on a 2005 Standard Edition SQL Server with 16 GB of memory. The server is being paged out of memory nightly when the hardware-level network backup jobs run (see the max server memory sketch below).
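
    That nightly paging is the classic symptom that capping SQL Server's memory is meant to prevent: leave enough RAM for the OS and the backup agent so the buffer pool isn't pushed to disk (granting Lock Pages in Memory to the service account is the other common mitigation). A minimal sketch of the cap; the 12288 MB figure is purely illustrative for a 16 GB box, not a recommendation for this server:

        -- Cap max server memory, leaving headroom for the OS and backup agent
        -- (12288 MB is an assumed value)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 12288;
        RECONFIGURE;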

  • Jeff Moden (3/15/2011)


    ......

    I can remember bragging about 128K! 😛

    Heh... my first desktop computer had 8K and I had some serious bragging rights because I had twice the memory everyone else did.

    As a side bar, I remember when I bought my first Giga-Byte hard drive. It cost $1,100 and the SCSI adapter cost another $142.

    Things have gotten almost stupid with storage greed. BAA-HAA!!! I remember that we used only 2 digit years to save "huge" amounts of disk space and now we have storage "pigs" like XML. 😛

    And see mine was an Atari 800 with 800 BYTES of memory. And that was impressive for the time. And I was jealous of my friend who had a tape drive. That's cassette tape by the way.

    Kenneth

    Kenneth Fisher
    I was once offered a wizard's hat but it got in the way of my dunce cap.
    --------------------------------------------------------------------------------
    For better, quicker answers on T-SQL questions, click on the following... http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better answers on performance questions, click on the following... http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
    Link to my Blog Post --> www.SQLStudies.com
