New Servers with lots of options - looking for success & horror stories!

  • Hey! As the SQL DBA at my site, I've got an opportunity to move our database servers to new hardware. We've got a lot of transactional activity (our logs tend to build 40-60GB/day), plus plenty of queries against the smaller tables that support our interactive web apps and large queries against our statistical data (currently in relational tables... pondering SSAS for those).

    Here's my question. We're getting two big SQL boxes - Windows 2008 Enterprise 64-bit, SQL Server 2008 Enterprise. The machine has 4 hex-core CPU's, 96 GB of ram, and 900GB of usable drive space in a RAID 10 configuration of 300GB 15k drives. The sales rep is suggesting we step that down to RAID 10 to get 1.4TB in the array - which we could live with for a while - but we want a little more elbow room. We've also got the option to add an iSCSI array (not sure of its exact specs, but they say they run Oracle databases on iSCSI drives).

    So I'm thinking: should I put the system databases (including master) on the local RAID, the transaction logs on the local RAID as well (for peak performance), and the larger MDFs on the iSCSI drives?

    Has anyone had experience with similar configurations? If so, I'd love to hear both success and horror stories around performance and stability.
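
    To make the layout I have in mind concrete, something like this is what I'm picturing for one of the big statistical databases - the names, drive letters, and sizes are all hypothetical placeholders (say L: is the local RAID 10 and S: is the iSCSI volume):

      -- Data file on the iSCSI volume, transaction log on the local RAID 10
      -- (names, paths, and sizes are placeholders, not a recommendation)
      CREATE DATABASE StatsDB
      ON PRIMARY
          ( NAME = StatsDB_data,
            FILENAME = 'S:\SQLData\StatsDB_data.mdf',
            SIZE = 100GB, FILEGROWTH = 10GB )
      LOG ON
          ( NAME = StatsDB_log,
            FILENAME = 'L:\SQLLogs\StatsDB_log.ldf',
            SIZE = 20GB, FILEGROWTH = 5GB );
      -- System databases (master, msdb, model, tempdb) would stay on the local RAID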

    Thanks!

    Tim Shay

  • I am confused by the RAID 10 and RAID 10 part

    We're getting two big SQL boxes - Windows 2008 Enterprise 64-bit, SQL Server 2008 Enterprise. The machine has 4 hex-core CPU's, 96 GB of ram, and 900GB of usable drive space in a RAID 10 configuration of 300GB 15k drives. The sales rep is suggesting we step that down to RAID 10 to get 1.4TB in the array

    Are you doing clustering between the 2 SQL servers?

    With that much space and that many drives, you may want to break the DATA, LOG, and TempDB files out onto separate spindles with different RAID configurations (rough example below)

    Although iSCSI/SAN is usually fast enough to mask any initial disk problems 🙂
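
    For illustration only - if you do split TempDB out onto its own spindles, the move itself is just a pair of ALTER DATABASE calls plus a service restart (T: below is a placeholder drive letter, and tempdev/templog are the default logical file names):

      -- Point the TempDB files at the new drive; takes effect at the next restart
      ALTER DATABASE tempdb
          MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
      ALTER DATABASE tempdb
          MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');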

    SQLServerNewbieMCITP: Database Administrator SQL Server 2005
  • LOL... that's what I get for posting something like that at the tail end of an all-nighter! Yeah... the second 'RAID 10' was supposed to be 'RAID 5'. Stepping down from RAID 10 to RAID 5 gives us less fault tolerance and (I've heard) slightly less throughput, but it increases the available space dramatically!

    We're looking at mirroring, but for now in a manual-failover sense. We may end up with Server A as principal for database 1 and mirror for database 2, and Server B the other way around, so that each carries production transactions but also serves as an emergency fallback if one of the two is lost.
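
    In rough terms, the per-database setup would look something like this - server names, port, and database names are placeholders, and this assumes each database has already been restored on its mirror WITH NORECOVERY:

      -- On each server: a database mirroring endpoint (port is arbitrary here)
      CREATE ENDPOINT Mirroring
          STATE = STARTED
          AS TCP (LISTENER_PORT = 5022)
          FOR DATABASE_MIRRORING (ROLE = PARTNER);

      -- On Server B (the mirror for Database1):
      ALTER DATABASE Database1 SET PARTNER = 'TCP://ServerA.ourdomain.local:5022';

      -- Then on Server A (the principal for Database1):
      ALTER DATABASE Database1 SET PARTNER = 'TCP://ServerB.ourdomain.local:5022';

      -- High-safety mode without a witness = manual failover only
      ALTER DATABASE Database1 SET PARTNER SAFETY FULL;

      -- Manual failover, run on the principal when needed:
      -- ALTER DATABASE Database1 SET PARTNER FAILOVER;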

    Thanks!

    Tim

  • RAID 5 is not the best of ideas for transaction logs. Logs are write-heavy, and RAID 5 has the slowest write performance of any of the common RAID levels.

    If you must, put data and backups on RAID 5 and logs on RAID 10.
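
    A generic way to see where the write stalls actually land once the system is running is the standard file-stats DMV, along these lines:

      -- Average read/write latency per database file; log files will stand out
      -- if the underlying RAID level can't keep up with the write load
      SELECT  DB_NAME(vfs.database_id)                              AS database_name,
              mf.physical_name,
              vfs.num_of_writes,
              vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_ms,
              vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_ms
      FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
      JOIN    sys.master_files AS mf
              ON  mf.database_id = vfs.database_id
              AND mf.file_id     = vfs.file_id
      ORDER BY avg_write_ms DESC;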

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks! If budget were no object, I think I'd go with a much larger RAID 10 set and separate the heavy I/O onto its own spindles... but the guys we're working with have a 'shopping cart' approach to servers, and I've maxed out the drive bays. I can't complain, though... the iSCSI partition - from what I hear - can be expanded on the fly if we grow our databases faster than we can clean up after ourselves!
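
    And if we do outgrow it, adding space once the iSCSI volume has been expanded should just be a matter of growing the existing file or adding another one - names, paths, and sizes below are placeholders again:

      -- Add another data file on the expanded iSCSI volume
      ALTER DATABASE StatsDB
      ADD FILE
          ( NAME = StatsDB_data2,
            FILENAME = 'S:\SQLData\StatsDB_data2.ndf',
            SIZE = 50GB, FILEGROWTH = 10GB );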
