Implementation of a Large SQL Server DB (550 GB) on Blade/SAN

  • Dear All

    Need suggestions from experts who have already worked with a large database (500 GB+) implementation/environment.

    The database size is 550 GB. We are going to use a blade server and SAN for this database (Windows 2003 and SQL Server 2005).

    Blade Server configuration

    32 GB RAM

    2 quad-core processors.

    The OS and SQL Server will be installed on the SAN.

    The database is heavily used 24/7 and is highly financial-oriented.

    We need best practices for the following:

    1 Memory settings (min and max) for SQL Server

    2 CPU settings (affinity) for SQL Server

    3 Virtual memory settings for the server

    4 Backup policy (15 minutes of data loss is acceptable)

    5 TempDB file recommendations

    Thanks in advance

    Regards

    Kokila

  • I personally wouldn't boot from the SAN; I would have local disks for the OS and stick everything else on the SAN. You have many options available, but be sure to set the max memory setting so as not to starve the OS of memory; set the min memory too and ensure it is less than the max.

    Presumably you have Windows 2003 Enterprise, but which edition of SQL Server 2005?

    32-bit or 64-bit?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • Dear Perry Whittle

    SQL Server 2005 Enterprise edition. We are going to use the 32-bit versions of both Windows and SQL Server.

    Thanks

    kokila

  • Boot from local drives, not from SAN. Is it a clustered implementation?

    Leave the CPU affinity as is.

    You will have to enable /PAE and AWE with that much memory. Set max server memory to something like 29 or 30 GB, no more; the OS needs space to map memory. DO NOT set /3GB.

    Set lock pages in memory (required for AWE)
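
    For reference, a rough T-SQL sketch of those memory settings (the values are illustrative for a 32 GB, 32-bit box, not a recommendation for your exact workload; 'awe enabled' only takes effect after a service restart):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'awe enabled', 1;                 -- needs Lock Pages in Memory and /PAE
        EXEC sp_configure 'min server memory (MB)', 16384;  -- example floor, kept below the max
        EXEC sp_configure 'max server memory (MB)', 29696;  -- ~29 GB, leaving headroom for the OS
        RECONFIGURE;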

    I would suggest full backups on the weekend, diff backups nightly and transaction log backups every 15 min. Make sure that you take the backups off the server.
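
    As a rough illustration of that schedule in plain T-SQL (YourDB and the path are hypothetical; in a real job each backup file would get a unique, timestamped name, and your 3rd-party tool may wrap these commands):

        -- Weekly full backup (e.g. Sunday night)
        BACKUP DATABASE YourDB TO DISK = 'E:\Backups\YourDB_full.bak' WITH INIT, CHECKSUM;

        -- Nightly differential
        BACKUP DATABASE YourDB TO DISK = 'E:\Backups\YourDB_diff.bak' WITH DIFFERENTIAL, INIT, CHECKSUM;

        -- Transaction log every 15 minutes (database must be in the FULL recovery model)
        BACKUP LOG YourDB TO DISK = 'E:\Backups\YourDB_log_tail.trn' WITH INIT, CHECKSUM;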

    TempDB should have its own independent SAN LUN (and I do mean independent). Start with 1 TempDB file; if you see contention on the TempDB allocation structures or IO bottlenecks on that LUN, split it into multiple files on multiple independent LUNs.
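
    If it helps, a small sketch of how you could watch for that allocation contention before adding files (PAGELATCH waits on TempDB's PFS/GAM/SGAM pages 2:1:1, 2:1:2 and 2:1:3):

        SELECT session_id, wait_type, resource_description
        FROM sys.dm_os_waiting_tasks
        WHERE wait_type LIKE 'PAGELATCH%'
          AND resource_description LIKE '2:1:%';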

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • You'd just do so much better to go 64-bit. I'd allow at least 4 GB for the OS.

    If your server is new hardware then you will not need the /PAE switch, as most Windows 2003 installs include the PAE kernel and DEP enables all the memory anyway. I really must finish my work on x32 and x64 memory and DEP and get it blogged - I have a backlog of posts to publish!

    The critical thing for SAN storage is to try to get dedicated LUNs and RAID 10 where possible. Don't believe anyone who tells you such things don't matter because "a SAN is faster" (it isn't). Make sure you have partition alignment set correctly on your LUNs too - there's a big performance hit if not.

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Dear GilaMonster

    Thanks for the reply

    May I know what the problem would be if we boot from the SAN? (We plan to install the OS and SQL Server on the SAN.)

    We do not plan to implement a cluster.

    My database is highly transaction-oriented. Every 10 minutes a log backup file of about 200 MB is created. We are using a 3rd-party tool to take and compress the backups. We can afford only 15 minutes of downtime. Please suggest a backup policy for this requirement.

    Can we have the TempDB files on RAID 5?

    Thanks

    kokila K

  • I have the same setup as well. I am using 3rd-party software to do log shipping and backups, and I am using virtual machines for my servers. RAID 1 or RAID 5 - you have to decide. I am using RAID 5 for my SQL Server and I am getting better performance than local disk, though it is still an argument you can go on with. I would go with Gail.

    all the best:)

  • What you really need (and just like I and others told you on the public forums) is to hire a professional to assist you with the setup and configuration and initial management of this system. An unskilled or low-level SQL Server DBA has a minimal chance of making a system such as this perform at a high level.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • kokila (7/31/2008)


    We do not plan to implement a cluster.

    We can afford only 15 minutes of downtime.

    A 15 minute downtime limitation and no cluster? What do you plan to do if the server has a hardware failure? Are you considering a warm/hot standby? Using log shipping? Replication? DB mirroring?

    TempDB should be RAID 10 if at all possible. So should the user databases and log files. RAID 5 has the slowest write performance of the commonly used RAID levels.
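
    On the standby question, a minimal manual sketch of a warm standby driven by the 15-minute log backups (names and paths are hypothetical; the built-in log shipping or database mirroring features add the monitoring and automation around this):

        -- On the standby server: restore the last full backup, leaving the database restoring
        RESTORE DATABASE YourDB FROM DISK = 'E:\Backups\YourDB_full.bak'
            WITH NORECOVERY, REPLACE;

        -- Keep applying each log backup as it is copied across
        RESTORE LOG YourDB FROM DISK = 'E:\Backups\YourDB_log_tail.trn'
            WITH STANDBY = 'E:\Backups\YourDB_undo.dat';  -- or NORECOVERY if read access isn't needed

        -- At failover time, bring the standby online
        RESTORE DATABASE YourDB WITH RECOVERY;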

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I agree with what Gila has stated, but it sounds like you have a very highly transactional system and, depending on the work being done, I'm assuming your TempDB is going to be utilized heavily. I'd recommend making one data file per CPU (hyperthreading doesn't count, nor should it be used anyway), preallocating a few gigabytes per data file, and enabling the -T1118 flag. Watch the size of your TempDB data and log files; if they begin to grow outside of your allocation, grow them appropriately.
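
    A sketch of what that looks like for, say, an 8-core box (logical names, path and sizes are illustrative; weigh it against the advice earlier in the thread to add files only once contention is actually observed):

        -- Presize the existing primary TempDB data file
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);

        -- Add equally sized additional data files, ideally on their own fast LUN(s)
        ALTER DATABASE tempdb ADD FILE
            (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);
        -- ...repeat for tempdev3 through tempdev8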

  • Adam Bean (7/31/2008)


    I'd recommend making one data file per CPU (hyperthreading doesn't count, nor should it be used anyway), preallocating a few gigabytes per data file, and enabling the -T1118 flag.

    That trace flag's only applicable for SQL 2000.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (7/31/2008)


    Adam Bean (7/31/2008)


    I'd recommend making one data file per CPU (hyperthreading doesn't count, nor should it be used anyway), preallocating a few gigabytes per data file, and enabling the -T1118 flag.

    That trace flag's only applicable for SQL 2000.

    I can't remember which big MSDN blog it was, but this was discussed in detail. Yes, in 2005 they made improvements so that you shouldn't need it, but it does still yield better results. I also talked to several Microsoft technicians who backed up the blog, saying it does help, but again it is not truly needed unless you're experiencing issues.

    I'll try to find the post ... haven't had any luck yet, but this was a good read: http://sqlblog.com/blogs/linchi_shea/archive/2007/08/10/reduce-the-contention-on-tempdb-with-trace-flag-1118-take-2.aspx. The moral of the story is to test it in your environment and find out which setting yields better results. For the most part I doubt people need it at this point, but old habits die hard, so I still use it on my servers.
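
    If testing does show a benefit, a quick sketch of turning it on and checking it (to make it survive restarts, add -T1118 as a startup parameter via SQL Server Configuration Manager):

        DBCC TRACEON (1118, -1);      -- -1 enables the flag globally
        DBCC TRACESTATUS (1118, -1);  -- verify it is active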

  • IMNSHO, trace flag usage should be reserved for addressing known, demonstrated problems that the trace flag is known to address. I do not recommend them for prophylactic purposes.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • The point about config and tuning is that you can't do it without stats and trending data that let you see exactly what your database is doing. Multiple files are good but need placing where most appropriate - just doing things for the hell of it can be very dangerous, and counterproductive.

    In general terms I'd start at one file (not filegroup) per two cores - so for 4 cores, two files.

    As for recovery, if you want to support online recovery then you must not use the primary filegroup; put your database objects on a secondary filegroup. RAID 5 is only valid for read-only workloads. I can show that local disk is just about identical to SAN disk given RAID and spindle equality; I just never understand why there's this belief that the same disk in a SAN is quicker! A SAS or SCSI-3 disk has the same performance no matter where it is located.
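
    A small sketch of keeping user data out of PRIMARY so piecemeal/online restores remain possible (database, filegroup and path names are hypothetical):

        ALTER DATABASE YourDB ADD FILEGROUP UserData;
        ALTER DATABASE YourDB ADD FILE
            (NAME = YourDB_Data1, FILENAME = 'F:\Data\YourDB_Data1.ndf', SIZE = 100GB)
            TO FILEGROUP UserData;
        -- New user objects now go to the secondary filegroup by default
        ALTER DATABASE YourDB MODIFY FILEGROUP UserData DEFAULT;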

    The GrumpyOldDBA
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Dear Gila

    Hardware failure - we are planning to have a hot-spare blade server. Since the OS and SQL Server are installed on the SAN, when there is a failure at the server level our monitoring tool will identify it and fail over to the hot spare by attaching the SAN to it and booting the server. Testing is ongoing.

    Warm/hot standby - we are planning to use flash copy.

    Thanks

    Kokila K
