single vs multiple files

  • Our company supports VB apps running against a SQL 2000 database. Hardware configuration varies from customer to customer, but most are on high-end servers with clustering, RAID 5, etc.

    At several of our sites the databases are in the 150 GB range. To date we have been structured as a single-MDF database. Our DB is highly transactional, with several tables of 10 million rows or more and constant reads, writes and updates.

    We are considering breaking the DB into multiple filegroups, but are not sure we would get a performance boost that makes it worth the effort. We have already moved tempdb to a separate file, and would like to split off the indexes and data of large tables that are often joined in queries.
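
    Roughly, the kind of change we have in mind is sketched below. This is only a sketch: the filegroup name, file path, and table/index names are placeholders, and sizes would be tuned to the real data.

        -- Add a new filegroup and a file for it on a separate spindle set
        ALTER DATABASE OurDB ADD FILEGROUP FG_Indexes;

        ALTER DATABASE OurDB
        ADD FILE (NAME = OurDB_Idx1,
                  FILENAME = 'E:\SQLData\OurDB_Idx1.ndf',
                  SIZE = 2048MB,
                  FILEGROWTH = 512MB)
        TO FILEGROUP FG_Indexes;

        -- Relocate a heavily used nonclustered index onto the new filegroup
        -- (SQL 2000: re-create it with DROP_EXISTING and an ON clause)
        CREATE INDEX IX_Orders_CustomerID
            ON dbo.Orders (CustomerID)
            WITH DROP_EXISTING
            ON FG_Indexes;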

    Has anyone had any experience with this, and is it worth the effort?

  • It can be worth the effort. Are you on a SAN, though? One big thing is to get the transaction logs off of RAID 5 and onto RAID 10; that would be a big "bang for the buck" move. We'd need to know more about your transactions/sec, read/write mix, and disk subsystem to answer the other question properly.
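
    One quick way to pull those numbers on SQL 2000 is fn_virtualfilestats (counters are cumulative since the last restart) for the read/write side, plus the Perfmon counter SQLServer:Databases - Transactions/sec for throughput. A minimal sketch, assuming a placeholder database name and file ids 1 (data) and 2 (log):

        DECLARE @db int
        SET @db = DB_ID('OurDB')   -- placeholder: your database name here

        -- Cumulative reads/writes and I/O stalls for the data file (id 1)
        -- and the log file (id 2)
        SELECT FileId, NumberReads, NumberWrites, BytesRead, BytesWritten, IoStallMS
        FROM ::fn_virtualfilestats(@db, 1)
        UNION ALL
        SELECT FileId, NumberReads, NumberWrites, BytesRead, BytesWritten, IoStallMS
        FROM ::fn_virtualfilestats(@db, 2)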

    Derrick Leggett
    Mean Old DBA
    When life gives you a lemon, fire the DBA.

  • I have done some performance tuning for an electricity company, where we did the following - and got heaps of extra performance:

    1. Created 2 filegroups and split them into 9 files on SAN RAID 10 (18 x 72GB 15K SCSI HDDs).
    2. Transaction logs also on their own RAID 10 array.
    3. Formatted the volumes with a 64KB stripe size and cluster size (SQL Server writes 8 pages at a time = one extent = 64KB).
    4. Purchased RamDisk from Speeddisk.com, created a RAM disk and moved tempdb to that drive (gained about 35%). Make the RAM disk NTFS with a 64KB cluster size. (The move itself is sketched below.)

    Also make sure that the memory option is set to dynamic.
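
    For reference, the tempdb relocation in step 4 is just a pair of ALTER DATABASE commands plus a SQL Server service restart. This is only a sketch: R: stands for whatever drive letter the RAM disk gets, and tempdev/templog are the default logical file names (adjust if yours differ). The last two calls are simply one way to double-check the dynamic memory setting.

        -- Repoint tempdb's files at the RAM disk (takes effect after a restart)
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'R:\tempdb.mdf')
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'R:\templog.ldf')

        -- Dynamic memory = min server memory left at 0 and max server memory
        -- left at the default (2147483647)
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE
        EXEC sp_configure 'min server memory (MB)'
        EXEC sp_configure 'max server memory (MB)'

    The 64KB cluster size in step 3 has to be chosen when the volume is formatted (e.g. format <drive>: /FS:NTFS /A:64K); it cannot be changed afterwards without reformatting.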

  • I used a 128KB cluster size; might that be detrimental to performance, or mean a lot of wasted space?
