Database Sizing

  • Our production database is growing like mad!

From 2 GB to nearly 40 GB in less than a year.

    Archiving will be addressed, but not in the near future. Here is my question:

Our 40 GB database lives on a Compaq ProLiant server with hardware RAID 5. At present the data is a single file and the log is a single file. Space on the server is not an issue (yet). Should I begin to create multiple files for this behemoth? Any feedback would be most appreciated.

    BTW, SQLServerCentral is GREAT!

    When the snows fall and the white winds blow,The lone wolf dies but the pack survives.

    Once you've accepted your flaws, no one can use them against you.

  • By adding more files you can gain performance. If you spread several files across filegroups, you get parallelism similar to what RAID gives you.

    You should also check why your database is growing. Is it the data files that are growing, or the log file? If the log file is growing, then you should check what's happening; it might be a problem.

    If you add log files, you won't gain performance, because SQL Server writes the log serially: it uses the first file, and when that one becomes full, it moves on to the next, and so on.
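    To make the filegroup suggestion above concrete, here is a minimal T-SQL sketch of adding a secondary filegroup with two files spread across drives. The database name, file names, and paths are all placeholders; adjust sizes to your own environment.

    ```sql
    -- Create a secondary filegroup (hypothetical names: MyDB, FG_Data).
    ALTER DATABASE MyDB ADD FILEGROUP FG_Data;

    -- Add two data files to it, ideally on separate physical arrays,
    -- so SQL Server can fill them proportionally and read in parallel.
    ALTER DATABASE MyDB
    ADD FILE
        (NAME = MyDB_Data2, FILENAME = 'E:\Data\MyDB_Data2.ndf', SIZE = 10240MB),
        (NAME = MyDB_Data3, FILENAME = 'F:\Data\MyDB_Data3.ndf', SIZE = 10240MB)
    TO FILEGROUP FG_Data;
    ```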

  • If the log file is large, what is the data file size? You can get some performance gains by moving indexes and large tables to their own filegroups. However, how many devices are in your RAID, and how much free space do you have? If you can rearrange the RAID to provide more independent arrays, or even move to a RAID 10 setup, you will find greater gains (note you will lose more drives; how many depends on which solution). If you have, say, 8 drives of 60 GB each and two controllers, your highest-performing option would be (if supported) two RAID 10 arrays of 4 drives, one on each controller (RAID 10 = mirrored stripe set: it costs 2 drives for the stripe and 2 for the mirror, so with 60 GB drives you get 120 GB per array).

    Also, another performance gain is to move the log files themselves to a different array from the data files.

    But if none of this is an option, the filegroup approach, combined with good sizing of the log and data files, refreshing stats, updating usage, rebuilding indexes, and defragmenting the disk, can give a noticeable performance increase. The idea is to minimize both internal and external fragmentation.

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
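    The diagnostics and index moves mentioned above can be sketched as follows; the table, index, and filegroup names are hypothetical.

    ```sql
    -- How full is the transaction log for each database?
    DBCC SQLPERF(LOGSPACE);

    -- Data vs. index vs. unused space in the current database:
    EXEC sp_spaceused;

    -- Rebuild a large index onto its own filegroup in one step:
    CREATE INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH DROP_EXISTING
    ON FG_Indexes;
    ```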

  • Antares has good advice. We have a large warehouse that has 3 files in a filegroup (separate arrays) + a 4th for some indexes. Has really helped performance.

    Steve Jones

    sjones@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/sjones

  • Sorry for the delay. We have twelve 18.2 GB drives divided into two groups of six. Each group has its own controller.

    The entire arrangement is RAID 5.

    It seems to me our best option is to place the large tables with their indexes in a second filegroup.

    The reason we are growing like mad is we just migrated our accounts receivable system from a legacy mini.

    Agree? Disagree? Now a second question.

    MS recommends keeping the system tables in the PRIMARY filegroup and isolating application tables in separate filegroups. Any opinions on this?

    When the snows fall and the white winds blow,The lone wolf dies but the pack survives.

    Once you've accepted your flaws, no one can use them against you.
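    The plan above (large tables in a second filegroup, system tables left alone in PRIMARY) can be sketched like this; all object and filegroup names are placeholders.

    ```sql
    -- Create a large application table directly on the secondary filegroup:
    CREATE TABLE dbo.ARDetail
    (
        DetailID   int IDENTITY PRIMARY KEY,
        InvoiceNo  int   NOT NULL,
        Amount     money NOT NULL
    ) ON FG_AppData;

    -- Optionally make the secondary filegroup the default, so new objects
    -- land there automatically and PRIMARY holds only the system tables:
    ALTER DATABASE MyDB MODIFY FILEGROUP FG_AppData DEFAULT;
    ```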

  • You should also disable the 'automatically grow file' option on your database after adding more data files. When SQL Server automatically grows data files, it increases the size of one file at a time. As soon as the added space is used up, it grows the next file. This means you lose the benefit of proportional fill (all inserts will go into one file). So I suggest manually increasing the sizes of all data files at the same time and by the same amount.
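    Growing all files together by the same amount, as suggested, looks like this (database and file names are hypothetical; the new SIZE must be larger than the current size):

    ```sql
    -- Grow both data files in one pass so proportional fill keeps working:
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data2, SIZE = 15360MB);
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data3, SIZE = 15360MB);
    ```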

