How to divide a 40GB database into 10 4GB datafiles

  • I have a production database that's 43 GB.

    There are 6 tables; one of them is about 39GB. There is a single 40GB datafile and a transaction log of about 3GB.

    I would like to divide it into 10 datafiles so that I can back up faster.

    What would be the best way to do this?

    Thanks in advance

    Alex S
  • How about this? You only need to adjust the names, paths, and sizes for your environment.

    CREATE DATABASE myDB
    ON PRIMARY
        ( NAME = myDB1,
          FILENAME = 'c:\myDB1.mdf',
          SIZE = 10000MB ),
        ( NAME = myDB2,
          FILENAME = 'd:\myDB2.ndf',
          SIZE = 10000MB ),
        ( NAME = myDB3,
          FILENAME = 'e:\myDB3.ndf',
          SIZE = 10000MB )
    LOG ON
        ( NAME = myDBlog1,
          FILENAME = 'g:\myDBlog1.ldf',
          SIZE = 3000MB )
    GO
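
    If the database already exists, you don't have to drop and recreate it; you can add files with ALTER DATABASE instead. A minimal sketch, with placeholder names and paths:

    ALTER DATABASE myDB
    ADD FILE
        ( NAME = myDB2,
          FILENAME = 'd:\myDB2.ndf',
          SIZE = 4000MB )
    GO

    Repeat for each new file. Note that existing data stays in the original file until you rebuild the indexes; only new allocations get spread across the files.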

  • Why do you think it will back up faster if you have 10 datafiles instead of one?

  • It is a good question. Unless your ten data files are on different physical drives or arrays, your backup will not be faster.

  • Even if your data files are on different arrays, the backup speed will also be limited by how fast the backup file can be written. It is fairly typical for a database server to use slower arrays for backups than for the databases, so this is a likely bottleneck, especially since writing data is usually slower than reading data.

    You might also need to back up to multiple files on different arrays to get any improvement in backup speed.
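
    For example, a backup striped across two files on different drives would look something like this (paths are placeholders):

    BACKUP DATABASE myDB
    TO DISK = 'h:\backups\myDB_stripe1.bak',
       DISK = 'i:\backups\myDB_stripe2.bak'
    WITH INIT
    GO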

  • It is rude to post the same question multiple times:

    http://www.sqlservercentral.com/Forums/Topic449122-5-1.aspx

  • Michael Valentine Jones (1/29/2008)


    It is rude to post the same question multiple times:

    http://www.sqlservercentral.com/Forums/Topic449122-5-1.aspx

    Michael, I apologize for the double post. The site was very slow yesterday and I didn't realize that I clicked twice.

    Alex S
  • Michael Valentine Jones (1/29/2008)


    Even if your data files are on different arrays, the backup speed will also be limited by how fast the backup file can be written. It is fairly typical for a database server to use slower arrays for backups than for the databases, so this is a likely bottleneck, especially since writing data is usually slower than reading data.

    You might also need to back up to multiple files on different arrays to get any improvement in backup speed.

    I thought that if I created separate datafiles and separate filegroups I'd be able to back up faster and increase performance.

    I have this setup in mind:

    Filegroup PRIMARY (least used tables) on RAID 1 (D drive)

    Filegroup DATA (datafile1, datafile2, etc.; most used tables) on RAID 10 (E drive, controller 1)

    Filegroup INDEX on RAID 10 (F drive, controller 2)
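
    Something like this is what I have in mind (drive letters and names are just examples; INDEX is a reserved word, so it needs brackets):

    ALTER DATABASE myDB ADD FILEGROUP DATA
    ALTER DATABASE myDB ADD FILEGROUP [INDEX]
    GO
    ALTER DATABASE myDB
    ADD FILE
        ( NAME = myDB_data1, FILENAME = 'e:\myDB_data1.ndf', SIZE = 4000MB ),
        ( NAME = myDB_data2, FILENAME = 'e:\myDB_data2.ndf', SIZE = 4000MB )
    TO FILEGROUP DATA
    GO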

    Alex S
  • But you can't divide into four filegroups anyway, since one of your tables is 39GB... unless you partition the table.
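
    If you do partition, the rough shape is a partition function plus a partition scheme, then the table's clustered index rebuilt on the scheme. Just a sketch, with made-up names and boundary dates, assuming SQL Server 2005 Enterprise and a date column on the big table:

    CREATE PARTITION FUNCTION pf_ByYear (datetime)
    AS RANGE RIGHT FOR VALUES ('2007-01-01', '2008-01-01')
    GO
    CREATE PARTITION SCHEME ps_ByYear
    AS PARTITION pf_ByYear TO ([DATA], [DATA], [DATA])
    GO
    -- then rebuild the big table's clustered index ON ps_ByYear(your_date_column)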

    John

  • John Mitchell (1/30/2008)


    But you can't divide into four filegroups anyway, since one of your tables is 39GB... unless you partition the table.

    John

    Thanks John

    I will look into partitioning tables.

    Alex S
  • AlexSQLForums (1/30/2008)


    Michael Valentine Jones (1/29/2008)


    Even if your data files are on different arrays, the backup speed will also be limited by how fast the backup file can be written. It is fairly typical for a database server to use slower arrays for backups than for the databases, so this is a likely bottleneck, especially since writing data is usually slower than reading data.

    You might also need to back up to multiple files on different arrays to get any improvement in backup speed.

    I thought that if I created separate datafiles and separate filegroups I'd be able to back up faster and increase performance.

    I have this setup in mind:

    Filegroup PRIMARY (least used tables) on RAID 1 (D drive)

    Filegroup DATA (datafile1, datafile2, etc.; most used tables) on RAID 10 (E drive, controller 1)

    Filegroup INDEX on RAID 10 (F drive, controller 2)

    I doubt that you will see any increase in backup performance because you are putting all the files in the DATA filegroup on the same array. The disk is still limited to the same speed.

    As I mentioned in another post, you are also limited by how fast the backup file can be written.

  • John Mitchell (1/30/2008)


    But you can't divide into four filegroups anyway, since one of your tables is 39GB... unless you partition the table.

    John

    He didn't say he was dividing into 4 filegroups.

    He wants to create ten files in the same filegroup. SQL Server's proportional fill will then distribute the data for each table or index in that filegroup across all 10 files.
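
    If you want to check how the data is spread afterwards, a quick query against sys.database_files (SQL Server 2005 and up, run in the database in question) shows the size and used space of each file:

    SELECT name,
           size / 128 AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
    FROM sys.database_files
    GO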
