taking advantage of proportional fill

  • I have a db with 5 data files of varying sizes:

    File_1: 70 GB

    File_2: 76 GB

    File_3: 80 GB

    File_4: 40 GB

    File_5: 30 GB

    From what I understand, to take advantage of proportional fill, all files need to be roughly the same size.

    As these files already exist, how do I resize them so that they are all roughly the same size?

  • These are all in the same filegroup, right?

    Decide what size you want them to be.

    Run DBCC SHRINKFILE (name, target_size) on the ones that are larger than the target.

    Run ALTER DATABASE ... MODIFY FILE on the ones you want to make larger.

    Do this at a quiet time.

    If a file already contains more data than the size you want to shrink to, it won't shrink that far; either increase the size of all of them or live with that. A sketch of both commands follows below.
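
    A minimal sketch of both commands, assuming the logical file names match the list above and a hypothetical database name of MyDb; the target sizes are illustrative, and DBCC SHRINKFILE takes its target in MB:

    -- Shrink the files that are larger than the target (71680 MB = 70 GB)
    DBCC SHRINKFILE (N'File_2', 71680);
    DBCC SHRINKFILE (N'File_3', 71680);

    -- Grow the files that are smaller than the target
    ALTER DATABASE MyDb MODIFY FILE (NAME = N'File_4', SIZE = 70GB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = N'File_5', SIZE = 70GB);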


  • Of course, thinking about this...

    SQL Server fills each file in proportion to its free space, in an effort to fill all the files at about the same time (see BOL on this). So the files being different sizes doesn't matter so much as how full they are.

    If you grow some of the files they will be emptier, and more I/O will go to them until things even up. So unless you are expecting future growth, this is probably a waste of time. The query below shows how full each file currently is.
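
    For example, a quick check of free versus used space per data file (a sketch; sys.database_files reports sizes in 8 KB pages, so dividing by 128 gives MB):

    -- Free vs. used space for each data file in the current database
    SELECT name,
           size / 128                                     AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128          AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
    FROM sys.database_files
    WHERE type_desc = 'ROWS';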


  • One way would be to move all the data to one file and remove the now-empty data files. Then add new files to the database with the sizes you need and distribute the data uniformly across all of them.

    The important thing to keep in mind is that proportional fill depends on the free space in each of the files, so you want almost the same amount of free space in each data file. A sketch of emptying and removing a file is below.
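
    One way to empty a file, as a sketch (assuming MyDb as the database name and File_5 as the logical file name): DBCC SHRINKFILE with the EMPTYFILE option moves the file's data onto the other files in the same filegroup, after which the empty file can be removed (this isn't possible for the primary data file).

    -- Move all data out of File_5 into the remaining files in the filegroup,
    -- then drop the now-empty file
    DBCC SHRINKFILE (N'File_5', EMPTYFILE);
    ALTER DATABASE MyDb REMOVE FILE File_5;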

  • SA (3/25/2009)


    One way would be to move all the data to one file and remove the now-empty data files. Then add new files to the database with the sizes you need and distribute the data uniformly across all of them.

    That way would be intrusive and take a while to do, and it would leave you with one much bigger file; then you have to recreate the files you just deleted. As those new files would be empty, I/O would be skewed towards them until they were all as full as the one you left. You can't force SQL Server to write to particular files in a filegroup; it's round robin or nothing.

    I still feel the OP would gain nothing by doing this.


  • It's best to just set the files to equal size, and to set the growth size equal on each file.

    As you add data over time, the files with less data will tend to fill up faster. If you are running reindexing jobs, they will tend to redistribute the data fairly quickly as they rebuild large tables with clustered indexes. A sketch of setting equal size and growth is below.
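
    A sketch of equalizing size and growth, assuming a hypothetical database name of MyDb and using the largest current file (80 GB) as the target with a fixed 1 GB growth increment (both values are illustrative):

    -- FILEGROWTH can always be changed; SIZE can only be increased here
    -- (files that need to get smaller are shrunk with DBCC SHRINKFILE instead)
    ALTER DATABASE MyDb MODIFY FILE (NAME = N'File_1', SIZE = 80GB, FILEGROWTH = 1GB);
    ALTER DATABASE MyDb MODIFY FILE (NAME = N'File_4', SIZE = 80GB, FILEGROWTH = 1GB);
    -- ...and so on for the remaining files in the filegroup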

  • That way would be intrusive and take a while to do, and it would leave you with one much bigger file; then you have to recreate the files you just deleted.

    IMO, this would allow you to

    1. Lay the files on the filesystem differently to improve performance, if the hardware allows that flexibility

    2. Eliminate physical disk fragmentation that may have been caused by these files having auto-grown to their current size

    As those new files would be empty, I/O would be skewed towards them until they were all as full as the one you left. You can't force SQL Server to write to particular files in a filegroup; it's round robin or nothing.

    As suggested, simply moving the data into one big file wouldn't be sufficient; once the new files have been created you would have to redistribute the data so that all the data files are proportionally filled. SQL Server would then almost immediately start taking advantage of proportional fill. Rebuilding the clustered indexes of the large tables (sketched below) is one way to do that redistribution.
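
    A minimal sketch of that redistribution step; dbo.BigTable is a placeholder for each of your own large tables:

    -- Rebuilding the indexes on a large table rewrites its pages,
    -- and the new allocations follow proportional fill across the filegroup
    ALTER INDEX ALL ON dbo.BigTable REBUILD;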
