February 8, 2003 at 12:00 am
Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/cmiller/gettingridofexcessfilesandfilegroups.asp
February 15, 2003 at 9:52 am
There can still be performance gains from multiple filegroups even when they are on the same volume. SQL Server uses one I/O thread per physical file when accessing data, so multiple filegroups (each with at least one file), or a filegroup with multiple files (data striped across the files), can yield better performance than a layout with fewer physical data files. Do some research and check it out. I would like to see an in-depth article on this topic.
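For anyone who wants to try this, a minimal sketch of the DDL involved; the database name, filegroup name, file names, and paths below are all hypothetical and need adjusting for your environment (this requires a live SQL Server, so treat it as a starting point, not a tested script):

```sql
-- Add a new filegroup to a hypothetical Sales database
ALTER DATABASE Sales
    ADD FILEGROUP FG_Data
GO

-- Add two files to it; SQL Server proportionally fills (stripes) data
-- across the files, so each file gets its own I/O path. Putting them
-- on separate physical disks maximizes the benefit.
ALTER DATABASE Sales
    ADD FILE
        (NAME = SalesData1, FILENAME = 'D:\Data\SalesData1.ndf', SIZE = 100MB),
        (NAME = SalesData2, FILENAME = 'E:\Data\SalesData2.ndf', SIZE = 100MB)
    TO FILEGROUP FG_Data
GO
```

Tables and indexes can then be placed on the new filegroup with the ON FG_Data clause of CREATE TABLE or CREATE INDEX.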
March 5, 2003 at 11:23 am
Here's another, potentially simpler, way to move everything into one filegroup. I've used this method on databases of up to about 800 MB.
I found out (by accident, of course) that when using DTS to copy a database to another server, it puts all of the objects into the default filegroup. I had a database that had originally been set up with 3 filegroups, and used this "feature" of DTS to successfully relocate all of the objects to a single filegroup.
Here's what I did:
1. Created a new database of appropriate size, etc. (As I am often copying databases to different servers, I have gotten in the habit of creating the database and associating the logins to it by hand, to avoid any potential issues with orphaned users -- this is obviously just my personal preference.)
2. Right-clicked on it, selected All Tasks -> Import Data... to bring up the DTS Import/Export Wizard.
3. Indicated the source and destination databases.
4. Clicked "Copy objects and data between SQL Server databases."
5. Selected "Create destination objects" and "Copy data." (Deselected the security options included in "Default Options" since I had already set up the logins.)
At this point, I ran the DTS job, which creates a series of files including ones named:
[server name].[database name].TAB (ddl for tables)
[server name].[database name].ID1 (ddl for clustered indexes)
[server name].[database name].ID2 (ddl for nonclustered indexes)
[server name].[database name].DR1 (ddl for clustered primary keys)
[server name].[database name].DR2 (ddl for nonclustered primary keys)
While the PK and index DDL does include the fill factor, none of these scripts specify a filegroup, so every object is created in the destination database's default filegroup, which is PRIMARY unless you have set it otherwise.
This method won't copy the statistics, but SQL Server should regenerate them. I, too, prefer to use scripts rather than Enterprise Manager; in this case, I feel that using DTS is a reasonable compromise. I get the ease of use of the GUI interface, but all of the action is captured in the scripts that DTS creates.
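For moving just one or two tables rather than a whole database, the same relocation can also be done directly in T-SQL instead of through DTS, by rebuilding the clustered index onto the target filegroup (moving the clustered index moves the table's data pages with it). A minimal sketch, with a hypothetical table and index name:

```sql
-- Rebuild an existing clustered index onto the PRIMARY filegroup.
-- DROP_EXISTING drops and recreates the index in one operation,
-- avoiding a separate drop and the extra sort that would cause.
-- Table, column, and index names here are hypothetical.
CREATE CLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    WITH DROP_EXISTING
    ON [PRIMARY]
GO

-- Confirm which filegroup the table's data now lives on
EXEC sp_help 'dbo.Orders'
GO
```

Note that an index created to enforce a constraint (such as a primary key) may need the constraint dropped and recreated instead, so check your schema before relying on this.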
Eileen Gaisser
Database Administrator
Eastman Kodak Co.
March 5, 2004 at 7:35 am
Regarding the often underemphasized I/O benefit that comes with using separate files, the following comes from BOL "Using Files and Filegroups" ---> "Recommendations":
Place different tables used in the same join queries in different filegroups. This will improve performance, due to parallel disk I/O searching for joined data.
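As an illustration of that recommendation, a sketch of placing two frequently joined tables on separate filegroups; the filegroup and table names are hypothetical, and the filegroups are assumed to already exist with files on separate physical disks:

```sql
-- Each table goes on its own filegroup so a join touching both
-- can read from both disks in parallel.
CREATE TABLE dbo.Customers (
    CustomerID int NOT NULL PRIMARY KEY,
    Name       varchar(50) NOT NULL
) ON FG_Customers

CREATE TABLE dbo.Orders (
    OrderID    int NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL,
    OrderDate  datetime NOT NULL
) ON FG_Orders
```

The gain only materializes when the filegroups' files actually sit on different physical spindles; on a single disk this layout buys nothing.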
March 10, 2004 at 5:08 pm
I am glad that I got rid of those stats tables... so I could remove all the empty filegroups!
Thanks!
DBA/OLAP Developer -
San Francisco, CA