June 17, 2003 at 4:37 pm
I was bored, so I investigated our production database and discovered it had a single table with over 17 million rows in it (impressive!). A second table had over 9 million rows. I thought it might be good to isolate these monsters in a second filegroup.
So I made a copy of production on a second server, created a large second filegroup, and attempted to copy the tables to the new group using the GUI. I set the recovery model to Simple to avoid excessive log entries.
This process ran very slowly and the server seemed to choke. I did some research and discovered the process had generated a 10 GB log file! Our SQL Server could not run with a file that enormous. I had to shut down the SQL services, manually delete the log file, bring Enterprise Manager back up, and delete the database.
I wonder if there is a way to copy tables to a filegroup without creating such bloated log files? Anybody have any info?
When the snows fall and the white winds blow, the lone wolf dies but the pack survives.
Once you've accepted your flaws, no one can use them against you.
June 17, 2003 at 6:06 pm
Try using the bcp utility.
Bcp out the table into a flat file, drop the table and recreate it on the second filegroup, then bcp the file back into the table. Use the batch parameter so that it will commit after a set number of rows.
Hope this helps.
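Something along these lines, as a rough sketch only; the database, server, table, file path, and filegroup names (MyDB, MyServer, dbo.BigTable, D:\BigTable.dat, FG2) are all placeholders to swap for your own. From a command prompt:

bcp MyDB.dbo.BigTable out D:\BigTable.dat -n -S MyServer -T

Then in Query Analyzer:

DROP TABLE dbo.BigTable
GO
CREATE TABLE dbo.BigTable (
    BigTableID int NOT NULL,      -- placeholder columns: script the real
    SomeValue varchar(50) NULL    -- definition from the original table first
) ON [FG2]
GO

Then back at the command prompt, where -b commits a batch every 10,000 rows so the log space can be reused while the load runs:

bcp MyDB.dbo.BigTable in D:\BigTable.dat -n -b 10000 -S MyServer -T

(-n is native format, -S is the server name, -T uses a trusted connection.)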
June 17, 2003 at 10:22 pm
Good advice. I'm wondering if there is a way to do this that will maintain the indexes? Maybe DTS?
spring, summer, fall now
dawn breaks on cold, white magic
Winter's chill is nigh
When the snows fall and the white winds blow, the lone wolf dies but the pack survives.
Once you've accepted your flaws, no one can use them against you.
June 18, 2003 at 4:15 am
Scripting out the indexes is your best bet. Or, as neder suggested, bcp out the data, but instead of dropping the table and moving it to the new filegroup, do a TRUNCATE TABLE, move the existing table (plus indexes) to the new filegroup as normal, and then bcp the data back in.
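For instance, something like this before the move (placeholder names again; sp_helpindex just lists the existing indexes so you know what to script before you start):

EXEC sp_helpindex 'dbo.BigTable'
GO
-- Empties the table with minimal logging, keeping the table and index definitions in place
TRUNCATE TABLE dbo.BigTable
GO

Note that TRUNCATE TABLE will not run if other tables reference this one with foreign keys. Once the table is empty, moving it (and its indexes) to the new filegroup is quick, and the bcp load then happens on the new filegroup.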
June 18, 2003 at 8:36 pm
I would suggest putting the indexes back only once all the data are loaded; doing it this way will yield better performance.
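For example, something like this once the load has finished (the index, column, and filegroup names are placeholders):

CREATE NONCLUSTERED INDEX IX_BigTable_SomeValue
    ON dbo.BigTable (SomeValue)
    ON [FG2]    -- or whichever filegroup you want the index to live on
GO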
June 19, 2003 at 6:16 am
Setting the recovery model to Simple will not make much difference by itself: everything is still logged, the log is just truncated at every checkpoint. What Simple does give you is minimal logging for bulk operations (such as bcp loads) and for TRUNCATE TABLE.
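If the database normally runs in Full recovery, one option (just a suggestion, with a placeholder database name and backup path) is to switch the model only for the move, then switch back and back up the log:

ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED   -- or SIMPLE
GO
-- ... run the bcp load / index rebuilds here ...
ALTER DATABASE MyDB SET RECOVERY FULL
GO
BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.bak'
GO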
My approach for transferring a table is to drop the clustered index and recreate it specifying the other filegroup. This will also drop all the nonclustered indexes, which you then have to recreate....
There will be less logging and your table will be moved with zero fragmentation....
Caution: this process needs at least 3 times the size of the table in free space on the destination filegroup.
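A rough sketch of that approach (the index, table, column, and filegroup names are placeholders; if the clustered index backs a PRIMARY KEY constraint you would drop and re-add the constraint instead):

DROP INDEX dbo.BigTable.IX_BigTable_Clust
GO
CREATE CLUSTERED INDEX IX_BigTable_Clust
    ON dbo.BigTable (BigTableID)
    ON [FG2]    -- rebuilding the clustered index on FG2 moves the data there
GO

CREATE INDEX ... WITH DROP_EXISTING ... ON [FG2] can do the drop and recreate in a single step, which can save rebuilding the nonclustered indexes twice.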
Appreciate it if you let us know which method worked for you....
Cheers..
Prakash
Prakash Heda
Lead DBA Team - www.sqlfeatures.com
Video sessions on Performance Tuning and SQL 2012 HA