March 25, 2008 at 12:11 pm
There is a way to get around this problem, but it involves a lot of legwork and can be very time-consuming to run. Basically, what you will be doing is duplicating the steps SQL Server takes under the hood when you issue the ALTER TABLE statement, but breaking the work into chunks.
1) Put the database in single-user mode (this prevents the table from being written to by another process).
2) If the recovery model is not already set to SIMPLE, do so now.
3) Script the original table, with all indexes (you will need this later).
4) Create a new table with the new structure. If you have indexes, do not create them at this point, with the exception of the key on the identity field(s). This could be a simple INT IDENTITY field, or a composite key used to uniquely identify a row.
5) Insert data into the new table from the old, using TOP (n), where n is a reasonable increment. For example, for a table with 3 million rows, use TOP (300000).
6) Delete the data from the original table by joining to the new table.
7) Shrink the log.
8) Repeat steps 5 - 7 until all data have been copied over.
9) Drop the original table.
10) Rename your new table to the original name.
11) Run the scripts for your indexes (this could take a while, but believe me, it's a lot faster than trying to insert into the table with them in place).
12) If the recovery model was originally something other than SIMPLE, set that now.
13) Take the database out of single-user mode.
All this can be put into a SQL Agent job to run off-hours. Sure, it's ugly, but it gets the job done. And, if disk space is REALLY tight, you can insert an extra log shrink step between steps 5 and 6.
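If it helps, here is roughly what that loop could look like in T-SQL. This is only a sketch: the database name MyDb, the tables dbo.BigTable and dbo.BigTable_New, the columns Id/Col1/Col2, the logical log file name MyDb_log, and the batch size are all placeholders you would swap for your own, and the index scripts from step 11 are omitted:

-- Steps 1 and 2: lock out other users and minimize logging.
-- All object names below are placeholders.
USE master;
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

USE MyDb;

-- If the new table keeps Id as an IDENTITY column, allow explicit
-- inserts into it; remove these lines otherwise.
SET IDENTITY_INSERT dbo.BigTable_New ON;

-- Steps 5 - 8: copy, delete, shrink, repeat until the old table is empty.
WHILE EXISTS (SELECT 1 FROM dbo.BigTable)
BEGIN
    -- Step 5: copy one batch into the new table.
    INSERT INTO dbo.BigTable_New (Id, Col1, Col2)
    SELECT TOP (300000) Id, Col1, Col2
    FROM dbo.BigTable
    ORDER BY Id;

    -- Step 6: remove the copied rows from the original.
    DELETE o
    FROM dbo.BigTable AS o
    INNER JOIN dbo.BigTable_New AS n
        ON n.Id = o.Id;

    -- Step 7: under SIMPLE recovery a checkpoint truncates the log,
    -- and the shrink returns the freed space to the OS.
    CHECKPOINT;
    DBCC SHRINKFILE (MyDb_log, 1);
END;

SET IDENTITY_INSERT dbo.BigTable_New OFF;

-- Steps 9 and 10: swap the tables.
DROP TABLE dbo.BigTable;
EXEC sp_rename 'dbo.BigTable_New', 'BigTable';

-- Steps 12 and 13: put things back the way they were
-- (assuming the recovery model was FULL to begin with).
USE master;
ALTER DATABASE MyDb SET RECOVERY FULL;
ALTER DATABASE MyDb SET MULTI_USER;

One thing to watch: the DELETE join gets slower as the new table grows, so on very large tables you may prefer to delete by key range instead.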
Hope this helps!
Cogiko ergo sum (I geek, therefore I am).
March 25, 2008 at 12:22 pm
When I faced this same issue with a 250M-row table, here is what I did:
1. Create a new tmp_table with the same structure.
2. Drop all indexes from the production table except the PK.
3. Insert the data into the new table 50M rows at a time, using the Import/Export utility.
4. Create the needed indexes, rename the tables, and drop the old one after testing (see the sketch below).
It takes time, but it saves space.
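In case it is useful, steps 2 and 4 might look something like this in T-SQL; the names here (dbo.Production, dbo.tmp_table, IX_Production_Col1, Col1) are made up for illustration:

-- Step 2: drop the nonclustered indexes from the production
-- table, keeping the PK. All names are placeholders.
DROP INDEX IX_Production_Col1 ON dbo.Production;

-- (Step 3 happens outside this script: bulk-load dbo.tmp_table
-- in 50M-row chunks with the Import/Export utility or bcp.)

-- Step 4: rebuild the indexes on the copy, then swap the names.
CREATE NONCLUSTERED INDEX IX_Production_Col1 ON dbo.tmp_table (Col1);
EXEC sp_rename 'dbo.Production', 'Production_old';
EXEC sp_rename 'dbo.tmp_table', 'Production';

-- Only after testing against the renamed table:
DROP TABLE dbo.Production_old;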