August 7, 2003 at 7:28 am
I need to alter a rather large table (86 million rows) to add a new column of int data type with a default value. The problem is that disk space is already low and the transaction log does not have the room it needs to grow during the process.
Is there any other way than freeing more disk space?
Thanks.
August 7, 2003 at 7:50 am
Hi ojanent,
quote:
Is there any other way than freeing more disk space?
shrinking the tlog (see the sketch below)
shrinking the db
not using EM
doing this in times of low overall activity
begging your boss on your knees for larger disks
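Something like this for the shrinking part (just a sketch; 'MyDB' and the logical log file name 'MyDB_Log' are placeholders, check sysfiles for the real names):

-- Placeholders: MyDB / MyDB_Log; look up the real logical file names in sysfiles.
USE MyDB
GO
-- Under the simple recovery model a CHECKPOINT frees reusable log space;
-- under full recovery you would back up the log first.
CHECKPOINT
DBCC SHRINKFILE (MyDB_Log, 100)     -- try to shrink the log file to ~100 MB
GO
DBCC SHRINKDATABASE (MyDB, 10)      -- leave roughly 10% free space in the db
GO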
Cheers,
Frank
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
August 7, 2003 at 8:00 am
It's been a while since I tried this, so I'm not sure if it will work for you, but I would add the column as nullable first. Then, in multiple transactions (e.g. a million rows at a time), set your default value on it, and finally alter the column to NOT NULL.
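Roughly like this (just a sketch, not tested against your schema; dbo.BigTable, NewCol and the default of 0 are made-up names, and SET ROWCOUNT caps the batch size):

-- Hypothetical table/column names; pick a batch size your log can handle.
ALTER TABLE dbo.BigTable ADD NewCol int NULL
GO

SET ROWCOUNT 1000000          -- cap each UPDATE at one million rows
WHILE 1 = 1
BEGIN
    UPDATE dbo.BigTable
    SET NewCol = 0
    WHERE NewCol IS NULL
    IF @@ROWCOUNT = 0 BREAK   -- nothing left to backfill
END
SET ROWCOUNT 0                -- back to normal
GO

-- Once every row has a value, tighten the column and add the default.
ALTER TABLE dbo.BigTable ALTER COLUMN NewCol int NOT NULL
ALTER TABLE dbo.BigTable ADD CONSTRAINT DF_BigTable_NewCol DEFAULT (0) FOR NewCol
GO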
Give it a try and let us know.
Thanks Tom
August 8, 2003 at 2:19 am
Hi ojanent,
There is a workaround. We encountered the same problem. Adding two DateTime columns (with default values) to a 55 GB database required 22 GB of log space.
The workaround was to create the columns as nullable first, then run through the rows and update 10,000 rows at a time where the value was null. Because we could commit every 10,000 rows, the log file grew only to 100 MB. This also reduced the time required by a factor of 3.
The number of rows you update per batch affects log space roughly linearly, so if you need to keep the log even smaller, drop the batch size to 1,000 rows.
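For what it's worth, the loop looked something like this (names are placeholders; it assumes the simple recovery model, where a CHECKPOINT lets the log space from committed batches be reused, otherwise take a log backup between batches):

-- Placeholder table/column; the point is that each 10,000-row batch commits
-- on its own, so the log only needs room for one batch at a time.
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION
    UPDATE dbo.BigTable
    SET CreatedDate = GETDATE()       -- placeholder default value
    WHERE CreatedDate IS NULL
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRANSACTION
        BREAK
    END
    COMMIT TRANSACTION
    CHECKPOINT                        -- simple recovery: reuse committed log space
    -- DBCC SQLPERF(LOGSPACE)         -- optional: watch how much log is in use
END
SET ROWCOUNT 0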
Cheers, ifx.
August 8, 2003 at 7:18 pm
I would certainly try the other solutions that have been suggested first, but if you really need to modify a large table on a disk-space-limited system, you may have to resort to using BCP to export the table, dropping it, creating the new structure, and using BCP IN (or BULK INSERT) to read the data back in.
One trick I came up with is to go into Enterprise Manager table design mode, insert the new field(s), and instead of executing the change, save the change script. This is a good idea if you have a lot of indexes or foreign keys that have to be dropped and re-created, because they will all be in the script.
Keep all the index and key drop/create statements, but modify the middle of the script where it creates a duplicate table and copies the data: instead of create temp - copy data - drop original - rename temp, the sequence should be export data - drop original - create new structure - bulk insert. You can use xp_cmdshell to execute "bcp queryout", or you can run bcp from a command prompt separately.
If your server is really short of space, you can export to a share on another server, and the bulk insert doesn't require all the log file space. It doesn't hurt to execute the script in two parts and verify that you have a good copy of the data before deleting the original (and do a backup first).
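Very roughly, the middle of the modified script would look something like this (server names, paths, and columns are invented for illustration; the index/key DDL from the saved change script goes around it):

-- Export the data with bcp via xp_cmdshell; the query adds the new column's
-- value (0 here) so the file already matches the new table layout.
-- Server name, share path, and -T (trusted connection) are illustrative only.
EXEC master..xp_cmdshell
    'bcp "SELECT *, 0 AS NewCol FROM MyDB.dbo.BigTable" queryout \\OtherServer\share\BigTable.bcp -n -T -S MyServer'

-- Verify the export before this point, then drop and recreate with the new column last.
DROP TABLE dbo.BigTable
CREATE TABLE dbo.BigTable
(
    ExistingCol1 int NOT NULL,          -- placeholder columns
    ExistingCol2 varchar(50) NULL,
    NewCol int NOT NULL DEFAULT (0)     -- new column last, so it matches the file
)

-- Reload; the bulk insert needs far less log space than a row-by-row copy.
BULK INSERT dbo.BigTable
FROM '\\OtherServer\share\BigTable.bcp'
WITH (DATAFILETYPE = 'native')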
But if getting more disks or a larger server is even a remote possibility, don't let them know that there is an alternative.