October 6, 2004 at 12:29 pm
Is it possible to do a delete that does not get logged? I have a table with 13 gig of data in it and I only need half of it, so I can't just truncate the table. The other dilemma is that I don't have much spare disk space to let the logs grow; when I deleted 1.5 million records the log blew up to 2 gig. I would love to do a delete that isn't logged, or at least minimally logged, but I can't seem to find such an option. Is there a way to do something like this?
I don't really want to sit there deleting a few hundred thousand records at a time and then shrinking the log file. Not to mention the data is not that easy to chunk out like that.
Do I have any other options?
October 6, 2004 at 12:34 pm
Why not set the Recovery model to SIMPLE, SELECT INTO a "temp" table the data you want to keep, TRUNCATE the "real" table, and then INSERT the data back? After that, reset the Recovery model.
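Something along these lines, as a rough sketch (mydb, bigtable, keep_rows, and the WHERE clause are just placeholders for your own names and keep criteria):

    ALTER DATABASE mydb SET RECOVERY SIMPLE

    -- SELECT INTO is minimally logged under SIMPLE recovery
    SELECT * INTO keep_rows
    FROM bigtable
    WHERE created >= '20040101'   -- placeholder: the half you want to keep

    TRUNCATE TABLE bigtable       -- deallocates pages, barely touches the log

    -- note: the INSERT back is fully logged, so do it in chunks if the log is tight
    INSERT INTO bigtable SELECT * FROM keep_rows
    DROP TABLE keep_rows

    ALTER DATABASE mydb SET RECOVERY FULL   -- or whatever it was before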
Good Hunting!
AJ Ahrens
webmaster@kritter.net
October 6, 2004 at 12:38 pm
The database is set to SIMPLE mode right now. I do not have an additional 7 gig to put half the data into while I truncate the original table. The main problem is that I am limited on space, and the transaction logs just keep getting really large as I delete large chunks of data.
Any other suggestions?
October 6, 2004 at 1:04 pm
With the database set to SIMPLE, deletes will still be logged, but the log space is freed for reuse at each checkpoint. If you work on smaller sets of data this should be acceptable.
delete from bigtable where keycolumn in
(select top 1000 keycolumn from bigtable)
Reducing the number of records deleted per batch should keep your log from growing very much. Experiment with the batch size; you may want to try 10,000 at a time.
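For example, looped until nothing is left to remove (bigtable, keycolumn, and the date filter are placeholders for your own table, key, and purge criteria):

    WHILE 1 = 1
    BEGIN
        DELETE FROM bigtable
        WHERE keycolumn IN
            (SELECT TOP 10000 keycolumn
             FROM bigtable
             WHERE created < '20030101')   -- placeholder: the rows to purge

        IF @@ROWCOUNT = 0 BREAK   -- done

        CHECKPOINT   -- under SIMPLE recovery this lets the log space be reused
    END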
October 6, 2004 at 1:11 pm
If you really have no options for space, and don't want to BCP the [desired] data out, truncate, and BCP back in (you know BCP supports select statements now)...
I had a similar problem a few years ago on a very large database.
I had to do a crude, kludgy proc with a loop that did a
set rowcount X,
delete...,
truncate log with no_log,
checkpoint,
repeat until no rows left to delete...
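Something like this, as a rough sketch (mydb, bigtable, and the WHERE clause are just placeholders; BACKUP LOG ... WITH NO_LOG is the documented spelling of that log truncation):

    SET ROWCOUNT 50000   -- cap each delete at X rows

    WHILE 1 = 1
    BEGIN
        DELETE FROM bigtable
        WHERE created < '20030101'   -- placeholder condition

        IF @@ROWCOUNT = 0 BREAK   -- no rows left to delete

        BACKUP LOG mydb WITH NO_LOG   -- throw away the inactive log
        CHECKPOINT
    END

    SET ROWCOUNT 0   -- always reset, or later statements stay capped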
October 6, 2004 at 1:18 pm
John, hey, great thinking. Thanks for the tip. I just needed a way to keep the logs from getting so big. This will definitely help. Ugly, but it gets the job done. That's what I was looking for.
Thanks Again
October 7, 2004 at 9:41 am
I did something similar some time ago - deleting 10M rows from a table with 20M+ rows - in a production database, during normal operation, with log backups every 15 minutes. I used the approach already described above (deleting in small chunks, about 20,000 rows at a time); to make it easier and to reduce the impact on server resources, I wrote it as a SQL Agent job started every 10 minutes. It ran day and night for some 3 days until everything was done, without any complaints from users.
HTH, Vladan