August 20, 2012 at 11:35 am
Hello, we've got a lot of data to purge from a DB that's in the Full recovery model. This is easy to do in a loop under Simple recovery, but transaction log space can be an issue in our environment.
Tell me if you think this script is reasonable; it backs up the T-log every so many deletes. Not sure how big the backup will get.
DECLARE @dMinDate DATE, @Msg VARCHAR(500);
SET @dMinDate = CAST(DATEADD(MONTH, -6, GETDATE()) AS DATE);
SET @Msg = 'Deleted 1m rows';
WHILE EXISTS (SELECT 1 FROM AcdProd..TCD_Raw
              WHERE AP_LOG_DATE < @dMinDate)
BEGIN
    DELETE TOP (1000000) FROM AcdProd..TCD_Raw
    WHERE AP_LOG_DATE < @dMinDate;
    RAISERROR(@Msg, 1, 1) WITH NOWAIT;
    BACKUP LOG AcdProd TO DISK = N'k:\SQL Backups\Application\AcdProd.trn'
    WITH NAME = 'ACD transaction log backup', INIT, COMPRESSION;
END
If this is an OK approach, it would be nice to query the log's internal free space each iteration, like what DBCC SQLPERF(LOGSPACE) returns, and only run the backup once it reaches a certain threshold.
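A sketch of that threshold idea, capturing the DBCC SQLPERF(LOGSPACE) output into a table variable via INSERT ... EXEC. The 70 percent threshold is arbitrary, and the column layout should be verified against your build before relying on it:

```sql
-- Sketch only: back up the log only when it is more than 70% full.
DECLARE @LogSpace TABLE
(
    DatabaseName     SYSNAME,
    LogSizeMB        FLOAT,
    LogSpaceUsedPct  FLOAT,
    Status           INT      -- returned by DBCC but unused here
);

INSERT INTO @LogSpace
EXEC ('DBCC SQLPERF(LOGSPACE)');

IF EXISTS (SELECT 1 FROM @LogSpace
           WHERE DatabaseName = 'AcdProd'
             AND LogSpaceUsedPct > 70.0)   -- threshold is an assumption
BEGIN
    BACKUP LOG AcdProd TO DISK = N'k:\SQL Backups\Application\AcdProd.trn'
    WITH NAME = 'ACD transaction log backup', INIT, COMPRESSION;
END
```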
Thanks,
Ken
August 20, 2012 at 11:59 am
Are the log backups likely to take over the disk you're putting them on? If not, then this makes sense. Might want to add an explicit Commit to the delete. And might want to play with checkpoint options as well, though issuing a backup command should cover that one.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
August 20, 2012 at 12:03 pm
You may also want to check out this article:
August 20, 2012 at 12:48 pm
Thanks folks, and thanks for the article!
Ken
August 20, 2012 at 12:59 pm
Do you care about being able to restore the system to a point in time after this delete process? If not, then I don't see a problem. If you do, however, you have a major problem: this approach breaks the log chain.
When you back up to a single device and initialize it with INIT, you are throwing away all of those transactions. If you needed to restore a backup taken before this delete and roll forward through all of the transaction logs to the current point in time, you would be stopped at the first log backup overwritten by this delete loop.
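If the chain does matter, one alternative sketch is to give each iteration's backup a unique, timestamped file name instead of reusing one file with INIT, so no log backup is ever overwritten (the path and naming scheme here are assumptions):

```sql
-- Sketch: one backup file per iteration, named AcdProd_yyyymmdd_hhmiss.trn
DECLARE @File NVARCHAR(260);
SET @File = N'k:\SQL Backups\Application\AcdProd_'
          + REPLACE(REPLACE(REPLACE(
                CONVERT(VARCHAR(19), GETDATE(), 120),
                '-', ''), ':', ''), ' ', '_')
          + N'.trn';

BACKUP LOG AcdProd TO DISK = @File
WITH NAME = 'ACD transaction log backup', COMPRESSION;
```

Watch the disk, though: every one of those files has to be kept (and restored in sequence) for the chain to be usable.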
If you have an agent job created for backing up the transaction log - execute that job instead of creating your own command.
If you don't care about being able to restore past this point, then I would recommend the following:
1) Perform a full backup
2) Switch recovery model to simple
3) Perform your deletes - replace the backup log with a checkpoint (you can't back up the log in simple recovery)
4) Switch recovery model back to full
5) Perform either a full backup or a differential to reset the log chain
With the above, you would still be able to restore from the full backup, the differential - and following transaction log backups. You would not be able to restore to a point in time between the full and differential though.
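The five steps above could be sketched like this. The backup paths and the six-month cutoff are assumptions, and the script should run from a connection whose database context is AcdProd so that CHECKPOINT targets the right database:

```sql
-- 1) Full backup before the purge
BACKUP DATABASE AcdProd TO DISK = N'k:\SQL Backups\Application\AcdProd_pre_purge.bak'
WITH NAME = 'Pre-purge full backup', INIT, COMPRESSION;

-- 2) Switch to simple recovery so the log truncates on checkpoint
ALTER DATABASE AcdProd SET RECOVERY SIMPLE;

-- 3) Batched deletes, with a checkpoint instead of a log backup
DECLARE @dMinDate DATE = CAST(DATEADD(MONTH, -6, GETDATE()) AS DATE);
WHILE EXISTS (SELECT 1 FROM AcdProd..TCD_Raw WHERE AP_LOG_DATE < @dMinDate)
BEGIN
    DELETE TOP (1000000) FROM AcdProd..TCD_Raw
    WHERE AP_LOG_DATE < @dMinDate;
    CHECKPOINT;   -- lets the log space be reused under simple recovery
END

-- 4) Back to full recovery
ALTER DATABASE AcdProd SET RECOVERY FULL;

-- 5) Differential restarts the log chain from the earlier full backup
BACKUP DATABASE AcdProd TO DISK = N'k:\SQL Backups\Application\AcdProd_post_purge.dif'
WITH DIFFERENTIAL, NAME = 'Post-purge differential', INIT, COMPRESSION;
```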
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
August 20, 2012 at 1:11 pm
Yeah, I was going to say, I'm overwriting the backup each time with INIT, so the backup file ends up holding only the last set of deletions. Either way the log chain is broken - NBD in this case. So in 2008 they removed BACKUP LOG WITH TRUNCATE_ONLY. I guess this was just too easy and dangerous to do.
Ken
August 20, 2012 at 1:17 pm
Actually, if you're breaking the log chain anyway, you could just set to Simple Recovery for the duration of the delete process, and run a checkpoint after each batch. Then you don't have to worry about the log backup file getting too big or whatever. I've done that kind of thing many times.
When complete, set back to Full Recovery, and run a backup immediately. If you run one before and one after, you can restore anything except changes during the process. But you'll lose those this way too.
- Gus "GSquared"
August 20, 2012 at 1:30 pm
GSquared (8/20/2012)
Actually, if you're breaking the log chain anyway, you could just set to Simple Recovery for the duration of the delete process, and run a checkpoint after each batch. Then you don't have to worry about the log backup file getting too big or whatever. I've done that kind of thing many times. When complete, set back to Full Recovery, and run a backup immediately. If you run one before and one after, you can restore anything except changes during the process. But you'll lose those this way too.
You can reinstate the log chain with either a full or differential backup.
Jeffrey Williams
August 20, 2012 at 1:48 pm
If you are purging more data than you are keeping, and you have a maintenance window... You can copy the rows you want to keep into a new table, drop the old one, and rename the new table to the original name. Of course, this assumes you don't need point-in-time recovery for any of that.
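A sketch of that copy-and-swap approach, assuming the table lives in the dbo schema and using a hypothetical staging name (indexes, constraints, triggers, and permissions all have to be re-created on the new table before the swap):

```sql
USE AcdProd;

-- Copy the rows to keep; SELECT INTO can be minimally logged
-- under simple or bulk-logged recovery.
SELECT *
INTO dbo.TCD_Raw_keep            -- staging name is an assumption
FROM dbo.TCD_Raw
WHERE AP_LOG_DATE >= CAST(DATEADD(MONTH, -6, GETDATE()) AS DATE);

-- Swap the tables inside one transaction during the maintenance window.
BEGIN TRANSACTION;
    DROP TABLE dbo.TCD_Raw;
    EXEC sp_rename 'dbo.TCD_Raw_keep', 'TCD_Raw';
COMMIT TRANSACTION;
```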
Jared
CE - Microsoft