May 12, 2010 at 6:39 am
Hi there!
I have a problem. I have two tables, one called LOGGING and the other called ARCHIVE. The tables look exactly the same.
For every DELETE from the LOGGING table I want to INSERT the very same row into the ARCHIVE table.
Fine, I thought a trigger would do the trick.
CREATE TRIGGER TESTTRIG ON LOGGING AFTER DELETE AS
BEGIN
    -- DELETED holds every row removed by the statement that fired the trigger
    INSERT INTO ARCHIVE (DATE, LOGGINNUMBER)
    SELECT DATE, LOGGINNUMBER FROM DELETED;
END
It works when the table is limited to about 1 million records. The thing is that the LOGGING table can contain 20 million records in some cases. When I want to delete 10 million records, the transaction log becomes full and the transaction fails.
The databases are set to FULL recovery model.
Is there some cool way of changing the TRIGGER to, let's say, only handle 50 000 records at a time in every transaction?
May 12, 2010 at 8:02 am
I would suggest changing the DELETE statement to commit in smaller batches and let the trigger take care of itself.
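Something along these lines, for example (just a sketch; the batch size and the WHERE clause are placeholders for your own purge criteria):

DECLARE @BatchSize INT;
SET @BatchSize = 50000;

WHILE 1 = 1
BEGIN
    -- Each DELETE is its own transaction, so the trigger only ever
    -- sees up to @BatchSize rows in the DELETED pseudo-table
    DELETE TOP (@BatchSize)
    FROM LOGGING
    WHERE DATE < '20100101';   -- placeholder purge criteria

    IF @@ROWCOUNT = 0
        BREAK;
END

Run transaction log backups while this loops so the log space can be reused between batches.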
May 12, 2010 at 8:04 am
I second George here. Control the DELETE so it is done in chunks! That will work better for you!
Cheers!
May 20, 2010 at 8:18 am
Yes, maybe that's the best solution. Or limit it somehow so that you can't do deletes on a gazillion records in between full backups 🙂
Thanks for the replies!
May 20, 2010 at 8:48 am
Make that tran log backups; full backups do not truncate the log.
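For example (database name and backup path are made up, adjust to your environment):

BACKUP LOG LoggingDb
TO DISK = N'D:\Backups\LoggingDb_log.trn';   -- truncates the inactive portion of the log so it can be reused

Scheduling this regularly, or running it between delete batches, keeps the log from filling up.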