November 28, 2011 at 3:31 pm
I'm confused. What space are you trying to save? You started posting as though you were having log space issues with this delete. Then you stated you kept two weeks of backups on disk.
Now you're saying that a full backup is too large?
It's not clear what the problem is. If you perform the delete, the full backup won't take up the same space afterwards. If you can't do a full backup, you can't do one; nothing changes that other than deleting more data. But that problem isn't related to this delete.
If you need to recover both pre- and post-delete, you can do a diff, as George suggested, but it's not clear where you are having space problems and at what time. Perhaps you can outline the entire issue a little more clearly.
November 28, 2011 at 3:40 pm
Steve Jones - SSC Editor (11/28/2011)
I'm confused. What space are you trying to save? You started posting as though you were having log space issues with this delete. Then you stated you kept two weeks of backups on disk. Now you're saying that a full backup is too large?
It's not clear what the problem is. If you perform the delete, the full backup won't take up the same space afterwards. If you can't do a full backup, you can't do one; nothing changes that other than deleting more data. But that problem isn't related to this delete.
If you need to recover both pre- and post-delete, you can do a diff, as George suggested, but it's not clear where you are having space problems and at what time. Perhaps you can outline the entire issue a little more clearly.
The issue is that we don't have enough space on the backup drive: this delete process makes the transaction log backup files grow quickly and nearly fills the disk.
We do keep two weeks of backups: a full backup every Sunday, daily differentials, and transaction log backups every 3 hours.
This data cleanup job runs daily, so I don't want to take a full backup every day after the process; that would also fill the disk.
November 28, 2011 at 3:49 pm
sqlfriends (11/28/2011)
Steve Jones - SSC Editor (11/28/2011)
I'm confused. What space are you trying to save? You started posting as though you were having log space issues with this delete. Then you stated you kept two weeks of backups on disk. Now you're saying that a full backup is too large?
It's not clear what the problem is. If you perform the delete, the full backup won't take up the same space afterwards. If you can't do a full backup, you can't do one; nothing changes that other than deleting more data. But that problem isn't related to this delete.
If you need to recover both pre- and post-delete, you can do a diff, as George suggested, but it's not clear where you are having space problems and at what time. Perhaps you can outline the entire issue a little more clearly.
The issue is that we don't have enough space on the backup drive: this delete process makes the transaction log backup files grow quickly and nearly fills the disk.
We do keep two weeks of backups: a full backup every Sunday, daily differentials, and transaction log backups every 3 hours.
This data cleanup job runs daily, so I don't want to take a full backup every day after the process; that would also fill the disk.
Then you'll have to accept one of three things:
1) Keep this testing server in simple mode to avoid the need for transaction log backups.
2) Don't back up daily and accept the fact that your log chain is permanently broken and you can never do a point-in-time restore of the testing database.
3) Review the reasoning and logic behind constantly writing and removing data in your testing environment, and rewrite to avoid this problem.
3a) Get more space.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
November 28, 2011 at 4:10 pm
sqlfriends (11/28/2011)
Thanks, but a full backup also takes a lot of space. Can I change it to bulk-logged, do the delete, then change back to full recovery model? That way I don't need to do a full backup, and it also minimizes the log size. Is that correct? Thanks
No - switching the recovery models does not reduce the amount of logging on deletes. The amount of data logged will be exactly the same whether in simple, bulk_logged or full.
How the log is treated will differ in the simple recovery model only. In simple recovery, as soon as a checkpoint is issued (manually, during your batch deletes), the VLFs that can be marked as reusable will be marked that way. In the full or bulk_logged recovery models, you have to perform a transaction log backup to mark the VLFs as reusable.
You can either switch to simple recovery, or you can modify your transaction log backups to back up to the NUL device. Just realize that either way, the log chain is broken and you cannot restore to a point in time, which makes running the transaction log backups rather useless.
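For what it's worth, a log backup to the NUL device is just the ordinary BACKUP LOG syntax pointed at that device; the database name below is a placeholder:
-- Throws the log records away; as noted above, this breaks the log chain,
-- so point-in-time restore is no longer possible.
BACKUP LOG [YourTestDB] TO DISK = 'NUL';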
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
November 28, 2011 at 4:18 pm
No - switching the recovery models does not reduce the amount of logging on deletes. The amount of data logged will be exactly the same whether in simple, bulk_logged or full.
Does this statement only apply to deletes? How about inserts or updates: will less data be logged in simple mode than in full or bulk-logged?
Thanks
November 28, 2011 at 4:29 pm
sqlfriends (11/28/2011)
No - switching the recovery models does not reduce the amount of logging on deletes. The amount of data logged will be exactly the same whether in simple, bulk_logged or full.
Does this statement only apply to deletes? How about inserts or updates: will less data be logged in simple mode than in full or bulk-logged?
Thanks
It doesn't matter. There are tricks to get 'minimal logging' that can only be used on the first insert into a truncated table, but in general it won't make a difference. The only thing that will reduce overall log usage is to reduce the number of rows per pass (transaction), and then allow for reuse of the VLFs in the log file, either via simple mode or backups.
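As a rough sketch of what fewer rows per pass looks like in practice (the table name, filter column, and batch size below are made-up placeholders):
-- Delete in small batches so each transaction writes a modest amount of log.
-- dbo.EventLog and CreatedDate are illustrative names only.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (1000) FROM dbo.EventLog
    WHERE CreatedDate < DATEADD(DAY, -14, GETDATE());

    SET @rows = @@ROWCOUNT;

    -- In simple recovery a checkpoint lets the VLFs be reused between batches;
    -- in full recovery, a log backup between batches does the same job.
    CHECKPOINT;
END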
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
November 28, 2011 at 4:50 pm
The very best solution to your problem would be to get more storage space. It is the lack of storage space that is causing you to jump through hoops.
If, however, that is not possible, the next best solution is to re-evaluate your backup plan. Currently, you take weekly full backups, daily differential backups and periodic log backups. Since you are retaining two weeks' worth of backups on your test server, one way to save space on the backups might be to only take a full backup every two weeks (assuming that most of your data is static). Another alternative might be to eliminate the differential backups. Eliminating the differential backups would make restores more tedious and increase the likelihood of failure due to a corrupt log backup, though. (That is why this is the second option.)
Your final option, if neither of the first two is practical, is to restructure the deletion task so that it doesn't log so much data. You can structure your task to avoid the DELETE command (which is fully-logged) and use one of the minimally-logged commands instead (such as DROP TABLE or TRUNCATE TABLE). This, however, is a pain to do (and to manage). So, it should be your last resort.
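A hedged sketch of that restructuring, assuming you can afford a staging copy of the rows you keep; the table and column names here are invented for illustration:
-- Copy the survivors out, truncate the original (the deallocation is only
-- minimally logged), and reload. dbo.EventLog and CreatedDate are
-- illustrative names; note that TRUNCATE fails if the table is referenced
-- by foreign keys, and an identity column would need SET IDENTITY_INSERT.
SELECT *
INTO #KeepRows
FROM dbo.EventLog
WHERE CreatedDate >= DATEADD(DAY, -14, GETDATE());

TRUNCATE TABLE dbo.EventLog;

INSERT INTO dbo.EventLog
SELECT * FROM #KeepRows;

DROP TABLE #KeepRows;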
In no case should you switch between recovery modes for this database. Eventually, something will get screwed up and come back to bite you. Rather than do that, you should just set the recovery mode to Simple and leave it there. Then you could execute your deletes in batches of 1,000 records or so, and this would prevent your transaction log file from growing out of control. If that isn't practical, then you are left with one of the first three options.
November 28, 2011 at 5:02 pm
If it's just log backup space, then I'd do this:
1. Switch to simple mode
2. Delete in batches
3. Switch back
4. Take diff backup
5. Start log backups again
This seems to be possible:
CREATE DATABASE drtest
GO
USE drtest
GO
CREATE TABLE MyTable (msg VARCHAR(500))
GO
INSERT MyTable SELECT 'Pre-Full'
GO
BACKUP DATABASE drtest TO DISK = N'C:\SQLBackup\drtest_full.bak'
GO
INSERT MyTable SELECT 'Pre-log'
GO
BACKUP LOG drtest TO DISK = N'C:\SQLBackup\drtest_log_1.bak'
GO
INSERT MyTable SELECT 'Archive 1'
INSERT MyTable SELECT 'Archive 2'
INSERT MyTable SELECT 'Archive 3'
GO
BACKUP LOG drtest TO DISK = N'C:\SQLBackup\drtest_log_2.bak'
GO
-- Switch to simple so the batch delete doesn't bloat the log backups
ALTER DATABASE drtest
SET RECOVERY SIMPLE
GO
DELETE MyTable WHERE msg LIKE 'Archive%'
GO
SELECT * FROM MyTable
GO
-- Switch back to full; the differential below restarts the backup chain
ALTER DATABASE drtest
SET RECOVERY FULL
GO
BACKUP DATABASE drtest TO DISK = N'C:\SQLBackup\drtest_diff.dff' WITH DIFFERENTIAL
GO
INSERT MyTable SELECT 'Post-log, post recovery switch'
GO
BACKUP LOG drtest TO DISK = N'C:\SQLBackup\drtest_log_3.trn'
GO
USE master
GO
-- Restore the full backup; STANDBY keeps the database readable between steps
RESTORE DATABASE [drtest]
FROM DISK = N'C:\SQLBackup\drtest_full.bak'
WITH STANDBY = N'C:\SQLBackup\drtest.sby', REPLACE, STATS = 10
GO
SELECT TOP 10 * FROM drtest.dbo.MyTable
GO
-- Apply the differential; log backups 1 and 2 are not needed once it bridges the gap
RESTORE DATABASE [drtest]
FROM DISK = N'C:\SQLBackup\drtest_diff.dff'
WITH NORECOVERY
GO
RESTORE LOG drtest FROM DISK = N'C:\SQLBackup\drtest_log_3.trn' WITH NORECOVERY
GO
RESTORE DATABASE drtest WITH RECOVERY
GO
SELECT TOP 10 * FROM drtest.dbo.MyTable
GO
DROP DATABASE drtest
November 29, 2011 at 1:41 am
As it's a test server and space on the backup drive is an issue, I would keep the db in simple mode and take a differential backup after the delete if you need a recovery point at that point.
Then you won't need to maintain any log backups at all.
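For example, the post-delete recovery point would just be a differential like this (database name and path are placeholders):
-- The database stays in SIMPLE recovery; the differential taken right
-- after the cleanup job is the recovery point. Names here are placeholders.
BACKUP DATABASE [YourTestDB]
    TO DISK = N'D:\Backups\YourTestDB_postdelete.dff'
    WITH DIFFERENTIAL;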
November 29, 2011 at 1:47 am
Then you'll have to accept one of three things:
1) Keep this testing server in simple mode to avoid the need for transaction log backups.
2) Don't back up daily and accept the fact that your log chain is permanently broken and you can never do a point-in-time restore of the testing database.
3) Review the reasoning and logic behind constantly writing and removing data in your testing environment, and rewrite to avoid this problem.
3a) Get more space.
I tend to agree with Craig. You should set up your priorities. With limited resources in hand, you can't achieve everything.
November 21, 2012 at 1:44 pm
I wonder if something like this would work for you:
USE [Database]
GO
DECLARE @count int
SET @count = 1000

WHILE EXISTS (SELECT TOP 1 [col] FROM [Table] WHERE [whatever...])
BEGIN
    BEGIN TRANSACTION
    DELETE FROM [Table]
    WHERE [col] IN (SELECT TOP (@count) [col] FROM [Table] WHERE [whatever...])
    COMMIT TRANSACTION

    CHECKPOINT                        -- lets the VLFs be marked reusable
    WAITFOR DELAY '[however long]'    -- throttle so replication can keep up
END
The checkpoint after each batch will mark the VLFs as reusable (if I understand it correctly).
The WAITFOR DELAY can be used to keep replication from getting flooded.