November 28, 2011 at 9:40 am
I have a maintenance task that cleans a table daily on a testing server. Because it deletes some transaction data we don't need, it creates a big transaction log file and makes the drive almost full.
What is the best way of doing this without using much space? I don't need the transaction log for this process.
Thanks.
November 28, 2011 at 9:53 am
sqlfriends (11/28/2011)
What is the best way of doing this without using much space, I don't need the transaction log for this process.
There's no way to ignore or discard the transaction log. Take t-log backups for each batch.
-- Gianluca Sartori
November 28, 2011 at 10:01 am
This has an impact on replication as well.
I would set up a 5-minute (or shorter) transaction log backup and, rather than deleting all the rows in one go, delete them in iterative sets using a WHILE loop.
I'd probably get the min and max IDs, if there are any, and delete in batches of 10,000 with a delay in between. This should help, as the transaction log is being managed while the deletes are ongoing.
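The batching approach above can be sketched in T-SQL roughly like this; the table, column names, and retention cut-off are all illustrative assumptions, not from the original post:

```sql
-- A minimal sketch of a key-range batched delete.
-- dbo.ArchiveTransactions, Id, and CreatedDate are assumed names.
DECLARE @MinId INT, @MaxId INT, @BatchStart INT;

SELECT @MinId = MIN(Id), @MaxId = MAX(Id)
FROM dbo.ArchiveTransactions
WHERE CreatedDate < DATEADD(DAY, -30, GETDATE());

SET @BatchStart = @MinId;

WHILE @BatchStart <= @MaxId
BEGIN
    DELETE FROM dbo.ArchiveTransactions
    WHERE Id >= @BatchStart
      AND Id < @BatchStart + 10000
      AND CreatedDate < DATEADD(DAY, -30, GETDATE());

    SET @BatchStart = @BatchStart + 10000;

    WAITFOR DELAY '00:00:05';  -- pause so the log backup job can keep up
END;
```

Keying the batches off the clustered index avoids repeated full scans, and the delay gives the scheduled log backup a chance to run between batches.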
November 28, 2011 at 10:06 am
MysteryJimbo (11/28/2011)
This has an impact on replication as well.
Correct, but I don't see replication mentioned anywhere on this thread.
-- Gianluca Sartori
November 28, 2011 at 10:12 am
I don't use replication on this server.
Can I change the recovery model to simple, then, after the delete batch has processed, change it back to full recovery?
Thanks
November 28, 2011 at 10:24 am
You can do that, but make a full backup after getting back to full so you can keep your recovery chain intact.
You can batch delete, and run log backups as well.
If you delete 1 million rows, and that creates 10GB of log records, you can't get away from that.
However you can do this:
Option 1:
delete 1 million rows at once
- need 10GB log space
- need 10GB of log backup space (one file)
Option 2:
delete 50k rows, then run a log backup
repeat 20 times
if each log backup is now 500MB, I now need:
- 0.5GB log space (+ some pad)
- 10GB log backup space, broken into 20 500MB files
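Option 2 might look roughly like this in T-SQL; the database name, table name, filter column, and backup path are illustrative assumptions:

```sql
-- Delete in batches, taking a log backup after each batch so the
-- log space can be reused before the next batch runs.
DECLARE @Rows INT = 1;

WHILE @Rows > 0
BEGIN
    DELETE TOP (50000) FROM dbo.BigTable
    WHERE IsObsolete = 1;

    SET @Rows = @@ROWCOUNT;

    IF @Rows > 0
        BACKUP LOG MyTestDb
            TO DISK = 'E:\LogBackups\MyTestDb_batch.trn';
END;
```

By default BACKUP appends each backup set to the same file; writing a distinct file name per batch instead produces the 20 separate 500MB files described above.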
November 28, 2011 at 10:44 am
I realize I should have said that I don't have enough space on the backup drive; the log drive is fine for now.
So how can I reduce the log backup file size?
If I change it to simple then back to full, I need to do a full backup, and that will still use a lot of space.
Can I change it to bulk-logged mode, then do the delete batch, then change it back to full?
That way I guess I don't need to do the full backup afterward, and the log backups will be smaller?
November 28, 2011 at 11:52 am
sqlfriends (11/28/2011)
I realize I should have said that I don't have enough space on the backup drive; the log drive is fine for now. So how can I reduce the log backup file size?
If I change it to simple then back to full, I need to do a full backup, and that will still use a lot of space.
Can I change it to bulk-logged mode, then do the delete batch, then change it back to full?
That way I guess I don't need to do the full backup afterward, and the log backups will be smaller?
First, since this is a test/development system - is there a reason you need to keep the database in full recovery model? I would recommend switching to simple recovery model and not worrying about backing up the transaction log.
Second, switching recovery models will not have an effect on how much is logged during a delete. What changes is when the VLFs are marked as reusable, which in simple recovery will be after a checkpoint is run, and in full/bulk-logged will be after a transaction log backup.
Switching to simple recovery, then deleting in batches (with a checkpoint issued after each batch) will reduce how large the transaction log will get. However, it will not reduce how much is actually logged - it just allows the space to be reused without having to perform a transaction log backup first.
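As a rough sketch of the batch-plus-checkpoint pattern, assuming the database is already in simple recovery (all object names are illustrative):

```sql
-- Batched delete with an explicit checkpoint after each batch.
DECLARE @Rows INT = 1;

WHILE @Rows > 0
BEGIN
    -- delete a limited slice so each transaction stays small
    DELETE TOP (50000) FROM dbo.BigTable
    WHERE IsObsolete = 1;

    SET @Rows = @@ROWCOUNT;

    -- in simple recovery, a checkpoint marks the inactive log as reusable
    CHECKPOINT;
END;
```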
Again, I would recommend switching to simple recovery and not switching back since this is a test/development system.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
November 28, 2011 at 12:05 pm
Thank you. This is a test environment, but it mimics the production server, so we still want to keep the backups, although we keep only two weeks of backups on disk vs. four weeks in production.
On the production box we also have this process, but it has more drive space than the testing server, so we don't have this problem there.
So I am trying to find a way to do this on testing, but I really don't want to keep it in simple recovery all the time.
November 28, 2011 at 12:13 pm
Gianluca Sartori (11/28/2011)
MysteryJimbo (11/28/2011)
This has an impact on replication as well.
Correct, but I don't see replication mentioned anywhere on this thread.
I know it's correct; that's why I said it. I do have replication, and this is how I get around the log reader latency, by spoon-feeding it.
By adding a delay and batching up the commands, it gives you the option to manage the transaction log backups onto a different drive or transfer them over the network.
November 28, 2011 at 2:03 pm
If you aren't overly worried about recovery, then switch to simple, and delete in batches, which will limit log file size. Then switch back to full, make a full backup, and you can continue log backups, without having the deletes included in the log backups.
November 28, 2011 at 2:16 pm
this is a test environment, but it mimics the production server, so we still want to keep the backups, although we keep only two weeks backup on disk vs production 4 weeks.
I understand the value of having your test system mimic your production system as much as possible. As I understand it, here is your problem.
You have a process which deletes a large number of records from a table (or tables). Since deletions are fully logged in SQL Server, this causes your transaction log file to grow until you run out of storage space. Running the deletions in batches (with Transaction Log backups between the batches) will keep your transaction log file at a manageable size, but increases the size of the backup files, and you are limited in storage space for the backups as well.
No doubt about it, it is a pickle. If you want the transaction to be fully logged, then perhaps reducing the length of time you store your backup files would give you enough room. Since this is a test system, having one week of backups instead of two is not going to break your Service Level Agreement.
Of course, you might run into this problem in production one day. So, you might want to find a solution now. (That is why you have a test system, after all.) Assuming you don't need these deletions logged for a point-in-time restore from backup, then perhaps you could copy the data you want to keep into a new table, drop the old table, and then rename the new table with the same name as the old table.
A variation on this solution is to TRUNCATE the table to remove all data. TRUNCATEs and DROPs are logged (all SQL operations are logged), but they are not fully logged like DELETEs, so they won't cause your transaction log files to increase in size dramatically.
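A rough sketch of the copy-and-swap variant, with all object names and the retention predicate as illustrative assumptions:

```sql
-- Copy only the rows you want to keep into a new table.
SELECT *
INTO dbo.Transactions_Keep
FROM dbo.Transactions
WHERE CreatedDate >= DATEADD(DAY, -30, GETDATE());

-- Remove the old table; the drop is logged as page deallocations,
-- not row by row like a DELETE.
DROP TABLE dbo.Transactions;

-- Give the new table the original name.
EXEC sp_rename 'dbo.Transactions_Keep', 'Transactions';
```

Note that SELECT ... INTO does not carry over indexes, constraints, triggers, or permissions; those would need to be recreated on the new table afterward.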
I would advise against switching the recovery model between simple and full. This has a lot of effects, and it could well come back to bite you [you-know-where] some time in the future. Switching between bulk-logged and full is not as bad, but the bulk-logged recovery model does not actually log less data for DELETEs than the full recovery model; DELETE is a fully logged operation in every recovery model.
It would be nice if SQL Server had a BULK DELETE command akin to the BULK INSERT command, but it doesn't.
November 28, 2011 at 2:16 pm
if you want to save space on your backup drive, after switching back to full, take a differential backup and then start your log backups again
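That sequence might look like this; the database name and backup path are illustrative:

```sql
ALTER DATABASE MyTestDb SET RECOVERY SIMPLE;

-- ... run the batched delete here ...

ALTER DATABASE MyTestDb SET RECOVERY FULL;

-- A differential backup restarts the log chain without
-- the size of a brand-new full backup.
BACKUP DATABASE MyTestDb
    TO DISK = 'E:\Backups\MyTestDb_diff.bak'
    WITH DIFFERENTIAL;
```

The differential contains only the extents changed since the last full backup, so it is typically far smaller, yet it re-establishes a valid base for subsequent log backups.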
---------------------------------------------------------------------
November 28, 2011 at 2:16 pm
Steve Jones - SSC Editor (11/28/2011)
If you aren't overly worried about recovery, then switch to simple, and delete in batches, which will limit log file size. Then switch back to full, make a full backup, and you can continue log backups, without having the deletes included in the log backups.
Thanks, but a full backup also takes a lot of space. Can I change it to bulk-logged, then do the delete, then change back to full recovery? That way I don't need to do a full backup, and it also minimizes the log size. Is that correct?
Thanks
November 28, 2011 at 2:19 pm
posts crossed - in case you miss it
if you want to save space on your backup drive, after switching back to full, take a differential backup and then start your log backups again
---------------------------------------------------------------------