August 6, 2012 at 4:31 pm
We have database maintenance procedures which, in addition to performing backups, remove records older than a user-specified time limit in order to reduce the size of the database files on disk. I have a number of questions about improving the operation of these "pruning" tasks with regard to the transaction log. The database recovery model is currently set to Simple.
First, there is a requirement to allow the pruning/delete operations while the database is fully user-accessible. As a safety measure, I wrap these operations in a transaction. Is this the best approach? Should I be adamant that the database be placed in single user mode or taken offline?
Second, I find that the transaction log grows from about 3K to 60GB after the pruning is performed on a 4 GB database. I've tried batching the deletes into groups of around 10,000 records, and this improved matters, but the transaction log still ended up at around 40GB.
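For reference, here is a minimal sketch of the batched-delete pattern described above (table, column, and cutoff names are my assumptions, not the actual schema). Under the Simple recovery model, the log space used by each committed batch becomes reusable at the next checkpoint, which is what keeps log growth bounded:

```sql
-- Assumed schema: dbo.MasterTable with a RecordTimestamp column.
DECLARE @CutoffDate  datetime = DATEADD(DAY, -90, GETDATE()); -- assumed retention
DECLARE @BatchSize   int = 10000;
DECLARE @RowsDeleted int = 1;

WHILE @RowsDeleted > 0
BEGIN
    -- Each batch is its own implicit transaction, so it commits
    -- (and releases its log space for reuse) independently.
    DELETE TOP (@BatchSize)
    FROM dbo.MasterTable
    WHERE RecordTimestamp < @CutoffDate;

    SET @RowsDeleted = @@ROWCOUNT;

    CHECKPOINT;  -- in Simple recovery, allows the committed log space to be reused
END
```

Note that wrapping the entire loop in one outer transaction would defeat the purpose: the log cannot be truncated past the oldest open transaction, so all batches would accumulate in the log anyway.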
Here is a general outline of how the deletions proceed:
1. Delete from sub-table where timestamps in master table records are older than X.
(This is due to integrity constraints.)
2. Delete from master table where record timestamps are older than X.
3. Delete from other sub-tables where foreign key in master table is no longer found.
4. Delete from further sub-tables where timestamps are older than X.
I am using subqueries (nested queries) in steps 1 and 3: delete from sub-table where record ID in (select ... from master), and delete from sub-table where record ID not in (select ... from master).
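The four steps above can be sketched roughly as follows (all table and column names here are placeholders I am assuming for illustration). Rewriting the `IN` / `NOT IN` subqueries as `EXISTS` / `NOT EXISTS` often yields better plans and avoids the well-known `NOT IN` pitfall when the subquery can return NULLs:

```sql
-- Assumed schema: dbo.MasterTable (MasterID, RecordTimestamp) and
-- sub-tables carrying a MasterID foreign key.
DECLARE @CutoffDate datetime = DATEADD(DAY, -90, GETDATE()); -- assumed retention

-- Step 1: child rows whose parent is older than the cutoff
DELETE s
FROM dbo.SubTable1 AS s
WHERE EXISTS (SELECT 1
              FROM dbo.MasterTable AS m
              WHERE m.MasterID = s.MasterID
                AND m.RecordTimestamp < @CutoffDate);

-- Step 2: the master rows themselves
DELETE FROM dbo.MasterTable
WHERE RecordTimestamp < @CutoffDate;

-- Step 3: orphaned rows in other sub-tables
DELETE s
FROM dbo.SubTable2 AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.MasterTable AS m
                  WHERE m.MasterID = s.MasterID);

-- Step 4: sub-tables that carry their own timestamps
DELETE FROM dbo.SubTable3
WHERE RecordTimestamp < @CutoffDate;
```

Each of these statements could additionally be batched with `DELETE TOP (n)` in a loop, as described earlier, to keep per-transaction log usage small.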
Is there a more transaction-log-friendly way to go about this?
If the transaction log growth after deletion is expected behavior, I have noticed that I can back up the database after pruning and then shrink the file with DBCC SHRINKFILE.
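For completeness, a sketch of the backup-then-shrink step mentioned above (the database name, logical log file name, and target size are all assumptions; the logical names can be found via `sys.database_files`). Note that shrinking to a tiny target just forces the file to grow again during the next prune, so a realistic working-size target is preferable:

```sql
USE MyDatabase;  -- assumed database name

-- Look up the logical file names first:
-- SELECT name, type_desc, size FROM sys.database_files;

BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_full.bak';  -- assumed path

-- Shrink the log to an assumed 1024 MB working size, not to zero:
DBCC SHRINKFILE (MyDatabase_log, 1024);
```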
Further questions:
1. The prune/backup/shrinkfile approach just mentioned above would necessitate locking down the database (single user mode or offline) to prevent the loss of user transactions during this set of operations, correct?
2. Would it be better to employ a Full Recovery model and start backing up/shrinking the transaction log instead?
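One point worth separating in question 2: under the Full recovery model, the log is *truncated* (its space marked reusable) by taking log backups; it is not physically shrunk. A minimal sketch, with names and paths assumed:

```sql
-- A full backup must exist before log backups are possible:
BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_full.bak';   -- assumed path

-- Periodic log backups then allow log space to be reused:
BACKUP LOG MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_log.trn';    -- assumed path
```

Whether Full recovery is "better" depends on point-in-time recovery requirements, not on log size; if Simple recovery is acceptable for the data, switching models purely to manage the log adds operational overhead without shrinking anything.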
Thank you for any advice.
August 6, 2012 at 4:55 pm
One, you really don't want to shrink your database on a regular basis; it causes both index and file fragmentation, which will affect performance. Regarding the transaction log, may I suggest reading this article:
http://www.sqlservercentral.com/articles/Administration/64582/
August 6, 2012 at 9:10 pm
Shrinking the log file is not recommended unless you are running out of drive space. If your disk can hold a 60GB log file, you can leave it as is for now.
Also, when does the pruning activity happen? During off-peak hours?
August 6, 2012 at 9:40 pm
You might consider looking into partitioning. Partition your table so that the rows you want to get rid of are in their own partition, then use SWITCH to move them into a separate table, and then drop that table. I've read about doing this several times, although right now I can't think where. You should be able to find it easily enough. You might also look at Kimberly Tripp's blog; she does a lot on partitioning.
The last half of this article seems to do what you are looking for.
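A rough sketch of the switch-and-drop pattern described above (the table, partition function, boundary value, and staging table names are all assumptions; the real setup requires a partition function/scheme and a staging table with an identical structure on the same filegroup):

```sql
-- Assume dbo.MasterTable is partitioned on RecordTimestamp and
-- partition 1 holds only rows older than the purge boundary.

-- Metadata-only operation, nearly instant regardless of row count:
ALTER TABLE dbo.MasterTable
    SWITCH PARTITION 1 TO dbo.MasterTable_Staging;

-- Dropping (or truncating) the staging table is minimally logged,
-- unlike deleting the same rows one by one:
DROP TABLE dbo.MasterTable_Staging;

-- Finally, merge the now-empty range out of the partition function
-- (assumed function name and boundary value):
ALTER PARTITION FUNCTION pfMasterByDate()
    MERGE RANGE ('2012-01-01');
```

The appeal for this thread's scenario is that switching and dropping a partition barely touches the transaction log, whereas deleting the same rows is fully logged.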
Kenneth Fisher
I was once offered a wizard's hat but it got in the way of my dunce cap.
--------------------------------------------------------------------------------
For better, quicker answers on T-SQL questions, click on the following... http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following... http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
Link to my Blog Post --> www.SQLStudies.com