April 21, 2011 at 2:13 am
Hello,
I have a development database which I need to refill with production data on a regular basis. To keep the development database from getting too big, I created a script which fills each table according to its own conditions. The production database is around 700 GB; I have managed to reduce the development copy to 60 GB. What I see is that the log file of the development database is about 29 GB. When the script performs the INSERT INTO command for very large tables, the log file grows enormously (to 29 GB).
Is the bulk-logged option a good idea, or should I use the simple recovery model? I have also been reading about things like the "truncate log on checkpoint" setting. I could do a checkpoint after each insert. Will this help? Or should I switch the recovery model after each insert statement in order to free the space?
regards,
Bennie
April 21, 2011 at 3:02 am
I don't know your policies about restoring development databases, but I can understand that point-in-time restore isn't needed. It also depends on how often you do the bulk import, but note that the bulk-logged recovery model only allows the database to be recovered to the end of a transaction log backup when that log backup contains bulk changes; point-in-time recovery is not supported.
So I would say simple, if you don't need point-in-time restore or you refresh often with bulk data.
April 21, 2011 at 3:52 am
Hi,
Thanks for the fast reaction. We don't need a restore point at all; if needed I will recreate the development database again (it takes about 2 hours). We will refresh this development database about 4 or 5 times a year, so I will put it on simple. I assumed that under the bulk-logged recovery model there would be less in the log file, but since I only do inserts this will have no advantage over what I experience now. (Correct?)
regards,
Bennie
April 21, 2011 at 4:01 am
BenniePielaat (4/21/2011)
Hi, Thanks for the fast reaction. We don't need a restore point at all; if needed I will recreate the development database again (it takes about 2 hours). We will refresh this development database about 4 or 5 times a year, so I will put it on simple. I assumed that under the bulk-logged recovery model there would be less in the log file, but since I only do inserts this will have no advantage over what I experience now. (Correct?)
regards,
Bennie
"Under the bulk-logged recovery model there would be less in the log file": that is true compared to full, since bulk operations are minimally logged rather than fully logged — but not compared to simple.
"But since I only do inserts this will have no advantage over what I experience now. (Correct?)": correct.
Simple is the way to go for you. Production is something else, of course.
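Switching to simple can be scripted as part of the refresh. A minimal sketch, assuming a database named DevDB (substitute your own name):

```sql
-- Switch the development database to the simple recovery model.
ALTER DATABASE DevDB SET RECOVERY SIMPLE;

-- Verify which recovery model is now in effect.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'DevDB';
```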
April 21, 2011 at 4:34 am
For starters, a 29 GB log file for a 60 GB database, considering all of the transactions that occur during your refresh, is not "enormous".
Bulk-logged does not always keep the transaction log "small". Only certain operations are minimally logged under this recovery model, so the log may still grow.
From BOL:
The following operations, which are fully logged under the full recovery model, are minimally logged under the simple and bulk-logged recovery models:
Bulk import operations (bcp, BULK INSERT, and INSERT ... SELECT * FROM OPENROWSET (BULK...)). For more information about when bulk import into a table is minimally logged, see Prerequisites for Minimal Logging in Bulk Import.
SELECT INTO operations when SELECT INTO creates a new table in the default filegroup. For more information, see SELECT (Transact-SQL).
Partial updates to large value data types, using the .WRITE clause in the UPDATE statement when inserting or appending new data. Note that minimal logging is not used when existing values are updated. For more information about large value data types, see Using Large-Value Data Types. For more information about the .WRITE clause, see UPDATE (Transact-SQL).
WRITETEXT and UPDATETEXT statements when inserting or appending new data into the text, ntext, and image data type columns. Note that minimal logging is not used when existing values are updated. For more information, see WRITETEXT (Transact-SQL) and UPDATETEXT (Transact-SQL).
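As an illustration of the first bullet, an INSERT ... SELECT can qualify for minimal logging when the prerequisites are met (simple or bulk-logged recovery model, a TABLOCK hint, and a heap or empty target, depending on the version). The table and linked-server names below are hypothetical:

```sql
-- Hypothetical names: dbo.DevOrders (target heap) and the
-- linked server PRODSRV. Whether this is actually minimally
-- logged depends on the prerequisites in the BOL topic above.
INSERT INTO dbo.DevOrders WITH (TABLOCK)
SELECT *
FROM PRODSRV.ProdDB.dbo.Orders
WHERE OrderDate >= '2010-01-01';
```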
Setting the "truncate log on checkpoint" option is a deprecated feature; leave it alone.
Set the database to simple recovery. In a testing environment, if disk space is an issue, you may need to shrink the database (Never in prod!!!).
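If you do shrink the log after a refresh in dev, a sketch with an assumed logical file name DevDB_log (look it up in sys.database_files first):

```sql
USE DevDB;

-- Find the logical name of the log file.
SELECT name, type_desc, size FROM sys.database_files;

-- Shrink the log file to roughly 1 GB (target size is in MB).
DBCC SHRINKFILE (DevDB_log, 1024);
```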
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/