May 4, 2009 at 12:32 pm
Hello Ville
Thanks for the nice explanation. Appreciated.
GK
May 7, 2009 at 4:24 pm
Ville Lucander (5/4/2009)
Hi, changing to bulk-logged would not have helped in this scenario. Updating existing column values is always fully logged. You can use bulk-logged for operations such as bcp, SELECT * INTO table FROM table2, and so on,
and all transactions are still logged even when you run the db in the Simple recovery model; the difference is that the log space is marked reusable after each transaction finishes.
Ville
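To make that distinction concrete, here is a rough sketch; the table names below are invented purely for illustration:

-- Eligible for minimal logging under the bulk-logged recovery model
-- (the statement itself creates the new target table):
SELECT o.*
INTO dbo.Orders_Staging
FROM dbo.Orders AS o;

-- Always fully logged, whatever the recovery model: each changed row is written to the log.
UPDATE dbo.Orders
SET Status = 'Processed'
WHERE Status = 'Pending';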
I have a question here. The log file size per day in my db is expected to be approximately 68 GB/day (input data comes in through FTP on an hourly basis). My current plan is to 1. back up the transaction log to disk every hour and 2. back it up to tape subsequently, keeping the logs on disk for only 2 days.
Backing up the log should release the space and reduce the log file size. As I found out, BACKUP LOG dbname WITH TRUNCATE_ONLY is deprecated in SS2008. If space is not released, I am planning to introduce DBCC SHRINKFILE through a scheduled job (cron). Is anyone out there loading large volumes of data every day?
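For what it's worth, a rough sketch of the hourly log backup step; the database name, logical log file name, and path below are placeholders, not anything from this thread:

-- Build a time-stamped backup file name and back up the log to disk.
DECLARE @file nvarchar(260) =
    N'D:\LogBackups\MyDb_'
    + CONVERT(nvarchar(8), GETDATE(), 112)                              -- yyyymmdd
    + N'_' + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), ':', '')     -- hhmmss
    + N'.trn';

BACKUP LOG MyDb TO DISK = @file WITH CHECKSUM;

-- Only if the physical file ever has to be reduced (normally it shouldn't be):
-- DBCC SHRINKFILE (MyDb_log, 10240);   -- target size in MB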
I tested with recovery model = FULL, and the log file generated with indexes in place seems to exceed 4.5 times the data file size. I am planning to test with the BULK_LOGGED model soon. Does BULK_LOGGED have any impact on query response?
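A possible sketch for that test (again, the database name and backup path are placeholders):

ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- ... run the hourly load / index maintenance here ...

DBCC SQLPERF(LOGSPACE);   -- compare log size and % used against the FULL-recovery run

ALTER DATABASE MyDb SET RECOVERY FULL;
-- Take a log backup after switching back; a log backup that contains minimally
-- logged operations cannot be restored to a point in time inside that backup.
BACKUP LOG MyDb TO DISK = N'D:\LogBackups\MyDb_post_load.trn';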
Would appreciate any inputs on this.
gk
May 7, 2009 at 4:59 pm
First, why shrink the log? If you load again tomorrow and it grows, what have you accomplished? Nothing.
Your strategy of backing up the log every hour is sound. You should set the log to be the maximum size needed for the peak updates in an hour, add a little pad, and then leave it there. Growing the log is a relatively expensive operation, so you don't want to do it unless you really have to.
Instead, a backup of the log will mark the committed transactions as backed up and then that space is reused.
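As a rough illustration of pre-sizing the log for the peak hour plus padding; the logical file name and sizes here are examples only:

-- Check current sizes first (size is reported in 8 KB pages):
SELECT name, size * 8 / 1024 AS size_mb
FROM MyDb.sys.database_files;

-- Grow the log once, up front, instead of letting autogrow do it during the load:
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 80GB, FILEGROWTH = 4GB);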
May 8, 2009 at 10:30 am
>First, why shrink the log? If you load again tomorrow and it grows, what have you accomplished? Nothing.
On second thought, I agree with you. Currently, in another production environment where I ship logs to the standby, I have scheduled shipping every 5 minutes, and that keeps the log from growing. But then the transaction volume is not high.
>Your strategy of backing up the log every hour is sound.
I think this should take care of log trimming.
> You should set the log to be the maximum size needed for the peak updates in an hour, add a little pad, and then leave it there. Growing the log is a relatively expensive operation, so you don't want to do it unless you really have to.
I don't leave any of my data files set to unrestricted growth; I always define the upper limit.
> Instead, a backup of the log will mark the committed transactions as backed up and then that space is reused.
This should remove the need for DBCC SHRINKFILE to shrink the transaction log.
I have another question; apologies if this sounds trivial. For this terabyte-sized db, which will be going to production in a few months, I am wondering whether I should go for filegroups. My original plan was to have partitions on filegroups, but if I do not have any partitions coming up in the near future, should I keep it all as one data file (.mdf) or use multiple data files? I have not worked on large SQL Server dbs, and this will be my first undertaking.
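If it helps, here is a minimal sketch of adding a filegroup and a secondary data file now, so that tables (and later partitions) can be placed on it; all names, paths, and sizes are invented:

-- Add a filegroup and a data file to it.
ALTER DATABASE MyDb ADD FILEGROUP FG_Archive;

ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_Archive1,
    FILENAME = N'E:\Data\MyDb_Archive1.ndf',
    SIZE = 100GB,
    FILEGROWTH = 10GB
) TO FILEGROUP FG_Archive;

-- New tables can then be created on that filegroup explicitly:
CREATE TABLE dbo.OrdersArchive
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL
) ON FG_Archive;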
TIA
GK