April 9, 2010 at 1:13 am
Hi,
I'm working in a production environment doing DB maintenance. We have a database around 3 GB in size. The log file is set to autogrow in 1 MB increments, up to a 10 GB limit.
The log file is backed up on an hourly basis, but there is still a problem with its increasing size.
It inflates up to whatever upper limit is defined and the space does not get released unless I back it up again and shrink it manually.
This is causing me disk space problems...
Any suggestions please...
Thanks...
April 9, 2010 at 1:22 am
If it keeps growing, this means that your database is heavily hit by writes. You probably need to take log backups more frequently.
Growing the file by 1 MB at a time can lead to lots of autogrow events, which can slow things down. I suggest a growth increment of at least 50 MB.
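Something along these lines should do it; MyDB and MyDB_log are just placeholders for your database name and the logical name of the log file:
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 50MB);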
-- Gianluca Sartori
April 9, 2010 at 1:29 am
Do you have many bulk operations? If so, you could try changing the recovery model to bulk-logged (after some reading on the topic).
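For example (MyDB is just a placeholder, and remember to switch back once the bulk work is done):
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
-- ...run the bulk operations...
ALTER DATABASE MyDB SET RECOVERY FULL;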
April 9, 2010 at 1:30 am
Hi,
Thanks for your suggestion...
The point is, I have many other sites running the same application with a far higher number of write events than the DB I'm talking about, but the transaction logs there don't grow like this. The frequency of backups is the same.
Increasing the autogrowth increment might help.
Is there any database property I could set that would help?
April 9, 2010 at 2:37 am
No... we don't have many bulk operations either...
There are jobs scheduled to update tables, but the same jobs run successfully on other DBs without affecting the log files there...
April 9, 2010 at 2:44 am
There must be something that makes your log grow. Try tracing the activity with Profiler; maybe you'll find the culprit there.
-- Gianluca Sartori
April 9, 2010 at 3:36 am
Hi, when transactions happen the log will grow automatically. When that happens you can shrink the log file; run the command below in your database:
DBCC SHRINKFILE('log_file_name', 300)
If the shrink doesn't release the space, then:
BACKUP LOG database_name WITH TRUNCATE_ONLY
Regards,
Adapala.N
April 9, 2010 at 3:47 am
naresh_ee (4/9/2010)
Hi, when transactions happen the log will grow automatically. When that happens you can shrink the log file; run the command below in your database:
DBCC SHRINKFILE('log_file_name', 300)
If the shrink doesn't release the space, then:
BACKUP LOG database_name WITH TRUNCATE_ONLY
Regards,
Adapala.N
Not a good idea at all.
DBCC SHRINKFILE can release unused space from the log file, but if you are taking log backups regularly there shouldn't be much unused space to release, and shrinking would only force the log file to grow again.
BACKUP LOG WITH TRUNCATE_ONLY breaks the log chain, which means you won't be able to take log backups until you take a full or differential backup. I don't think this is what the OP wants.
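If the log chain has already been broken that way, a rough sketch of getting back to regular log backups would look like this (MyDB and the paths are just placeholders):
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak';
-- log backups are possible again from this point on
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';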
-- Gianluca Sartori
April 9, 2010 at 4:14 am
Why don't you compare a full day's trace of this database with a trace of one of the similar databases and see if you can spot any suspicious operation.
Also try to audit when this growth started and see what might have changed at that time, e.g. Windows patches, antivirus, etc.
April 14, 2010 at 6:34 am
Take more frequent t-log backups and run a server-side trace (a minimal sketch is below). See if you can match up activity in the trace with the times that you get large t-log backup files.
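Just as a starting point, something like this; the trace file path, max size and the columns captured are only an example, and Profiler can script a fuller version for you:
DECLARE @TraceID int, @on bit;
SET @on = 1;
-- create the trace, writing to a file (path and 50 MB max size are placeholders)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'D:\Traces\log_growth', 50;
-- capture SQL:BatchCompleted (event 12) with a few useful columns
EXEC sp_trace_setevent @TraceID, 12, 1, @on;   -- TextData
EXEC sp_trace_setevent @TraceID, 12, 35, @on;  -- DatabaseName
EXEC sp_trace_setevent @TraceID, 12, 17, @on;  -- Writes
EXEC sp_trace_setevent @TraceID, 12, 14, @on;  -- StartTime
-- start the trace
EXEC sp_trace_setstatus @TraceID, 1;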
Also, this topic has been covered 100+ times, so a search may bring up other useful threads to help you.