August 7, 2008 at 11:24 am
I've posted about something similar before and got one response, but I'm hoping this new and different scenario, posted to a different section of the forums, may result in more answers to this question/issue.
The Action: An automated SQL Server 2005 Maintenance Plan calls a job that reorganizes and rebuilds specific indexes (not all indexes) in an application-specific database (a sketch of the kind of step the job runs is below).
The Problem: The maintenance plan/job pushes the log file to its maximum allowed size within a few minutes of starting. The recovery model was recently changed from Simple to Full and then to Bulk-Logged. While set to SIMPLE the job ran without error, and the log file had the same size settings/space it does now. We don't have the space available to simply let the log file grow large enough to handle everything the job will write to it under the Bulk-Logged recovery model. The job took about two hours when recovery was set to SIMPLE; accommodating the amount of data being logged now that it's set to Bulk-Logged, within that same log file, would require more space than is realistic.
The Question: Aside from changing the recovery model to SIMPLE before the job begins and then back to Bulk-Logged after it completes, is there any method or option that changes how much detail/information is written to the log file during the job's execution?
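For reference, each index the job touches gets a step along these lines; the index and table names here are only placeholders, not the actual objects:

-- Placeholder names, shown only to illustrate the kind of work being logged
ALTER INDEX IX_Orders_Customer ON dbo.Orders REORGANIZE;  -- reorganize: always fully logged, regardless of recovery model
ALTER INDEX IX_Orders_Customer ON dbo.Orders REBUILD;     -- rebuild: can be minimally logged under Bulk-Logged or Simple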
Thanks
YSLGuru
Kindest Regards,
Just say No to Facebook!
August 8, 2008 at 7:18 am
No.
To keep the log file size down, you either need to use simple recovery or do log backups. In your case, I don't think log backups will be a useful way to go, so I would recommend you change the recovery model to simple, run the process you have, set it back, and do a full backup.
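Roughly something like this (the database name and backup path are placeholders):

ALTER DATABASE MyAppDB SET RECOVERY SIMPLE;

-- run the index reorganize/rebuild job here

ALTER DATABASE MyAppDB SET RECOVERY BULK_LOGGED;

-- take a full backup to restart the backup chain after the recovery model switch
BACKUP DATABASE MyAppDB TO DISK = N'X:\Backups\MyAppDB_Full.bak';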
August 8, 2008 at 12:38 pm
SSComitted - Thanks for replying. So far all my inquiries (not just here at SQLServerCentral.com) have resulted in this same answer: you either have to back up the log file or switch to the SIMPLE recovery model.
Backing up the log file isn't an option, because the job that is blowing it up performs enough work that the log file can't even contain the first few minutes' worth before maxing out, and it doesn't matter if the log file was backed up and truncated just prior to the job executing.
Bottom line: if we want this job's work recorded in the log at the BULK-LOGGED or FULL recovery model level of detail, we have to provide the log file with more space.
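For anyone following along, the log growth is easy to watch while the job runs with something like this (the database name is a placeholder):

DBCC SQLPERF(LOGSPACE);   -- size and percent used of each database's log

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyAppDB';  -- what, if anything, is preventing log space from being reused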
Thanks again
Kindest Regards,
Just say No to Facebook!
August 11, 2008 at 10:16 am
One last question for all regarding this issue.
How do large organizations with VLDBs (Very Large Databases), far larger than my 100 GB database, handle this kind of situation? How do they perform resource-intensive actions, those operations in SQL that write large amounts to the log file(s), without ending up with a log file in the terabytes? Or do they simply have a large amount of space set aside for the log file?
Based on what I know and have read about the recovery models, no SQL Server user should run a production/live database under the SIMPLE recovery model unless they know and are OK with not being able to recover a failed database to any point more recent than the last full backup. So I'm assuming SQL Server users with VLDBs aren't using the SIMPLE recovery model as a way to deal with this log file issue.
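The only pattern I can picture is rebuilding one index at a time and taking a log backup between each, so the log only ever has to hold one index's worth of work. A rough sketch, with every database, index, and path name made up:

-- Sketch only: all names and paths below are invented for illustration
ALTER INDEX IX_Orders_Customer ON dbo.Orders REBUILD;
BACKUP LOG MyAppDB TO DISK = N'X:\LogBackups\MyAppDB_log_01.trn';

ALTER INDEX IX_Invoices_Date ON dbo.Invoices REBUILD;
BACKUP LOG MyAppDB TO DISK = N'X:\LogBackups\MyAppDB_log_02.trn';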
Thoughts?
Thanks
Kindest Regards,
Just say No to Facebook!