June 20, 2005 at 2:45 pm
We had a rather strange occurrence on one of our servers over the weekend. We have a maintenance plan set up on the server to run the optimization task early Sunday morning. We had been having problems with some of the transaction logs growing extremely large during the index rebuilds and pushing the disk drive over 90% capacity. In an attempt to alleviate the problem, I created a job to set a few of the larger databases to BULK_LOGGED recovery before the optimization job and set them back to FULL afterwards.

All worked as planned and I didn't receive any disk space alerts. However, during the hourly transaction log backups, one of the databases' log backups failed because there was not enough free space on the disk. The database in question was one of the databases I had set to BULK_LOGGED and then back to FULL. The full database backup the night before was about 25GB. The physical log file is 10GB, and when I checked it only had about 500MB used, yet the log backup failed four times in a row even though there was about 10GB free space on the drive it backs up to, so I was confused about why it was failing. I thought maybe it needed to back up the entire 10GB even though only a small portion was being used; that didn't make sense to me, but it was the only cause I could think of.

So I cleared some additional space on the drive and ran the transaction log backup successfully, but the backup file was 25GB?! I don't understand that at all, especially considering the physical log file itself is only 10GB. I double-checked to make sure I backed up the log and not the database, but from the size it looks like it backed up the entire database into a TRN file. Any ideas what may have caused this? I don't know if this matters, but the database in question is a publisher in transactional replication and is also on a clustered instance.
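In case it helps, the job I added is essentially just the following, with the optimization task sandwiched in between (database name changed for this post):

ALTER DATABASE MyLargeDB SET RECOVERY BULK_LOGGED
-- ... maintenance plan optimization (index rebuilds) runs here ...
ALTER DATABASE MyLargeDB SET RECOVERY FULL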
June 21, 2005 at 1:22 am
I'm not sure if this applies to index rebuilds, but...
When you load data (using BCP or BULK INSERT) into a database that is set to BULK_LOGGED, SQL Server only writes the page allocations to the transaction log (which makes BCP and BULK INSERT much quicker). But when you back up the transaction log, SQL Server goes back to the data files to retrieve the actual data in those pages and writes it out to the log backup file. Hence, your log backup can be many times larger than the amount of space used in the transaction log.
I've also previously noted that index rebuilds and defrags produce transaction logs out of all proportion to the size of the index.
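You can see the effect with something along these lines (database, table, and file names here are just examples):

ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED

-- Minimally logged bulk load: only the page allocations hit the log
-- (TABLOCK helps the load qualify for minimal logging)
BULK INSERT dbo.BigTable FROM 'C:\data\bigtable.dat' WITH (TABLOCK)

-- The log file shows very little space used...
DBCC SQLPERF(LOGSPACE)

-- ...but the log backup also copies the data extents the load touched,
-- so the .trn file can end up far bigger than the log file itself
BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn'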
June 21, 2005 at 2:46 am
Michelle
I second Ian's thoughts on the BCP and BULK INSERT behavior; an index rebuild is minimally logged in exactly the same way.
As for why your log file shows so little space used yet the backup is so large: when the database is switched to BULK_LOGGED, SQL Server logs only the extent allocations (8-page units) instead of each row or page that changed, so the log itself stays small, but the log backup has to pull the actual data back out of those extents.
Now coming to your problem: I had the same situation with one of my databases. Here is the approach I implemented (a rough T-SQL sketch of these steps follows below) -
In the optimization job, I included new steps to
(1) Take a transaction log backup (with no truncate) of the large databases, which produce huge log backups after an index rebuild
(2) Take a full backup of the database so you have a consistent baseline to restore from
(3) As the last step, switch the recovery model from BULK_LOGGED back to FULL
Note: no hourly transaction log backups are scheduled while the optimization job is running.
In my shop, activity on this database during the night hours is pretty negligible, so I accepted the risk of having no transaction log backups while the optimization job runs.
You may need to implement accordingly based upon your recovery requirements.
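In T-SQL terms, the added job steps boil down to something like this (the database name and backup paths are just placeholders, not your actual ones):

-- (1) Log backup (with no truncate) to capture the bulk-logged rebuild work
BACKUP LOG BigDB TO DISK = 'E:\Backups\BigDB_PostRebuild.trn' WITH NO_TRUNCATE

-- (2) Full backup to give a clean, consistent restore baseline
BACKUP DATABASE BigDB TO DISK = 'E:\Backups\BigDB_Full.bak'

-- (3) Switch the recovery model back from BULK_LOGGED to FULL
ALTER DATABASE BigDB SET RECOVERY FULL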
Hope this helps.
Viking
June 21, 2005 at 6:53 am
Viking and Ian, thanks for your responses. I set the databases to BULK_LOGGED before the optimization job to minimize the growth of the transaction log file; little did I know the next log backup would produce such a large file. I'm not sure how I will implement a solution, as we are limited on disk space. I think I may just schedule the daily full backup of those databases to run after the optimization. That way I will have just a 25GB database backup file to worry about, not a 25GB database backup and a 25GB transaction log backup.
June 21, 2005 at 10:21 am
I think where you need to sharpen the pencil is on the optimization job. You could script it so that you pick and choose what needs to be rebuilt and scatter the tables across different days; that way you can run it more often with less intensity.
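For example, something along these lines, with the table list split across the week (table name is just a placeholder):

-- Check fragmentation first; only rebuild what actually needs it
DBCC SHOWCONTIG ('dbo.Orders') WITH FAST

-- Rebuild this table's indexes tonight; other tables get other nights
DBCC DBREINDEX ('dbo.Orders')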
Just my $0.02
* Noel