July 6, 2010 at 6:29 pm
Looking for options. I'm considering the need to switch to the simple recovery model during our weekly reindexing maintenance window. Our systems team handles backups with Commvault iDataAgents. Log backups run every 15 minutes. This past Saturday we reached the point where a log backup took over 2 hours during reindexing and the log file grew by 35GB, coming within 17GB of maxing out the log drive.
Aside from not having log backups during this window, the other variable is what Commvault will do when the recovery model is switched to simple and then back to full. The product seems to run in wizard mode and launches various types of backups on its own when the recovery model is toggled.
July 6, 2010 at 7:17 pm
Indianrock (7/6/2010)
Looking for options. I'm considering the need to switch to the simple recovery model during our weekly reindexing maintenance window. Our systems team handles backups with Commvault iDataAgents. Log backups run every 15 minutes. This past Saturday we reached the point where a log backup took over 2 hours during reindexing and the log file grew by 35GB, coming within 17GB of maxing out the log drive.
Aside from not having log backups during this window, the other variable is what Commvault will do when the recovery model is switched to simple and then back to full. The product seems to run in wizard mode and launches various types of backups on its own when the recovery model is toggled.
I would not recommend switching to simple, as that is going to break the log chain and could prevent you from restoring the system to a point in time after that break. To restart the log chain, you have to perform a full backup after you switch back to full.
I would recommend that you not reindex everything, and instead only reindex the tables that actually need it. Search for SQL Fool; she has a good script available to perform smart reindexing.
If that does not suffice, then you can consider changing to the bulk_logged recovery model. This will allow for minimally logged transactions during your reindexing, but your log backups will be just as large. And you might not be able to recover to a point in time that falls within a period when the recovery model was set to bulk_logged.
And finally, if you need the log file to grow by 35GB (or even more) to perform your weekly maintenance, then you should consider adding storage to make sure you have that space available. I have several systems where the transaction log is quite large just for these kinds of operations.
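If you do go the bulk_logged route, the switch itself is simple. A minimal sketch, where YourDB and the backup path are placeholders for your own names:

    -- Switch to bulk-logged for the maintenance window
    ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED;

    -- ... run the index rebuilds here ...

    -- Switch back to full. This does NOT break the log chain,
    -- but take a log backup afterwards so point-in-time recovery
    -- is available again past the minimally logged window.
    ALTER DATABASE YourDB SET RECOVERY FULL;
    BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_log.trn';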
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
July 6, 2010 at 7:39 pm
Switching to SIMPLE mode may not resolve your issues. SIMPLE mode only means the log space is made available for reuse once transactions have committed and a checkpoint has run. If you are reindexing lots of large tables, the changes are still going to be logged and the log file will still grow. If you go to simple mode, any log backups that run will fail.
You may need to configure your reindexing to not do large tables in parallel if you haven't already done this.
I don't know a lot about automating Commvault, but it sounds like this has been configured to make sure you always have a valid backup.
One of my issues with using 3rd party products like Commvault is the difficulty of synchronising processes like this. If you were only using SQL native backups, you could easily disable the log backups, put the DB into SIMPLE mode, run your indexing, shrink the log file when done, put the DB back into FULL mode, take a full backup, and re-enable the log backups, all from a single SQL Agent job or SSIS package.
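As a rough sketch of that native sequence (database and file names are placeholders, the shrink target is arbitrary, and the log backup job would be disabled/enabled via SQL Agent around it):

    -- Disable the scheduled log backup job first, then:
    ALTER DATABASE YourDB SET RECOVERY SIMPLE;

    -- ... run the reindexing here ...

    -- Shrink the log back down (target size in MB, here 10GB)
    DBCC SHRINKFILE (YourDB_log, 10240);

    -- Back to full recovery; the full backup restarts the log chain
    ALTER DATABASE YourDB SET RECOVERY FULL;
    BACKUP DATABASE YourDB TO DISK = N'X:\Backups\YourDB_full.bak';
    -- Re-enable the scheduled log backup job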
Leo
Nothing in life is ever so complicated that with a little work it can't be made more complicated.
July 6, 2010 at 7:55 pm
I'm not happy that I can no longer do native SQL backups and that our Systems Team now handles things with Commvault, but that's another topic. I have seen that reindexing and/or update statistics with fullscan following recent major data loads was kept under control by going to simple recovery: the log did not grow a tremendous amount. We do use one of the common stored procedures that checks index fragmentation and then either rebuilds or reorganizes depending on the level of fragmentation (I believe the "online" option is being used).
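For anyone curious, the fragmentation check those procedures are built around looks something like this (the 10/30 per cent thresholds are the commonly quoted rule of thumb, not necessarily what our procedure uses):

    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10;
    -- Common rule of thumb: REORGANIZE between 10 and 30 per cent,
    -- REBUILD (WITH ONLINE = ON on Enterprise) above 30 per cent.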
Normally the contents of the log file are minimal and it has been sized at about 120GB. This last Saturday it filled that up and autogrew by 35GB. So I suppose adding log space is one option, or some modification to spread the reindexing over a longer period of time so the log backups can keep things tidy. We normally do our full backup on Sunday anyway, so if Commvault would seamlessly take a full backup when full recovery resumes Sunday morning, that would work. At any point up until switching to simple recovery, we could restore using the prior weekend's full backup, plus the previous night's differential, plus log backups to the point desired.
July 7, 2010 at 5:58 am
It's not the size of the log backups that concerns me, since they go to separate disks; it's the size of the log file itself.
According to this MSDN article, ALTER INDEX REBUILD is minimally logged under either the bulk-logged or simple recovery model. ALTER INDEX REORGANIZE is fully logged regardless of recovery model.
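For reference, the two statements in question (index and table names are placeholders; ONLINE = ON requires Enterprise Edition):

    -- Minimally logged under SIMPLE or BULK_LOGGED recovery
    ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD WITH (ONLINE = ON);

    -- Always fully logged, regardless of recovery model
    ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;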
July 7, 2010 at 7:39 pm
The issue with switching to the SIMPLE recovery model is that you break the log chain. By doing that, if your current backup is corrupted, you will not be able to restore a previous backup and apply all the transaction logs up to the current point in time. You would only be able to apply the transaction logs up to the point where the log chain was broken.
If you change to BULK_LOGGED, you can apply all of the transaction logs up to the current point in time, because that does not break the log chain. However, you may not be able to restore using the STOPAT parameter for a transaction log backup that was taken while in this recovery model. You would have to apply that entire transaction log backup if it contains any minimally logged transactions.
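To illustrate, a point-in-time restore uses STOPAT like this (names and the timestamp are placeholders); it is this second step that can fail if the target time falls within a minimally logged operation:

    RESTORE DATABASE YourDB FROM DISK = N'X:\Backups\YourDB_full.bak'
        WITH NORECOVERY;
    RESTORE LOG YourDB FROM DISK = N'X:\Backups\YourDB_log.trn'
        WITH STOPAT = '2010-07-06T19:00:00', RECOVERY;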
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
July 7, 2010 at 10:04 pm
If you are concerned about the size of the log file, consider using one of the following methods to prevent the transaction log files from growing unexpectedly:
1. Back up the transaction log regularly to clear the inactive transactions from your transaction log.
2. Design the transactions to be small.
3. Make sure that no uncommitted transactions continue to run for an indefinite time.
4. Schedule the Update Statistics option to occur daily.
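To check points 1 and 3, a couple of standard diagnostics (YourDB is a placeholder):

    DBCC SQLPERF(LOGSPACE);   -- log size and per cent used, per database
    DBCC OPENTRAN(YourDB);    -- oldest active transaction holding up log truncation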
July 8, 2010 at 5:10 am
If you are concerned about the size of the log file, consider using one of the following methods to prevent the transaction log files from growing unexpectedly:
1. Back up the transaction log regularly to clear the inactive transactions from your transaction log.
2. Design the transactions to be small.
3. Make sure that no uncommitted transactions continue to run for an indefinite time.
4. Schedule the Update Statistics option to occur daily.
Good general advice. In this scenario it doesn't quite apply (I have the same occurring on one of my test boxes). Point 1 is generally true, but when you are reorganising all of your tables (and therefore many gigs of data) you will find each transaction log backup takes longer than usual and will take up x GB of space. In the meantime the log file fills up, because the log backups cannot mark transactions as inactive quickly enough to keep pace.
Point 2 is the correct way to go in this case (as mentioned above), in that you need to split the reorg of any large tables into batches, so it doesn't all happen on one night/in one batch (see the sketch below).
Point 4 would not help in this case (other than giving you the everyday performance boosts required on a large database). And as you hopefully know, you should adjust it so that it doesn't run a full scan (a full scan can take ages, and probably would on your server if it's anything like mine with the same reorg issues).
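Here's that sketch: one illustrative way to split the work into nightly buckets. The hashing scheme is just an idea I'm throwing out, not something I've tested:

    -- Hash tables into 7 nightly buckets and only process tonight's bucket
    DECLARE @bucket int = DATEPART(weekday, GETDATE());
    SELECT t.name
    FROM sys.tables AS t
    WHERE ABS(CHECKSUM(t.name)) % 7 + 1 = @bucket;
    -- Feed this list into your existing reindex procedure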
July 8, 2010 at 5:42 am
Thanks for all the feedback. The first thing I have to do is run a test to see how Commvault reacts to switching the recovery model. I don't want it to launch a full backup as soon as my job switches to simple or bulk-logged (during reindexing). Since the normal scheduled full backup with Commvault is Sunday at 6pm and the reindexing maintenance is 7pm Saturday, I also don't want Commvault to launch a full backup when full recovery is resumed early Sunday morning.
It may be that we will need to explore doing portions of this maintenance on separate days, but since many in our company consider it to be a 24/7 database, that may be tough.
As far as update statistics goes, we've found that anything other than fullscan doesn't get the job done on tables/indexes with millions of rows. You wind up with execution plans where SQL Server is using the wrong row counts to optimize queries.
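For reference, the statement we run per table (the table name here is just a placeholder):

    -- Reads every row; expensive, but accurate on large tables
    UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN;
    -- Sampled alternative, which in our experience is not good enough
    -- on tables with millions of rows:
    -- UPDATE STATISTICS dbo.YourLargeTable WITH SAMPLE 10 PERCENT;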
July 8, 2010 at 6:09 am
From my experience (admittedly limited - just the past 8 weeks) with Commvault, it's very good at detecting when the log chain has been broken, and will take full backups automatically.
Other options you might consider are:
a) Don't perform index rebuilds/reorgs unless you've got evidence that performance degrades; replace them with full statistics updates. They will still take time, but won't take up significant log space. I recently had a client that rebuilt indexes due to a vendor recommendation. Due to I/O performance, they could not back up the logs fast enough. Analysing their performance over time showed no massive degradation, so a manual outage every 6 months to perform full index rebuilds is more than sufficient and won't break their DR plan weekly.
b) Don't reorganise. Rebuilds are much more consistent in how much log they generate, whereas a reorg is unpredictable: a reorg of a 25GB index might perform 1GB of log activity, or it might do 25GB. With a rebuild, if you're expecting 25GB, you won't be shocked.
c) This one is harder when using Commvault, but try making your index operations sequential: rebuild one index, wait until the next log backup finishes, then rebuild the next. Without Commvault, you could issue the log backup commands yourself. If reorganising, you can stop the operation every 15 minutes, wait for the log to clear out, and then re-commence. Much harder to script, but possible (a sketch follows).
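Scripted natively, option c) might look like this (index names, database name, and paths are all placeholders):

    ALTER INDEX IX_First ON dbo.BigTable REBUILD;
    -- Clear the log before starting the next rebuild
    BACKUP LOG YourDB TO DISK = N'X:\LogBackups\YourDB_1.trn';

    ALTER INDEX IX_Second ON dbo.BigTable REBUILD;
    BACKUP LOG YourDB TO DISK = N'X:\LogBackups\YourDB_2.trn';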
July 8, 2010 at 6:35 am
I'm not very happy with Commvault's inflexibility. Some of that may be due to the Systems Team's lack of experience with the product and with SQL Server terminology/concepts. Then you add the layer of interacting with SQL staff who have no access to Commvault, and it's a comedy of errors.
We use a stored procedure that checks index fragmentation before deciding whether to rebuild or reorganize indexes (or neither). Update stats with fullscan follows reindexing in the same job and is set to quit after six hours, since Sundays are technically the Systems folks' window to perform updates etc.
July 8, 2010 at 8:17 am
I would encourage you to get access to Commvault. We have a similar situation in that Systems manage all backup jobs etc. through Commvault, but we SQL guys still have access to get in and do ad-hoc restores etc. The menus are pretty straightforward once you know what to do. Surely it's worth building a bridge with Systems to include you in the fun? 🙂