March 23, 2005 at 2:12 pm
As I gear up for log shipping here, I need to deal with the one LARGE transaction log backup that occurs after reindexing and the full backup. I didn't want log shipping running every 10 minutes if it had to copy one 10GB file over the network before moving on to the next tran log backup.
After doing a one-time shrinkfile on the tran log, I used the code below, which gave me a final tran log backup of about 280KB. Some of these steps may not be necessary, but they do appear to be working.
Randy Petty
use convqa
go
-- don't allow data changes during this process
alter database convqa set single_user with rollback after 10 seconds
go
-- do full backup and tran log backup
exec master.dbo.usp_dobackup 'convqa','F:\DATABASE\BACKUP'
go
-- truncate the log so the reindex starts against an empty log
backup log convqa with truncate_only
go
-- minimally log the reindex work
alter database convqa set recovery bulk_logged
go
-- reindex all tables
exec sp_MSforeachtable @command1 = "dbcc dbreindex ( '?','',90 ) "
go
-- return to full recovery for normal log backups
alter database convqa set recovery full
go
-- TRUNCATE LOG BECAUSE WE DON'T NEED REINDEXING TRANSACTIONS
backup log convqa with truncate_only
go
-- then regular full backup and tran log backup occurs again
exec master.dbo.usp_dobackup 'convqa','F:\DATABASE\BACKUP'
go
alter database convqa set multi_user with rollback after 10 seconds
go
March 23, 2005 at 3:18 pm
Might be a good plan, if you're not doing log shipping. The problem is that as soon as you truncate the log, your transaction log backup chain is broken, and therefore log shipping is broken. In order to reinstate log shipping, you'll have to restore the full backup that you took at the end of your process on your standby server. My guess is that it would probably be less painful just to ship that 10GB log file.
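If you do end up having to re-seed, the restore on the standby boils down to something like this (the file names here are just placeholders for your own paths):
-- restore the post-maintenance full backup on the standby,
-- leaving it able to accept further log restores
restore database convqa
from disk = 'F:\DATABASE\BACKUP\convqa_full.bak'
with standby = 'F:\DATABASE\BACKUP\convqa_undo.dat', replace
go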
Another option would be to selectively reindex rather than reindexing the entire database. There are stored procedures out there that let you specify, for example, "only reindex if scan density is less than 80". I have written one that takes into account the scan density and the size of the table to produce a "reindex factor". Then I can specify a minimum scan density and a percentage of the database to reindex with each run. Something like this would probably help reduce the log size while still keeping your indexes in good shape.
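The skeleton of that kind of procedure is roughly the following -- an untested sketch, with the column list taken from the DBCC SHOWCONTIG WITH TABLERESULTS documentation, so verify it against your build before relying on it:
-- reindex only tables whose scan density has fallen below 80
create table #frag (
ObjectName sysname, ObjectId int, IndexName sysname null, IndexId int,
[Level] int, Pages int, [Rows] int,
MinimumRecordSize int, MaximumRecordSize int, AverageRecordSize int,
ForwardedRecords int, Extents int, ExtentSwitches int,
AverageFreeBytes float, AveragePageDensity float, ScanDensity float,
BestCount int, ActualCount int,
LogicalFragmentation float, ExtentFragmentation float )
insert #frag exec ('dbcc showcontig with tableresults')
declare @tbl sysname
declare frag_cur cursor for
select distinct ObjectName from #frag where ScanDensity < 80
open frag_cur
fetch next from frag_cur into @tbl
while @@fetch_status = 0
begin
-- rebuild all indexes on the fragmented table at fillfactor 90
exec ('dbcc dbreindex ( ''' + @tbl + ''', '''', 90 )')
fetch next from frag_cur into @tbl
end
close frag_cur
deallocate frag_cur
drop table #frag
go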
Steve
March 23, 2005 at 6:25 pm
Thanks, I'll check out the "selective reindex" approach. I'm currently running a test, copying a transaction log backup of about 10GB from our SAN to another SQL Server -- 18 minutes and counting. Does this mean the log shipping interval can be no smaller than this copy time?
Randy
March 24, 2005 at 4:14 am
Are you using MS log shipping, or a home-grown variation?
If it is home-grown, here's a thought for you: consider compressing the log file before copying, and then decompressing afterwards.
I have been able to write SPs to compress -- basically xp_cmdshelling a WinRAR command string -- so writing an uncompress shouldn't be impossible.
Given the impressive decrease in size (I've seen 20GB backups reduced to 2GB compressed), this *might* be feasible. HOWEVER:
(a) it introduces another point of failure (CRC or some other compression error)
(b) it requires an external product (note that in my experience you'll need WinZip 9 for 2GB+ files, or else WinRAR (not sure of the version I used)), and these would probably need to be on both target and source servers...
(c) the duration of the apply is increased by compression and decompression time, and compression activity = resource usage on the "source" server
... more thoughts?
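To give an idea, the compress step is just something like this (the rar.exe path and file names are made up for the example):
-- compress a log backup with winrar before copying it across
declare @cmd varchar(500)
set @cmd = '"C:\Program Files\WinRAR\rar.exe" a -ep '
+ 'F:\DATABASE\BACKUP\convqa_log.rar '
+ 'F:\DATABASE\BACKUP\convqa_log.trn'
exec master.dbo.xp_cmdshell @cmd
go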
March 24, 2005 at 6:48 am
I'm planning to use the sql 2000 Enterprise DB maint plan "wizard" to get log shipping started. Then I'll look into the advantages of doing my own scripting. Your suggestion is interesting, but I'd like to avoid the compress/decompress if possible.
I found stored procedures that do either dbreindex or indexdefrag based on a certain level of fragmentation, but I'd like to find Steve's, which takes table size into account as well.
Setting the database to bulk_logged recovery during reindexing keeps the live tran log small, but the next tran log backup still picks up the data extents changed by the minimally logged operations, so the shipped file ends up just as large -- that isn't really the answer.
I'm not sure what happens if log shipping is set for a 15 minute interval and it takes 30 minutes for one of the tran log backups to copy to the standby server -- planning to test that.
March 24, 2005 at 8:46 am
rpetty,
I can't answer with regards to Microsoft's canned log shipping. What I can tell you is that I am using "home-grown" log shipping. Log backups occur every 15 minutes, and during heavy use times, the logs can grow quite large. During these times, it can take up to a couple hours to ship one log over the network, and obviously, log shipping gets behind. In my case, these heavy use times generally only last about 30 minutes during the day, so at most 3 large logs, and then it drops back down to a couple hundred kilobytes. So, I may be behind for several hours each day, but it eventually gets caught up.
As far as my reindex procedure is concerned, I doubt you'll be able to find it out in cyberland. The procedure is so large that I don't post it in forums. However, I would be glad to send it to you if you will "private message" me.
Steve
March 24, 2005 at 8:47 am
I'm on the same page - Steve's SP sounds interesting...
I've toyed with the idea of creating a maintenance database that keeps track of statistics, last run times, table sizes, number of indexes, critical tables, etc., and a system that would use that to determine pro-actively what maintenance to do -- something like the sketch below.
But as always, faced with the swamp of URGENT, the IMPORTANT always gets shoved aside.
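Just to make the idea concrete, the core would be a tracking table along these lines (names entirely made up):
-- hypothetical per-table maintenance history for such a database
create table dbo.TableMaintHistory (
TableName sysname not null,
LastReindexed datetime null,
LastScanDensity float null,
TableSizeKB int null,
IndexCount int null,
IsCritical bit not null default 0 )
go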
March 24, 2005 at 8:56 am
Wanderer,
That's exactly what I have! My "DBA" database holds all my maintenance procedures, as well as supporting tables which control my backups, reindexes, configuration monitoring, etc. It works absolutely fantastically, and it makes it easy to implement all the "DBA" stuff on a new server! Take it a little bit at a time: create the database, and then, as time allows, migrate/create your utilities in it. You'll find it's a real time saver in the long run.
Steve
April 16, 2007 at 9:11 am
Does anyone have an idea whether log shipping will propagate the reindexing to the destination server? For example, if we're reindexing before shipping the log, will those changes be seen on the destination server, or will we have to re-run the reindexing there as well?
April 17, 2007 at 4:45 am
Reindexing actions are recorded in the tran log, so yes, the effects are seen on the standby. If it goes in the log, it happens on the other side.
If the elapsed copy time is longer than the log shipping interval, it will catch up next time, no problem, as log shipping records the last logs successfully copied and loaded.
You could amend the schedules so tran log backups are taken more frequently during the reindexing so individual log backups are smaller.
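With MS log shipping the log backup job lives in SQL Agent, so the amendment can be as simple as this (job and schedule names are placeholders for whatever yours are called):
-- run the log backup every 5 minutes instead of every 15
exec msdb.dbo.sp_update_jobschedule
@job_name = 'Log backup convqa',
@name = 'Every 15 minutes',
@freq_subday_type = 4, -- units = minutes
@freq_subday_interval = 5
go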
April 17, 2007 at 7:10 pm
Sorry, I should have been more specific. Do the logs still contain reindexing transactions if the recovery model is set to bulk-logged?
April 18, 2007 at 3:33 am
We have a similar problem with large log files being shipped -- we are contemplating compressing the files using xp_makeCAB and xp_unpackCAB. Does anyone know about these SPs? They are undocumented, and Google turns up no decent hits on them.
Cheers
The Aethyr Dragon
Cape Town
RSA
April 18, 2007 at 8:40 pm
In regards to running the "selective reindexing" script to reduce the size of the shipped transaction logs, there is a script you can run daily to reindex the most fragmented tables in the database. It does not do a complete reindex unless you tell it to, but instead reindexes a little every day over a week's time. The script is not a Microsoft-supported tool but was developed by a senior Microsoft engineer and is publicly available at http://www.momresources.org/momscripts/scdw_reindex.zip
Please read the Readme for prerequisites. The script was developed for the MOM reporting data warehouses, but can be used on any database.
The important part is the @PeriodLength=1 value in the SQL command. If you omit it and schedule a job to run the query, it divides all index maintenance tasks over a 7-day period, with as near an equal load each day as possible. If you use @PeriodLength=1, it runs the reindex job all at once.
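So a scheduled invocation would look something like this (the procedure name here is a guess from the zip name -- check the Readme for the real one):
-- hypothetical: spread index maintenance over the default 7-day period
exec dbo.scdw_reindex
go
-- hypothetical: do all the index maintenance in one run
exec dbo.scdw_reindex @PeriodLength = 1
go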