September 28, 2011 at 12:03 am
Comments posted to this topic are about the item How to Setup Log Shipping for Multiple Databases
September 28, 2011 at 4:19 am
Hi Brian,
This is quite interesting to read :-). I need to implement it on my test env before I can comment further, but it's clear a lot of effort went into achieving this. Thanks for sharing the idea and the implementation method.
EDIT:
Needless to say this has gone into my briefcase 😉
September 28, 2011 at 7:27 am
Good article. I have one comment/question. It looks like this solution can only handle databases with one data file. How would you handle a database with multiple data files?
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
September 28, 2011 at 7:42 am
Jack,
I haven't worked that out; fortunately I haven't had to deal with that issue yet. I did find an article that looks like it might provide a solution, or at least point you in the right direction.
Article
http://www.sqlbackuprestore.com/backuprestorehistorytables.htm
Thanks
September 28, 2011 at 7:45 am
Thanks jshailendra.
I have been running with this in production for about 6 months now without issue.
September 28, 2011 at 7:59 am
How's your OS memory usage on the two servers? Spawning 130 cmdshell processes on both ends seems excessive.
September 28, 2011 at 8:09 am
Hi Brian,
Just curious: if all 130 DBs reside on the same server, why did you not consider using clustering?
Thanks, Yuri
September 28, 2011 at 8:13 am
bret.lowery,
Memory and CPU usage is much better than it was when I had all 130 DBs running from separate jobs. That's what I like about this solution: only one execution is running at a time.
Thanks
September 28, 2011 at 8:16 am
Yuri55,
I am using clustering, 1 active, 1 passive. The purpose of log shipping for me is to have a secondary location where the data exists in case the cluster fails. I also use this as a location where our account managers can gain read-only access to the data.
Thanks
September 28, 2011 at 8:27 am
Makes sense.
I do have one more question, though:
your backup job runs every 15 min (on primary server) and deals with 130 DBs,
i.e. avg backup time should be around 7 sec (if my calculation is correct).
That means your solution works for small (size) DBs only.
Or am I missing something? Thanks
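Yuri's back-of-the-envelope figure checks out; a quick sketch (using the 15-minute schedule and 130 databases from the thread):

```python
# Back-of-the-envelope check: if a single serialized backup job must
# cover 130 databases inside a 15-minute window, the average time
# budget per database is roughly 7 seconds.
interval_seconds = 15 * 60   # backup job runs every 15 minutes
database_count = 130         # databases handled by the one job

avg_budget = interval_seconds / database_count
print(f"{avg_budget:.1f} seconds per database")  # → 6.9 seconds per database
```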
September 28, 2011 at 8:37 am
Yuri55,
Yes, some of my databases are on the smaller side, but the solution is still sound for databases of any size.
We are backing up transaction logs, and it is the volume of modifications made to the data that determines the size of each transaction log and thus the time it takes to generate the transaction log backup.
As you have alluded to, you need to determine your average transaction log backup time across all of your DBs to determine what interval you should use for the backup, copy, and restore jobs.
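To make that concrete, here is a minimal sketch of sizing the job interval from measured per-database log-backup durations (the database names and timings below are hypothetical, not from the article):

```python
# Hypothetical per-database transaction-log backup times in seconds,
# e.g. gathered from backup history. Because the jobs run serialized
# (one execution at a time), the interval must exceed the total.
backup_seconds = {"SalesDB": 4.2, "OrdersDB": 11.8, "AuditDB": 35.0}

total = sum(backup_seconds.values())   # total serialized backup time
headroom = 1.5                         # 50% safety margin (assumption)
min_interval = total * headroom

print(f"total serialized backup time: {total:.1f}s")
print(f"suggested minimum job interval: {min_interval:.1f}s")
```

The same measurement would apply to the copy and restore jobs on the secondary, since those also run as a single serialized execution.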
Excellent question!
Thanks
September 28, 2011 at 8:39 am
Yes, you are right: one simply needs to adjust the backup interval depending on DB size. Thanks
September 28, 2011 at 8:43 am
Yuri55,
But it's not database size that determines the interval, it's the transaction log size.
Thanks
September 28, 2011 at 8:50 am
Very interesting article, Brian. I will add this one to my briefcase, too.
September 28, 2011 at 8:52 am
Jon,
Thanks!
Viewing 15 posts - 1 through 15 (of 42 total)