August 9, 2013 at 9:44 am
We are moving an application from one hosting provider to another. The boss lives in an ideal world where there is no downtime, ever. So I'm trying to figure out the best way to keep my databases synchronized between the sites, so that when the switch is made, no data or processing functionality is lost.
Once everything is in place and working at the new site, the database synchronization can be disabled, so I'm thinking a few weeks tops, and I'm not opposed to re-initializing at the second site during the transition.
The two databases are relatively small (< 3 GB).
We're running SQL Server 2008 R2 on Windows Server 2008 R2 64-bit servers.
My first thought was to mirror them with no witness, but I'm also interested in log shipping or transactional replication as alternatives.
Does anyone have any war stories regarding this? Any recommendations?
Thanks
August 9, 2013 at 9:54 am
For such scenarios, a combination of log shipping and FTP will work.
Pramod
SQL Server DBA | MCSE SQL Server 2012/2014
in.linkedin.com/in/pramodsingla/
http://pramodsingla.wordpress.com/
August 12, 2013 at 10:05 am
There is no such thing as "no downtime, ever." The sooner your boss understands that, the more rational discussion you can have.
There will always be downtime, but you want to minimize it. If you use log shipping, you will have some delay, determined by how often you back up the logs and move them. That can affect your RPO in terms of a bit of data loss. It also affects your RTO, since you need to move clients to the new instance/database, run the new database through recovery (usually quick), and deal with any networking issues. If you are trying to protect yourself from data loss and local server loss, log shipping is great. It's simple, and more importantly, if you need to move the log backups to a different server because of some issue with your secondary site, you can do it.
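At its core, log shipping is just log backups copied across and restored; here's a minimal manual sketch (the database name SalesDB and the share path are made up for illustration):

-- On the primary: back up the log to a share the secondary site can reach.
BACKUP LOG SalesDB
    TO DISK = N'\\newsite\logship\SalesDB_1.trn';

-- On the secondary (database previously restored WITH NORECOVERY):
-- apply each log backup in sequence.
RESTORE LOG SalesDB
    FROM DISK = N'\\newsite\logship\SalesDB_1.trn'
    WITH NORECOVERY;

-- At cutover: take a tail-log backup on the primary so nothing is lost
-- (this leaves the old primary in a restoring state).
BACKUP LOG SalesDB
    TO DISK = N'\\newsite\logship\SalesDB_tail.trn'
    WITH NORECOVERY;

-- Apply the tail and bring the new site online.
RESTORE LOG SalesDB
    FROM DISK = N'\\newsite\logship\SalesDB_tail.trn'
    WITH RECOVERY;

The built-in log shipping wizard just automates the backup/copy/restore jobs (plus monitoring); the manual version above is essentially what those jobs do.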
Mirroring gets closer to no downtime, and no data loss, but it's more complex to set up, and you'd have to make sure you have a secure pipe and connection between the two instances. Your clients (if they are relatively new) can make the switch to the new server/database automatically, without networking issues.
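The moving parts look roughly like this (server names, database name, and the port are placeholders, and I'm omitting the certificate/login work you'd need between untrusted domains):

-- On each instance: create a mirroring endpoint.
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror (restored WITH NORECOVERY): point at the principal first.
ALTER DATABASE SalesDB SET PARTNER = N'TCP://oldsite.example.com:5022';

-- On the principal: point at the mirror to start the session.
ALTER DATABASE SalesDB SET PARTNER = N'TCP://newsite.example.com:5022';

-- At cutover (session synchronized, full safety): fail over from the principal.
ALTER DATABASE SalesDB SET PARTNER FAILOVER;

Clients on a reasonably recent SqlClient can add Failover Partner=newserver to their connection string and will redirect on their own after the failover.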
Replication can provide near-zero downtime if the clients know to switch to the new server, but that can be complex. You could also run into PK issues when trying to put new data onto the secondary server. It's the most complex option administratively, and the most likely to break. It's also the messiest in terms of the data merge and the failback.
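If you do look at replication, one quick sanity check is how many tables have IDENTITY keys that could collide once both sides are taking writes; something like this (works in any database):

-- List IDENTITY columns and their current values.
SELECT OBJECT_NAME(object_id) AS table_name,
       name AS column_name,
       last_value
FROM sys.identity_columns
ORDER BY table_name;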
August 12, 2013 at 11:01 am
You are so right that there's no such thing as no downtime. We will pick a time of day to minimize the impact on potential connections.
Sounds like log shipping is the easiest solution to implement. Now for the fun questions:
Do I need a dedicated/secure pipe for that to work?
Do I want a dedicated pipe? (None of the information is sensitive.)
What destination port(s) do I need to open at the 2nd site?
Thanks
August 12, 2013 at 1:16 pm
BigSam (8/12/2013)
You are so right that there's no such thing as no downtime. We will pick a time of day to minimize the impact on potential connections.
Sounds like log shipping is the easiest solution to implement. Now for the fun questions:
Do I need a dedicated/secure pipe for that to work?
Do I want a dedicated pipe? (None of the information is sensitive.)
What destination port(s) do I need to open at the 2nd site?
Thanks
1. No
2. Perhaps. It's a bandwidth/timing/latency issue
3. It's a file copy, so open whatever ports your copy method needs. You can use the SQL Server wizards, which I believe need the SMB ports (double-check me), or you can use various scripting/transfer methods like SFTP.
Note that no matter what you choose, secure it. Limit the transfers to these specific machines on these ports with explicit rules. You never know when someone will start putting sensitive information in your database, or when info you don't consider sensitive gets classified that way.
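If mirroring endpoints end up in the mix, you can confirm which TCP ports an instance is actually listening on for them (a port of 0 means it's dynamic):

-- Endpoints and their TCP ports on this instance.
SELECT name, protocol_desc, type_desc, port, state_desc
FROM sys.tcp_endpoints;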