April 14, 2004 at 6:40 am
Hello,
I have an application connected to SQL 2000 database tables; the application updates the database with data entered by users over the network. Since the data is real-time, I need to keep a backup of the database, but at the same time I need the second database server to take over if the first one goes down, and when the first one recovers/returns, both databases should be synchronised.
What is the optimal solution to this problem, and how can it be achieved?
Thank You,
Mufaddal
April 14, 2004 at 6:59 am
You are talking about mirroring the data, which is going to be a Yukon thing, not a SQL2K thing. I would go for log shipping. You have the standby node constantly applying transaction logs to keep it up to date; in a disaster you can fail over to the log ship server (after bringing the db up with recovery) and point all data instances over...the changeover takes less than 5 minutes in most cases. You would then log ship back to the other server when it becomes available again.
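As a sketch, the failover step described above is just a final log restore brought up with recovery (the database name and path here are examples, not from the thread):

```sql
-- On the standby server: apply the final transaction log dump from the
-- failed primary, then bring the database online in normal (read/write) mode.
RESTORE LOG MyAppDB
    FROM DISK = 'D:\LogShip\MyAppDB_final.trn'
    WITH RECOVERY
```

After this runs, the standby copy accepts transactions and the applications can be repointed at it.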
If you are not talking about database recoverability, but rather server reliability then you would go for a cluster. This would cause the entire SQL Server instance to fail over to the secondary hardware in the cluster in the event of a hardware problem.
For a total solution you should have both; that way you are covered for hardware problems with instant failover via clustering, and for potential data problems with log shipping.
April 14, 2004 at 7:08 am
Thanks for the reply. Can you explain what exactly you mean by log shipping and how it can be achieved?
April 14, 2004 at 7:18 am
Log shipping can be applied a couple of ways: you can create scripts for it (there are plenty on this site for that purpose), or if you are using SQL Server 2K Enterprise Edition you can set it up through the maintenance plans.
Essentially, here's how log shipping works. You have a database on Server A, and you want to make sure you don't lose data in case of a server problem, so you decide to institute log shipping.

First you have to check the recovery model of the database; for log shipping to work it has to be in Full Recovery. Then you take the previous night's backup of the database and restore it on the remote server (B) with standby.

At set intervals during the day (be it 5 minutes or 2 hours) you dump the transaction log on Server A to disk; this is essentially a file listing every transaction against that database since the last log dump. You copy this file to Server B and restore it to the database on Server B. This keeps the databases the same; the only difference between the two is the amount of time between transaction log dumps and restores.

Should you experience a problem with Server A, you do a final dump of the transaction log, which you then restore on Server B with recovery, bringing the database up into normal mode. You would then point your applications at it. Changing back to Server A is the reverse.
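The cycle above can be sketched in T-SQL. The database name, share, and file paths are made-up examples; in practice the timestamped file names would come from your scripts or the maintenance plan:

```sql
-- One-time setup on Server B: restore last night's full backup in STANDBY
-- mode, which leaves the database read-only and able to accept more logs.
RESTORE DATABASE MyAppDB
    FROM DISK = '\\ServerB\Backups\MyAppDB_full.bak'
    WITH STANDBY = 'D:\LogShip\MyAppDB_undo.dat'

-- Repeated on Server A at each interval: dump the transaction log to disk.
BACKUP LOG MyAppDB
    TO DISK = 'D:\LogShip\MyAppDB_20040414_0700.trn'

-- After copying the .trn file to Server B, apply it there, staying in
-- standby so the next log dump can still be restored.
RESTORE LOG MyAppDB
    FROM DISK = 'D:\LogShip\MyAppDB_20040414_0700.trn'
    WITH STANDBY = 'D:\LogShip\MyAppDB_undo.dat'
```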
April 14, 2004 at 7:38 am
This solution involves manual interaction; I mean a user must copy the log files and restore the database. I need a completely automated solution, i.e. what you mentioned Yukon can provide. But I want to design a system which will update both DBs at the same time, and if either one goes down, it jumps to the other; when the failed one comes back up, the data is transferred back to it as well.
Can the procedure you mentioned be automated somehow? And what if the transactions hitting the DB are real-time, I mean every second, in which case I cannot force the users to disconnect? I am looking for a truly real-time solution.
April 14, 2004 at 7:40 am
There is no truly real-time solution at this point. What are you trying to achieve, safety from a data perspective or safety from a hardware perspective?
April 14, 2004 at 7:43 am
I am trying to have safety from both sides. I have already taken care of the hardware side by clustering the two machines, but I need a solution for the data side (one which can be fully automated).
Also, can you tell me: if I have a database on SQL 2K and have taken a backup of it, and my machine crashes for some reason and I install a new SQL 2K, how do I restore the DB? Is it the same procedure as a normal restore, or is there another way?
thx for all the replies.
April 14, 2004 at 7:49 am
There is no automated process to hand over log shipping, and to be honest, if you have lost both servers within your cluster you have bigger problems than worrying about automating log shipping for recovery purposes. You could create scripts to do everything, but you would still have to run those scripts and repoint your applications at the new server.
Restoring the database on a new install is the same as on an existing install; just be aware of the potential for orphaned users unless you add the logins back into syslogins with the same SID as in the original master database.
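For the orphaned-user situation mentioned above, SQL 2000 ships a stored procedure to detect and re-map them. A minimal sketch (database and login names are examples only):

```sql
-- In the restored database: list users whose SIDs don't match any login
-- in the new server's master database.
USE MyAppDB
EXEC sp_change_users_login 'Report'

-- Re-map a specific orphaned user to the login of the same name.
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppUser'
```

This fixes the mapping after the fact; the cleaner route, as noted, is to script the logins out of the old master with their original SIDs before the crash.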
April 15, 2004 at 12:45 pm
If you want fully automated then you have to go with replication, and the application has to be set to look for the primary server cluster and, if it's not available, bounce to the secondary. The intervals / time lag are up to you; it sounds like you need two-phase-commit-level concurrency.
You could script it to log ship, move the files automatically (xp_cmdshell or ActiveX), and apply the logs to the secondary with NORECOVERY - that keeps your secondaries ready, but presents the problem of having to do a closeout log dump on the primary after it has failed (what a silly idea, it's DOWN) and applying that to the secondary with recovery. That would be trickier to script and do automatically. You can always back out of the NORECOVERY state and bring the database up with recovery if you can't get the last transaction log due to hardware failure.
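A rough sketch of the automated shipping step described above, as it might run from a SQL Agent job (all names and paths are illustrative, not from the actual scripts mentioned below):

```sql
-- Job step on the primary: dump the log, then push the file to the
-- standby's share using xp_cmdshell, as suggested above.
BACKUP LOG MyAppDB
    TO DISK = 'D:\LogShip\MyAppDB.trn' WITH INIT

EXEC master..xp_cmdshell 'copy D:\LogShip\MyAppDB.trn \\ServerB\LogShip\'

-- Matching job step on the secondary: apply the log with NORECOVERY,
-- leaving the database able to accept the next log in the chain.
RESTORE LOG MyAppDB
    FROM DISK = 'D:\LogShip\MyAppDB.trn' WITH NORECOVERY
```

Note that a database left in NORECOVERY (unlike STANDBY) is not readable between restores; which you pick depends on whether you need to query the secondary.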
Having said that, this is the hard way - it is what I do for the 245 DBs on one of my servers. I can't use replication because our business model requires my folks to be able to add and drop tables and even new DBs without my assistance (and I don't want to be doing nothing but unsubscribing and re-subscribing all day). I'll share my scripts and sp's offline, but they would need modification to work (which is why I won't post them - too much work to make them generic). Send me a PM if you want them, and give me a day to get an email back to you.
To prevent nightmarish headaches from post-failover recovery on the primary, leave the secondary databases in read-only mode at first - that'll keep you from having to recover any changes back to the primary when it comes up. If you can't do that, then you'll have to back up and restore during off hours before you can switch back to the primaries.
Thanks, and don't forget to Chuckle