Replication / Mirroring Topology

  • Hi

I need to achieve the following, but I'm not entirely sure of the best topology - could someone please advise?

I have a production server which is updated every minute, a development server, an on-site failover and an off-site failover. The on-site failover box is in case my prod server dies for whatever reason, and my off-site box is part of my business continuity planning strategy in case my building blows up. These three boxes need to be synchronous - the dev box can be a day out of date if necessary, but it would be good if it were also synced. Live data can ONLY go into one prod server at a time, i.e. the current prod server.

I have tried to get replication going, and whilst its topology supports my requirements (one publisher and multiple subscribers), it is INSANELY painful to set up, and I don't hate myself that much. I'm currently evaluating mirroring, and whilst it looks pretty good and is not giving me another ulcer to match the one replication gave me, it seems to be a two-server process only (principal and mirror), so it would not seem to fit my model.

I'm thinking: mirror the prod server to the disaster recovery (DR) site and have the on-site failover and dev boxes updated nightly from scripted backup/restores, but this obviously leaves my on-site box with out-of-date data and is also kludgy as hell.

So what do people suggest? Ultimately I want to mirror to two mirror servers, but I don't think the process supports multiple endpoints. What about failover clustering?

Any ideas would be greatly appreciated.

    Thanks

    Alastair Jones.

Just remember that if you mirror your primary to your off-site server, then on any failure of your primary the off-site server will pick up the load rather than the on-site secondary.

To keep it simple, how about mirroring to your on-site standby and log shipping at 5-minute intervals to your DR site? That saves using replication - you can also create snapshots of your mirror so that you can use it for reporting purposes without affecting your primary server.
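To illustrate the snapshot-for-reporting idea: a database snapshot on the mirror gives a static, readable copy (the mirror itself is in RESTORING state and can't be queried directly). A minimal sketch, assuming SQL 2005 Enterprise; the database name, logical file name and path are placeholders:

```sql
-- Run on the mirror server. 'SalesDB' and the paths are hypothetical;
-- the logical NAME must match the mirrored database's data file name.
CREATE DATABASE SalesDB_Snapshot_0800
ON ( NAME = SalesDB_Data,
     FILENAME = 'D:\Snapshots\SalesDB_0800.ss' )
AS SNAPSHOT OF SalesDB;
```

Reporting users then query `SalesDB_Snapshot_0800`; you would drop and recreate the snapshot on whatever refresh schedule suits the reports.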

Don't forget to allow for the use of a witness as well!

Your other option might be a failover cluster on site which mirrors to the off-site server,

which might work quite well (hopefully, while your cluster fails over, the off-site version will come into play and then revert back to the cluster when the second-node failover completes).

    MVDBA

I reindex everything over the weekend, which I think would probably mess up the transaction log shipping if I'm shipping over a WAN. What about if I do it the other way around - log ship to the local failover and mirror to the DR site?

If my local primary fails I can work from the local failover no problem, and as both devices are connected via gigabit Ethernet, the reindexing shouldn't be a problem. The mirror would obviously be down if the local primary is down.

I would then need to get the data back into the local primary from the local failover to get it mirrored over to the DR site...?

Backup/restore?

Same problems with log shipping as with mirroring - they're effectively the same mechanism (both ship log records), so your reindex will do the same thing to your WAN.

Again - if you mirror over the WAN, the client will redirect to the off-site server rather than your on-site server, negating the point of having the on-site standby.

I'm not sure what you mean by:

"If my local primary fails I can work from the local failover no problem, and as both devices are connected via gigabit Ethernet, the reindexing shouldn't be a problem. The mirror would obviously be down if the local primary is down.

I would then need to get the data back into the local primary from the local failover to get it mirrored over to the DR site...?"

Mirroring is partly controlled by the SQL client driver used in your DSN or connection string in the client app, so when your witness tells the mirror that the primary is down, it will assume the principal role and your clients will connect to it. When the original primary comes back up, it will receive all the updates from the new principal and can then be failed back. So, in answer: your mirror is NOT down if the primary is down.
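For reference, setting up the partnership described above is a handful of statements. A hedged sketch, assuming SQL 2005 with mirroring endpoints already created on the default port; all server names and the database name are placeholders:

```sql
-- Run on the MIRROR server first (point it at the principal):
ALTER DATABASE SalesDB
    SET PARTNER = 'TCP://PRODSRV.corp.local:5022';

-- Then on the PRINCIPAL (point it at the mirror):
ALTER DATABASE SalesDB
    SET PARTNER = 'TCP://ONSITESRV.corp.local:5022';

-- Finally, still on the principal, add the witness that enables
-- the automatic failover behaviour described above:
ALTER DATABASE SalesDB
    SET WITNESS = 'TCP://WITNESSSRV.corp.local:5022';
```

Client-side redirection works best when the connection string also names the failover partner, e.g. `Failover Partner=ONSITESRV` in a SQL Native Client connection string.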

    MVDBA

I've got it configured without a witness, and I am told by my developers that the system MUST run in ASYNC mode, so failover is manual. If I DON'T fail over when I need to use my local failover... er... man, I'm getting lost.
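For what it's worth, the manual failover in question differs by mode. A sketch (database name is a placeholder); note that in asynchronous (high-performance) mode the only failover available is forced service, which can lose any log not yet sent to the mirror:

```sql
-- Synchronous (high-safety) mode: clean manual failover,
-- run on the PRINCIPAL:
ALTER DATABASE SalesDB SET PARTNER FAILOVER;

-- Asynchronous (high-performance) mode: forced service,
-- run on the MIRROR, and it may lose unsent transactions:
ALTER DATABASE SalesDB SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
```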


This could all be moot anyway if reindexing is going to hammer my mirroring... I'm reindexing a 150GB database and mirroring over a 2Mb SDSL. I'm testing this tonight, but now I have the distinct feeling that this is all going to go horribly smelly.


I'm using some bandwidth-limiting software on my mirror server - a reindex of a tiny database (which sent over about 40MB of data) took 3 minutes over an equivalent 2Mb line.

The reindex of my main database creates a transaction log of 130GB.
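To put a rough number on why that's hopeless, here's a back-of-envelope calculation (my arithmetic, not from the thread), assuming the full 2Mb/s is available, which it won't be:

```sql
-- 130GB of log over a 2Mb/s line:
-- GB -> megabits = GB * 8 * 1024; divide by 2 Mb/s for seconds.
SELECT (130.0 * 8 * 1024) / 2          AS seconds_to_transfer,  -- ~532,480 s
       (130.0 * 8 * 1024) / 2 / 86400  AS days_to_transfer;     -- ~6.2 days
```

About six days of continuous, saturated transfer for one weekend's reindex.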

    hehehehehhohohohohhahahahahaha

    someone is going to have to:

• buy a leased line - a FAT one
• not reindex
• not want a DR site
• seriously change some working practices
• pay me more money to take a tape to Paris every week to restore...

Even if I go and give WanSync 10 grand for their software, I'm still going to have problems...


    hehehehehhohohohohhahahahahaha

OK - look at some other software to help you, then.

Try:

Imceda (now Quest, I think) LiteSpeed, to compress your t-log backups in log shipping,

or maybe DoubleTake.

Maybe these will cope with the problem over a 2Mb WAN.

    MVDBA

Yeah, OK - hadn't thought of that - but this will ONLY help with the t-log shipping, correct? There's nothing I can do to compress the data that's being mirrored?

No - nothing for mirroring that I know of - but DoubleTake might have some compression... not sure, as I have never used it - it's a file-level sync tool that might work over a WAN.

I had a customer that used it over a laser connection from building to building (<5Mb).

    MVDBA

  • Thanks for your help Mike


    Alastair Jones.

DoubleTake can replicate databases over a WAN. With highly transactional databases you obviously need a larger pipe. The nice part with DoubleTake is that they say it can replicate databases, and even if the replication is not able to keep up, your database will stay intact on the receiving end. NSI has a calculator that can be used if you need to work out your bandwidth requirements. They also have a tool that will calculate the amount of data that would be replicated over a period of time. We have this in place on highly transactional databases ranging from 50-500GB. Obviously, the larger the db, the larger the pipe. We were looking at an OC3 for a client with a 500GB database, but don't be discouraged by that.

DoubleTake will queue its transactions and get caught up as bandwidth becomes available. (Again, the pipe is a function of the amount of data change and how current it needs to be on both sides.)

I would say four servers: one witness (small), one primary, one secondary and one DR. Run operations from the primary and mirror to the secondary. Then use DoubleTake to replicate to the DR site from the secondary. This will alleviate CPU/memory/NIC limitations of the production db server. (Two NICs in the secondary - one for DoubleTake, one for mirroring.)

    BJ Hermsen

Have you considered using the bulk-logged recovery model during the re-index operation?

If certain criteria are met, then reindex statements can be minimally logged, which would reduce network load.
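A sketch of that approach (database and table names are placeholders). One caveat worth noting: database mirroring requires the FULL recovery model, so this helps the log-shipping leg only, not a mirrored database:

```sql
-- Switch to bulk-logged so the index rebuild is minimally logged:
ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

-- Rebuild indexes (minimally logged under bulk-logged recovery):
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Switch back and take a log backup straight away. Note: a log backup
-- spanning a bulk-logged operation includes the changed data extents
-- and cannot be restored to a point in time within that interval.
ALTER DATABASE SalesDB SET RECOVERY FULL;
```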

    Check the following page in BOL (2005)

    ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/f2a68e0f-9190-45c4-abee-1b2ebbb13817.htm

The witness server can also be configured on any third server with SQL 2005 installed. The dev box may be suitable for this, as its load is low. However, you need to be careful how you site the witness (three separate locations, or all three on one LAN, is ideal), as well as configuring client connections to the principal properly.
