Replicating loosely coupled SQL Servers

  • Can anyone suggest a method for replicating SQL Server 7.0, where there are 15+ SQL Servers at remote locations and the only communications link between them is 56k analogue modem?

    At present the servers effectively work independently, with users inserting / editing / deleting data at any of the remote locations. Latency is OK (replication once every 24 hrs). All of the servers need the same data, no matter where the updates were performed.

  • There are a lot of things to consider before attempting replication between these servers. First is the number of changes that occur each day. With a connection that slow you could create bottlenecks by establishing replication. If you know the changes would only be 10 to 20 transactions per server then you might be OK. If you are talking about thousands of transactions per server per day then I think you will run into many problems and it could become a nightmare.

    For what you have mentioned here, it sounds like you would need merge replication and you can set it to run at whatever frequency works best.

    Another thing to keep in mind is that if there are too many transactions to replicate then you may need a separate server or two to use as your distributors. If each server only has 10 or 20 changes per day then you might be OK setting up a distributor on each server.

    These are only general guidelines. I recommend you set up at least 3 or 4 test machines connecting with the same 56k analogue modem and making sure more transactions take place than would normally happen on any given day. That way you can see what happens before you set up your production machines to use replication.

    Robert Marda

    SQL Server will deliver its data any way you want it

    when you give your SQL Programmer enough developing time.

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems
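    To make the merge suggestion above concrete, here is a minimal sketch of setting up a merge publication with the standard replication stored procedures. The database, publication and table names (SalesDB, RemoteSales, Orders) are placeholders, not names from this thread; check Books Online for the full parameter lists before using.

    ```sql
    -- Sketch only: enable a database for merge publication and publish one table.
    USE master
    EXEC sp_replicationdboption
        @dbname  = 'SalesDB',
        @optname = 'merge publish',
        @value   = 'true'

    USE SalesDB
    EXEC sp_addmergepublication
        @publication = 'RemoteSales',
        @retention   = 14          -- keep merge metadata long enough to cover missed dial-ups

    EXEC sp_addmergearticle
        @publication   = 'RemoteSales',
        @article       = 'Orders',
        @source_owner  = 'dbo',
        @source_object = 'Orders'
    ```

    A generous retention period matters on an unreliable dial-up link: if a site fails to sync within the retention window, its subscription expires and a full snapshot must be reapplied.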

  • Also depends on the complexity of the transactions. Given that you have 24 hours to update I think you'll do better than 10-20 transactions per server before hitting a limit. Definitely agree with Robert that you need to set up a couple and test. I know merge is the obvious choice, but I think transactional with updating/queued subscribers might have less overhead and...to me...is easier to work with! One other issue you need to think about is how you'll handle the situation where you need to generate a new snapshot because something has gone wrong. Could take a long time to burn a snapshot over 56k.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/
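    Andy's snapshot warning is easy to quantify. A back-of-envelope estimate, assuming a hypothetical 100 MB snapshot (not a figure from this thread) and the theoretical best case of 56 kbit/s with no protocol overhead:

    ```sql
    -- Rough estimate of snapshot delivery time over a 56k modem.
    -- 100 MB is an assumed snapshot size for illustration only.
    SELECT (100.0 * 1024 * 1024)      -- snapshot size in bytes
         / (56000.0 / 8)              -- best-case line speed: 7,000 bytes/sec
         / 3600 AS est_hours          -- roughly 4.2 hours, before any overhead
    ```

    In practice a 56k analogue line rarely sustains its rated speed, so the real figure would be worse.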

  • I agree with other contributors and add the following info from real usage in a very similar scenario.

    The maximum number of (SQL 2000 merge) simple 'transactions' I have done over a 64K dial-up ISDN line is about 50,000 - it took several hours, when it took only a minute or two on the LAN. Our normal 15-minute updates over 64K (1 sec dial-up time), with a mixed bag of say 10 - 100 inserts/updates/deletes spread over 60 simply replicated tables, take about a minute. Another publication with 30 'complexly replicated' tables takes about twice as long for say 100 transactions. SQL 2000 is significantly faster/better than 7 and is worth a look if this is business critical.

    For this size of pipe, keep the distributor on the publishing server for merge, as it does little anyway.

    I always stagger the updates to reduce the possibility of locking, but have never actually come across any.

    The replication should work fine but will be slow. If you have tables that need large updates and you use the lines for other traffic I suggest you do them out of hours and be careful that your application does not do large updates without you knowing about it.

    If you are using complex replication - join filters etc. - then replication time on a slow line is significantly increased over 'straight' tables.

    Before you choose and implement your replication, have you considered in detail the usual questions re conflicts, schema changes etc.? I cannot overstate the amount of planning and testing needed in the scenario you are looking at to get it running sweetly with that small a pipe - particularly since, as Andy says, snapshot delivery time will be critical if you have to reapply. Easily overlooked until there is a problem, when you could be looking at a very major problem.
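    The staggering mentioned above can be done by giving each subscription's merge agent a different start time. A sketch assuming pull subscriptions; the server, database and publication names are placeholders, and the scheduling parameters are those of sp_addmergepullsubscription_agent:

    ```sql
    -- Run at each subscriber: daily merge, staggered start times so the
    -- publisher is not dialled by every site at once. Names are placeholders.
    EXEC sp_addmergepullsubscription
        @publisher    = 'CENTRALSRV',
        @publisher_db = 'SalesDB',
        @publication  = 'RemoteSales'

    EXEC sp_addmergepullsubscription_agent
        @publisher    = 'CENTRALSRV',
        @publisher_db = 'SalesDB',
        @publication  = 'RemoteSales',
        @frequency_type           = 4,      -- daily
        @frequency_interval       = 1,
        @active_start_time_of_day = 10000   -- this site syncs at 01:00:00;
                                            -- use e.g. 11500, 13000 elsewhere
    ```

    With a 24-hour latency requirement and 15+ sites, even fairly wide gaps between the start times still fit comfortably inside the window.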

  • Thanks for the advice so far. Is it possible to configure replication so that one of the servers pulls the data from all the remote sites then pushes the data back out to all of the remote sites? In SQL Server terminology I think this may be called central subscriber / central publisher. If this is possible is there any documentation anywhere for me to follow?

  • Maneffa, those are great comments, very nice to have some numbers to guesstimate from.

    If you use merge or transactional with updating subscribers you'll get what you need. Since all locations need all data, life is simpler: you don't need to filter the subscriptions. Let's say you make an update on subscriber A: it will push the change back to the publisher (how depends on how you're replicating, of course), then the publisher will distribute those changes to all the other subscribers.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/
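    In the topology Andy describes, the central box is the publisher (and, per the earlier advice, its own distributor), and each remote site is simply another subscriber. A sketch of registering push subscriptions at the publisher - the site and database names are placeholders:

    ```sql
    -- At the central publisher: one push subscription per remote site.
    -- Merge subscriptions are bidirectional, so updates made at SITE01 flow
    -- up to the publisher and are merged back out to the other sites.
    EXEC sp_addmergesubscription
        @publication       = 'RemoteSales',
        @subscriber        = 'SITE01',
        @subscriber_db     = 'SalesDB',
        @subscription_type = 'push',
        @sync_type         = 'automatic'
    -- Repeat for SITE02 ... SITE15.
    ```

    With push subscriptions the merge agents run at the central server, which keeps the administration (and the staggered scheduling) in one place.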

  • Thanks everyone - I'll start building a test system now.

    I am sure I will be posting here again!

  • I have a very similar dilemma. I have several sites set up to 'push' data to the subscriber. The sites are currently set up as the distributor and the publisher (same box), running SQL 7.0 and the subscriber is running SQL 2000. They push the data through a T1 connection.

    We are currently experiencing minor issues, primarily dealing with SQL 7.0 bugs. Our intention is to upgrade the remote sites as well. In the meantime, we are rebuilding our database schema. Our debate is: is it better to have 100 DBs replicate to 1 DB, or to replicate 100 DBs to 100 DBs? Our system will be replicating 1200 transactions per second. We realize that there are pros and cons to both; however, I know someone has got to be doing this out there and your experience would be greatly appreciated. Any advice, comments, recommendations or experiences are welcome.

    Christine


    Aurora

  • I currently support about 200 DBs replicating to 200 subscriber DBs on a 100 Mb network with no issues. The issue I ran into was locking in the distribution db; I finally went to a quasi-shared log reader. See my article here on the site about non-continuous replication.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/
