December 6, 2012 at 2:59 am
I'm currently testing a merge replication setup and I've discovered a problem with it. I ran delete statements on two tables on my publisher, and after running SQL Data Compare I found that some rows in both tables were not deleted. There are no errors reported in Replication Monitor. My question is, how can I troubleshoot further what went wrong here?
December 6, 2012 at 4:56 pm
Test env, right? So it should be easier to isolate. Delete on the pub, then fire the merge agent while Profiler is running on the subscriber.
Hth,
Chris
mssqlconsulting.com
Chris Becker bcsdata.net
December 7, 2012 at 7:25 am
Chris,
Thanks for the reply. I can do what you suggest and everything will work fine. My problem is I deleted over 20,000 records from two tables in the publisher. 6 records were not deleted in one table and 18 in the other. Any subsequent transactions have been fine. I'm trying to find out why these 24 records were not deleted as they should have been.
It was a pretty simple test - I loaded a bunch of data on my publisher, watched it replicate in the monitor and ran the Redgate compare tool to check that it worked. Everything was fine at this point. I then deleted the test data from the publisher, ran the compare tool again and it showed these 24 records still present in the subscriber.
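One way to narrow this down is to check whether the publisher ever recorded the deletes in its merge metadata. A hedged sketch, run at the publisher database - MSmerge_tombstone and sysmergearticles are the standard merge metadata tables in SQL 2005, but the table names in the filter are placeholders for your two articles:

```sql
-- If a missing row has no tombstone here, the delete was never tracked
-- by merge at the publisher; if a tombstone exists, the problem is on
-- the delivery side and Profiler on the subscriber is the next step.
SELECT  a.name AS article,
        t.rowguid,
        t.generation
FROM    dbo.MSmerge_tombstone AS t
JOIN    dbo.sysmergearticles  AS a
        ON a.nickname = t.tablenick
WHERE   a.name IN (N'TableOne', N'TableTwo');  -- placeholders: your two articles
```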
I should point out also that both the publisher and the subscriber were connected to the network during this test. The subscriber, however, is in a different site, over 100 miles away. I'm beginning to wonder whether something happened on the network - dropped packets, maybe?
Either way, I can't really conclude that merge replication is a reliable approach for my solution after this test.
December 7, 2012 at 12:29 pm
Foreign keys can cause issues because DML can be delivered out of order, i.e. a parent deleted before its children. Make sure you have NOT FOR REPLICATION on your FK constraints; we remove them altogether.
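For example, marking a constraint NOT FOR REPLICATION looks like this - a minimal sketch with illustrative table and constraint names:

```sql
-- Hypothetical parent/child tables; names are placeholders.
-- NOT FOR REPLICATION means the replication agents skip this check,
-- so the merge agent can deliver parent and child DML in any order.
ALTER TABLE dbo.OrderDetail
    ADD CONSTRAINT FK_OrderDetail_Orders
    FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID)
    NOT FOR REPLICATION;
```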
Also, 20k deletes is more batch processing; merge over a WAN will have issues with such large DML requests - that workload is better suited to transactional. So run your tests again and split up the DML. You'll need to communicate this to your developers too: merge works well in OLTP, badly in bulk-load environments.
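Splitting the DML up can be as simple as a loop of small batches, which keeps each merge generation manageable. A sketch, assuming an illustrative table name and filter (DELETE TOP is available from SQL 2005 on):

```sql
-- Delete in batches of 1,000 instead of one 20,000-row statement.
-- Table name, filter, and batch size are placeholders.
DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (1000) FROM dbo.TestData
    WHERE LoadDate < '20121201';

    SET @rows = @@ROWCOUNT;  -- loop ends when a batch deletes nothing
END
```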
With that being said, merge is stable and reliable, execution is solid, and you get a nice GUI and options for conflict resolution. The only negative is an additional rowguid column for each article.
Peer-to-peer is good if you need active/active; however it's expensive, requiring Enterprise Edition. It's basically bi-directional transactional replication - fast and reliable, and it doesn't add additional schema. And I believe a conflict viewer feature was added in 2012.
Chris Becker bcsdata.net
December 10, 2012 at 2:15 am
Chris,
Some great info there, thank you. The 20k deletes was just to remove the test data; it's not something that would occur regularly in the live environment. The normal workload is a steady stream of inserts and a small amount of updates. Deletes will be rare.
Peer-to-peer sounds like it would be the best fit for what I need but unfortunately I'm stuck with 2005 Standard for this project. The basic requirement is to have a secondary copy of the database that is in a ready state so that the application stays up if the WAN link fails over to the DR site.
If I create the FK constraints with NOT FOR REPLICATION, should I create them manually at the subscriber? Or does that defeat the purpose?
December 10, 2012 at 5:37 am
Create them at both pub and sub. I personally do not like replication as a DR tool, especially merge, as it is slow over a WAN link and not scalable. Also, you are only protected down to tables and data.
I prefer log shipping: you get a perfect copy of the entire db. Latency is a bit higher, but when you fail over, just bump up your identities to leave room for data still to be recovered.
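Bumping the identities on failover is a one-liner per table. A hedged sketch - the table name and new seed are assumptions; in practice you would pick a seed comfortably above the last value known at the primary:

```sql
-- After failing over to the log-shipped copy, reseed each identity
-- well above the primary's last value so rows recovered later
-- (e.g. from a tail-log backup) don't collide with new inserts.
DBCC CHECKIDENT ('dbo.Orders', RESEED, 1050000);  -- placeholder table/seed
```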
If you are sticking with replication, transactional is not a bad idea: you don't have to add an extra column for every article you want at the DR site, and latency and performance are great over a WAN. When you fail over, just set up transactional pubs to replicate back to your primary site. But again, it's only tables and data - use Red Gate SQL Compare to keep everything else up to date, whether you use merge or transactional.
Good luck; at least you are setting up something remote, which most don't even think about...
Chris Becker bcsdata.net
December 11, 2012 at 9:54 am
I actually started with transactional but then I got worried about identity ranges so I switched to merge instead to utilize the automatic identity range management. Log shipping is more familiar territory for me, but the big requirement on this project is that the application can switch to the DR database without any manual intervention.
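For reference, the automatic identity range management that pulled me towards merge is switched on per article. A minimal sketch, assuming illustrative publication, table, and range values:

```sql
-- Enable automatic identity range management for a merge article.
-- Publication/table names and range sizes are placeholders.
EXEC sp_addmergearticle
    @publication                   = N'MyMergePub',
    @article                       = N'Orders',
    @source_owner                  = N'dbo',
    @source_object                 = N'Orders',
    @identityrangemanagementoption = N'auto',
    @pub_identity_range            = 100000,  -- range kept by the publisher
    @identity_range                = 10000,   -- range handed to each subscriber
    @threshold                     = 80;      -- % used before a new range is allocated
```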
I've got a bit more time for experimentation so I might try some other options. Thanks very much for your comments Chris.