Deleting commands/transactions for replication

  • Hi,

    I have an issue with one of our replications: it complains that a delete it is trying to apply cannot be completed because the row is not there at the subscriber. I go through the process of finding the xact_seqno and removing the command from the distributor, but there seem to be a lot of these errors for this particular article. I tried to reinitialize just this one table, but the reinitialization sits at the end of the queue; replication wants to work through all of these deletes first. How should I handle this? I'm only intermediate when it comes to replication. Is it possible to delete all the commands/transactions relating to a particular article? Thanks in advance.
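
    For reference, this is roughly the cleanup I run each time, in the distribution database (the sequence number below is a placeholder, and deleting from MSrepl_commands directly is unsupported, so I back up the distribution database first):

        -- Inspect the pending command behind the failing xact_seqno
        EXEC sp_browsereplcmds
            @xact_seqno_start = N'0x0004A1B2000000F50004',
            @xact_seqno_end   = N'0x0004A1B2000000F50004';

        -- Remove the offending command (unsupported, but it unblocks the agent)
        DELETE FROM MSrepl_commands
        WHERE xact_seqno = 0x0004A1B2000000F50004;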

    Until recently I was responsible for the transactional replication of an 86 GB database that contained 580 articles spread over about 20 publications. The connection bandwidth was 10 Mbit per second, so it was no easy task.

    I soon learned that out-of-the-box automatic replication was useless for this scenario; even re-initialising one publication could take more than a day. Because of this I started dropping subscriptions, using tablediff to generate the scripts to bring the tables back into synchronisation, running the scripts on the subscriber, and then re-creating the subscription (a sketch of the cycle is at the end of this post).

    I maintained a spreadsheet that retrieved all of the table names etc. from the source database, and I used Excel formulas to generate the tablediff commands so I could just copy and paste them into the cmd window.

    It's more hands-on, but you get no downtime where the subscriber is unusable, and fixes can be counted in minutes rather than hours.

    This could be a way round your pain.
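
    A rough sketch of the cycle for one table, with placeholder server, database, and table names (the tablediff path and switches may differ by SQL Server version):

        -- On the publisher database: drop the subscription for just this article
        EXEC sp_dropsubscription
            @publication = N'MyPublication',
            @article     = N'MyTable',
            @subscriber  = N'SUBSERVER';

        -- From a cmd window, generate a T-SQL fix script with tablediff:
        --   "C:\Program Files\Microsoft SQL Server\130\COM\tablediff.exe"
        --     -sourceserver PUBSERVER -sourcedatabase PubDb -sourcetable MyTable
        --     -destinationserver SUBSERVER -destinationdatabase SubDb
        --     -destinationtable MyTable -f C:\fixes\MyTable_fix.sql
        -- Run the generated script on the subscriber, then:

        -- Re-add the subscription without taking a new snapshot
        EXEC sp_addsubscription
            @publication    = N'MyPublication',
            @article        = N'MyTable',
            @subscriber     = N'SUBSERVER',
            @destination_db = N'SubDb',
            @sync_type      = N'replication support only';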

  • tim.ffitch 25252 (8/8/2016)



    That would make a great article to help others here, if you're willing to write it.

  • Hello:

    If you're using transactional replication, replicated objects are supposed to be "read only" on the subscriber side; if they aren't, you need to rethink your approach.

    I think you should first try to reinitialize the entire subscription. If that is not possible, you could temporarily change your Distribution Agent profile from "Default agent profile" to "Continue on data consistency errors" (see the sketch below).
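
    A minimal sketch of how to look at the available profiles, run in the distribution database; assigning the profile to the agent is then done in Replication Monitor under Agent Profiles, followed by an agent restart. That profile effectively sets -SkipErrors 2601:2627:20598, so if you only want to skip the "row was not found" error you could instead add -SkipErrors 20598 to the Distribution Agent job step.

        -- Distribution Agent profiles (agent_type = 3)
        SELECT profile_id, profile_name, description
        FROM MSagent_profiles
        WHERE agent_type = 3;

        -- Parameters set by a given profile (the profile_id is a placeholder)
        EXEC sp_help_agent_parameter @profile_id = 6;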

  • If you are using replication stored procedures to apply changes at the subscriber, you can temporarily modify the delete stored procedure so that it does not raise an error when a record is not found, as in the sketch below.
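
    A minimal sketch of the change, assuming the default auto-generated procedure for a hypothetical article dbo.Orders with a single integer key (the procedure, table, and column names will differ in your database):

        -- The generated delete proc looks roughly like this; commenting out
        -- sp_MSreplraiserror lets deletes of missing rows pass silently.
        -- Restore the original procedure once the queue has drained.
        ALTER PROCEDURE dbo.sp_MSdel_dboOrders
            @pkc1 int
        AS
        BEGIN
            DELETE dbo.Orders
            WHERE OrderID = @pkc1;
            IF @@ROWCOUNT = 0
            BEGIN
                -- EXEC sp_MSreplraiserror 20598
                RETURN 0;
            END
        END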

  • Steve Jones - SSC Editor (8/8/2016)


    That would make a great article to help others here, if you're willing to write it.

    Sadly I am no longer working for that company, and in my current role we have no need for replication. Being a foster carer as well, I have no time outside of work to write such an article. I did struggle with replication myself; useful sources of information were not easy to find, and it does seem to be one area where you have to read 50 pages of garbage to find one buried golden nugget.

    I realise my solution may not be the best, but it is proven to work, and you can fix a single table without having to re-initialise an entire subscription.

    On the subject of the subscriber database being read-only, raised by another poster: we had a need for it not to be. As part of the load process, tables were rebuilt at the subscriber after each load to better suit the needs of the users, which meant we didn't have to replicate even more data. Additional tables stored data loaded directly from other sources, and the BI team also needed to create stored procedures for their reports etc. that were not needed in the source db.

    In fact, the source db was only needed to enable the replication of the data from a 3rd-party database system. We ran two databases in tandem, with db1 being wiped and re-loaded every night and db2 being used to detect changes so we could build up the transactions to replicate. This was all necessary because of the 10 Mbit connection.
