distribution applies same transaction bundle repeatedly

  • Here’s a summary of our conundrum: we have a publication with 187 table-based articles. The publication is initialized from a backup, using the 'replication support only' option in the call to sp_addsubscription. After we start the distribution and log reader agents and begin applying transactions against the publisher, everything runs correctly for a while. Then a batch of commands is successfully moved to the distribution database and replicated to the subscriber, and the distribution agent decides to apply the same batch again - same XACT_SEQNO.
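
    To confirm the repeat, the distribution database can be inspected directly. The following is only a diagnostic sketch (not part of our publication script); the xact_seqno value is a placeholder you would take from the distribution agent history or its error message.

    -- Sketch: see what the distribution agent reports having delivered, then
    -- dump the commands stored for one transaction sequence number.
    -- Run in the distribution database; the xact_seqno below is a placeholder.
    SELECT TOP (20) agent_id, [time], xact_seqno,
           delivered_transactions, delivered_commands, comments
    FROM dbo.MSdistribution_history
    ORDER BY [time] DESC;

    EXEC sp_browsereplcmds
          @xact_seqno_start = '0x0000001A000000F0000C'
        , @xact_seqno_end   = '0x0000001A000000F0000C';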

    Because of this we see primary key violations if the commands are inserts, and "row not found at subscriber" errors if they are deletes. This happens on inserts into tables with an identity primary key as well as tables with unique indexes; the identity property has been removed at the subscriber, and the call to sp_addarticle specifies an identity range management option of 'manual'. I mention the identity column to point out that there is no way we could accidentally create duplicate keys on the publisher side without doing a SET IDENTITY_INSERT; the behavior shows up on tables with and without identity primary keys.
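
    As a sanity check on the identity point, here is a sketch of the query we run at the subscriber (nothing in it is specific to our schema) to confirm the identity property really is gone:

    -- Sketch: with @identityrangemanagementoption = N'manual', range handling
    -- is up to us, and the identity property should already have been removed
    -- at the subscriber.  This lists anything that still has it.
    SELECT OBJECT_SCHEMA_NAME(object_id) AS [schema],
           OBJECT_NAME(object_id)        AS [table],
           name                          AS [column]
    FROM sys.identity_columns
    ORDER BY [schema], [table];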

    Turning on skip errors keeps these from blocking replication, but during our last run we were seeing these errors at the subscriber at a rate of roughly five transactions per second with skip turned on. Has anyone encountered this sort of behavior? We want to know how to prevent the situation from occurring, or barring that, a workaround that doesn’t compromise our transaction log.
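
    For context, "skip errors" here means the Distribution Agent's -SkipErrors parameter (the built-in "Continue on data consistency errors" profile does the same thing). Roughly what the agent job step looks like with it turned on - server and database names are placeholders; 2601 and 2627 are the duplicate-key errors and 20598 is "row was not found at the subscriber":

    distrib.exe -Publisher [PUBSRV] -PublisherDB [PubDb]
                -Subscriber [SUBSRV] -SubscriberDB [SubDb]
                -Distributor [DISTSRV] -SkipErrors 2601:2627:20598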

    Here are prototypes of our calls to create the publication and its articles (and supporting filters), in the order they are executed:

    EXECUTE sp_addpublication
          @publication = @PubName
        , @description = @PubDescription
        , @retention = 0
        , @status = 'active'
        , @independent_agent = 'true'
        , @allow_pull = N'true'
        , @replicate_ddl = 0
        , @allow_initialize_from_backup = N'true';

    EXECUTE sp_addarticle
          @publication = @PubName
        , @article = @ArticleName
        , @description = @SrcTableDescription
        , @source_owner = @SrcTableOwner
        , @source_object = @SrcTableName
        , @vertical_partition = @VerticalPartitionTF
        , @pre_creation_cmd = N'none'
        , @identityrangemanagementoption = N'manual'
        , @destination_table = @DestTableName
        , @destination_owner = @DestTableOwner
        , @del_cmd = @DeleteCommand;

    EXECUTE sp_articlefilter
          @publication = @PubName
        , @article = @ArticleName
        , @filter_clause = @HorizontalFilterClause
        , @filter_name = @HorizontalFilterName;

    EXECUTE sp_articleview
          @publication = @PubName
        , @article = @ArticleName
        , @filter_clause = @FilterWhereClause;

    EXECUTE sp_addsubscription
          @publication = @PubName
        , @subscriber = @SubscribingServer
        , @destination_db = @DestinationDb
        , @subscription_type = N'push'
        , @sync_type = N'replication support only';

    - and yes, the two databases are in sync at the start (via detach / file copy =)

    Thanks in advance for any ideas.

  • Follow-up:

    If you ever face this issue: for us it had a rather cryptic cause.

    If you use @del_cmd = N'NONE', replication does not create the delete stored procedures at the subscriber.

    If you update a column that is part of a unique index, replication converts the update into a delete/insert pair.

    If the distribution agent invokes a procedure that is missing at the subscriber, it doesn't complain.

    So the delete/insert pair ends up looking like an insert of a row that is already there - duplicate key.

    Why doesn't Microsoft mention that you can't block deletes without potentially breaking updates? We had an open service ticket, and the support engineer said he had a number of open cases with similar symptoms but no solutions ... it would have been nice if the distribution agent issued warnings about absent stored procedures at the subscriber (a quick check for them is sketched below).
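
    For anyone hitting the same wall, a quick way to see the gap (a sketch; run it in the subscriber database, and note that the sp_MSins_/sp_MSupd_/sp_MSdel_ patterns assume the default auto-generated procedure naming):

    -- Sketch: list the replication procedures that exist at the subscriber.
    -- With @del_cmd = N'NONE' the sp_MSdel_* procedure for an article is never
    -- created, so the delete half of a delete/insert pair has nothing to call.
    SELECT name
    FROM sys.procedures
    WHERE name LIKE 'sp[_]MSins[_]%'
       OR name LIKE 'sp[_]MSupd[_]%'
       OR name LIKE 'sp[_]MSdel[_]%'
    ORDER BY name;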
