October 24, 2013 at 2:19 am
We recently had an issue where a (transactional) replicated table was replicating data as expected.
Yet about 30 rows in the source table never arrived at the destination table, even though rows created after those 30 replicated fine.
We have pretty much confirmed that users did not delete those rows.
Unfortunately we had to resolve the issue quickly, so we blew away and recreated the subscription, which means a lot of evidence is probably gone from the crime scene.
We can't figure out what could cause 30 rows not to be replicated while leaving replication operational.
Anyone got any ideas?
October 25, 2013 at 4:46 pm
One option, if you still have the transaction log backups from the subscriber, is a detailed analysis of them using a third-party log reader or the undocumented function fn_dump_dblog. Check Paul Randal's blog for details on this function and the warning on its usage. What you describe just should not happen, but the transaction logs should give you the material for a forensic investigation, and you should be able to show what happened all the way from the publisher through to the subscriber.
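As a rough sketch of that kind of log analysis (the backup path and table name below are hypothetical, and the operation filter assumes you are hunting for unexpected deletes): fn_dump_dblog takes 68 parameters, of which only the first five are normally supplied; the rest are left as DEFAULT. It is undocumented and unsupported, so read Paul Randal's warnings before running it anywhere near production.

```sql
-- Read a subscriber log backup without restoring it (undocumented, unsupported).
SELECT [Current LSN], Operation, Context, [Transaction ID], AllocUnitName
FROM fn_dump_dblog(
        NULL, NULL, N'DISK', 1,
        N'D:\Backups\SubscriberDB_log.trn',  -- hypothetical log backup path
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT,
        DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT, DEFAULT)
WHERE Operation = N'LOP_DELETE_ROWS'                  -- deletes only
  AND AllocUnitName LIKE N'dbo.MyReplicatedTable%';   -- hypothetical table name
```

If the rows were deleted at the subscriber, deletes will show up here; if nothing appears, that points back toward the distribution chain at the publisher side instead.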
The table size may have precluded this, but did you consider tablediff? It could have generated the T-SQL to sync the table at the subscriber, which may have been quicker than re-initialising.
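For reference, tablediff is a command-line utility that ships with SQL Server (under the instance's COM folder), and the -f switch writes a T-SQL fix script you can review before applying at the subscriber. Server, database, and table names below are placeholders:

```
tablediff.exe ^
  -sourceserver PUBSRV -sourcedatabase PubDB ^
  -sourceschema dbo -sourcetable Orders ^
  -destinationserver SUBSRV -destinationdatabase SubDB ^
  -destinationschema dbo -destinationtable Orders ^
  -f C:\temp\fix_Orders.sql
```

On a large table this row-by-row comparison can be slow, so it is worth scoping it to a test first, but for ~30 missing rows the generated script would likely have been far cheaper than a full reinitialisation.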