April 3, 2008 at 12:09 am
Comments posted to this topic are about the item Transactions in Transactional Replication
Paul Ibison
Paul.Ibison@replicationanswers.com
April 3, 2008 at 7:27 am
Great article, thanks!
April 3, 2008 at 9:11 am
Another reason for being very careful with the MaxCmdsInTran parameter is that it can cause SQL Server to fail with error 9003. Basically, SQL Server gets its LSNs in a mess. You can recover from it without removing replication and putting it back on, but only by deleting rows from msrepl_transactions and inserting what SQL Server "expects" to find there.
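For anyone who hits this, here is a minimal sketch of the kind of inspection involved, assuming a distribution database named distribution; the actual repair is too environment-specific to script safely here:

-- Inspect the highest transaction sequence number (LSN) the Distributor
-- has recorded per publisher database; this is what SQL Server "expects"
-- to line up with the publisher's transaction log.
USE distribution;

SELECT publisher_database_id,
       MAX(xact_seqno) AS max_xact_seqno
FROM   dbo.msrepl_transactions
GROUP  BY publisher_database_id;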
April 3, 2008 at 1:50 pm
We are considering setting up transactional replication and have some questions you could possibly help us with.
When running continuously, how often does the Log Reader Agent update? Does it start as soon as the last transaction is completed? If you don't set a scheduled time, does that mean you are in continuous mode?
We plan to have the subscriber as read-only. Is there a way to guarantee this beyond setting role/user permission restrictions?
Eventually we want to break replication and use the subscriber as the active database. What would be the best way to do this and have the most up-to-date information on the subscriber at the time of separation?
April 4, 2008 at 1:49 am
You are correct. When setting up replication you have the choice to run the agents either on a schedule or continuously. Which you choose depends on a number of factors. We always run in continuous mode because we replicate very large volumes of data, and running on a schedule would introduce latency which we don't want. Prior to SQL Server 2005, if an agent failed for some reason, e.g. network loss, that was it: it was stopped. Therefore, I had to develop a complete monitoring system to look for any failed agents and restart them. SQL Server 2005 retries, though I still need my monitoring code. So, if you can live with a bit of latency, the scheduled run option may be a better bet for you.
In continuous mode, prior to SQL Server 2005, the Log Reader Agent scanned the log file every 10 seconds by default. In SQL Server 2005 this has changed to every 5 seconds. You can override this by creating your own profile and changing the value of the PollingInterval parameter. We leave it as it is.
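If you did want a different interval, here is a minimal sketch of creating a custom Log Reader Agent profile, run at the Distributor; the profile name and the 30-second value are purely illustrative:

DECLARE @profile_id INT;

-- Create a custom profile for the Log Reader Agent (agent_type 2).
EXEC sp_add_agent_profile
     @profile_id   = @profile_id OUTPUT,
     @profile_name = N'LogReader 30s polling',  -- hypothetical name
     @agent_type   = 2,
     @profile_type = 0,                         -- custom profile
     @description  = N'Polls the log every 30 seconds';

-- Override the polling interval in the new profile.
EXEC sp_add_agent_parameter
     @profile_id      = @profile_id,
     @parameter_name  = N'-PollingInterval',
     @parameter_value = N'30';

You would then assign the new profile to the agent in Replication Monitor or Enterprise Manager.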
On to your read-only subscriber question. We have a SQL Server login specifically for the replication agents. The login requires public access to master (as of SQL Server 2000), and then we give it db_owner access in the publisher and subscriber databases. Then, as you say, user permissions can be set to ensure read-only access.
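As a minimal sketch of that permissions split on the subscriber (all names here are placeholders, not ours):

USE SubscriberDB;  -- hypothetical subscriber database

-- The replication agent login keeps full access to apply commands.
EXEC sp_addrolemember N'db_owner', N'repl_agent_login';

-- Application logins may read but never write.
EXEC sp_addrolemember N'db_datareader',     N'app_reader';
EXEC sp_addrolemember N'db_denydatawriter', N'app_reader';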
There's a lot around your last question so it's difficult to give more than a bit of a woolly answer. To ensure the subscriber is up to date you would have to stop writing at the publisher and make sure all transactions have replicated. The Replication Monitor (SQL Server 2005) or Enterprise Manager can be used to determine this. Slight gripe now: SQL Server 2005 Replication Monitor is a major step backwards. It's awful, though I seem to be alone in this judgement from the articles I've read. Anyway, once all the transactions have replicated, just drop the subscription(s).
Presumably, whatever was updating the publisher will now be writing to the new active database to keep it up to date.
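A minimal sketch of that check on SQL Server 2005, run at the Distributor (all server, database and publication names below are placeholders):

USE distribution;

-- Returns pendingcmdcount: the number of commands not yet delivered.
EXEC sp_replmonitorsubscriptionpendingcmds
     @publisher         = N'PUBSERVER',
     @publisher_db      = N'PublisherDB',
     @publication       = N'MyPublication',
     @subscriber        = N'SUBSERVER',
     @subscriber_db     = N'SubscriberDB',
     @subscription_type = 0;  -- 0 = push, 1 = pull

-- Once pendingcmdcount reaches 0, drop the subscription at the
-- Publisher, e.g.:
-- EXEC sp_dropsubscription
--      @publication = N'MyPublication',
--      @article     = N'all',
--      @subscriber  = N'SUBSERVER';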
I waffled on a bit but I hope all that helps.
April 4, 2008 at 2:59 am
Great article
I wanted to share a problem I had with failing transactions on the subscriber. I had set the Distribution Agent's SubscriptionStreams parameter to 4 (the number of processors on the server).
BOL says
When you specify a value of 2 or greater for -SubscriptionStreams, the order in which transactions are received at the Subscriber may differ from the order in which they were made at the Publisher. If this behavior causes constraint violations during synchronization, you should use the NOT FOR REPLICATION option to disable the enforcement of constraints during synchronization. For more information, see How to: Control the Behavior of Triggers and Constraints During Synchronization (Replication Transact-SQL Programming).
I found that the order in which the transactions were arriving at the subscriber caused a deadlock, and so some of the transactions were killed at the subscriber, leaving inconsistent data. I think this is the only scenario where transactions committed on the publisher are not replicated to the subscriber in a consistent state.
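For reference, here is a minimal sketch of what the BOL suggestion looks like in practice: recreating a foreign key with NOT FOR REPLICATION (the table and constraint names are made up):

-- Drop and recreate the constraint so the replication agents bypass it
-- while normal user activity still has it enforced.
ALTER TABLE dbo.OrderDetails
    DROP CONSTRAINT FK_OrderDetails_Orders;

ALTER TABLE dbo.OrderDetails
    ADD CONSTRAINT FK_OrderDetails_Orders
    FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID)
    NOT FOR REPLICATION;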
April 4, 2008 at 8:48 am
mdowns (4/4/2008)
Prior to SQL Server 2005, if an agent failed for some reason, e.g. network loss, that was it. It was stopped. Therefore, I have had to develop a complete monitoring system to look for any failed agents and restart them. SQL Server 2005 retries, though I still need my monitoring code.
Instead of the "complete monitoring system", you could simply add another schedule to the existing job that runs every minute. That way the agent would restart automatically if it fails.
The continuous/non-continuous setting is controlled simply by adding/removing the -Continuous flag on the agent, and can be done regardless of the scheduling.
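A minimal sketch of that extra schedule, using the msdb job procedures and a placeholder agent job name:

USE msdb;

-- Add a second schedule that fires every minute, all day; SQL Server
-- Agent will then restart the replication agent job if it has stopped.
EXEC sp_add_jobschedule
     @job_name             = N'PUBSERVER-PublisherDB-MyPub-SUBSERVER-1',  -- hypothetical
     @name                 = N'Retry every minute',
     @freq_type            = 4,   -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,   -- units of minutes
     @freq_subday_interval = 1;   -- every 1 minute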
April 4, 2008 at 9:04 am
meird
We did consider such things, but with 439 Log Reader Agents and 2,445 Distribution Agents, and given that we originally delivered this on SQL Server 7, we decided against it. SQL Server 7 was nowhere near as flexible as 2000/2005, and all too often upgrades to our software required subscriptions and publications to be dropped and re-created, which would have meant putting back the job step you describe. Too much of an administrative overhead back then.
April 7, 2008 at 4:18 am
Great article! Demystifies some of the worries that I've had about replication.