Transactional Replication -- Large Updates

  • I have an application that updates many rows (> 100,000) as a single transaction.

    This is taking a long time to post to the subscriber.  The main reason appears to be that a lock is being taken on each row updated, and additional locks are being taken on many of the nonclustered indexes.  This leaves the lock manager dealing with more than 1 million locks.
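    For reference, this is roughly how I've been counting the locks while the Distribution Agent is applying the update (a sketch for SQL Server 2000 using master..syslockinfo; on later versions it would be sys.dm_tran_locks, and the object ids would need resolving in the subscriber database):

        -- Rough lock census, grouped by object and lock type
        SELECT  rsc_dbid,            -- database id of the locked resource
                rsc_objid,           -- object id (resolve with OBJECT_NAME() in that database)
                rsc_type,            -- resource type code (table / page / key / row etc.)
                req_mode,            -- lock mode code
                COUNT(*) AS lock_count
        FROM    master..syslockinfo
        GROUP BY rsc_dbid, rsc_objid, rsc_type, req_mode
        ORDER BY lock_count DESC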

    Assuming that the application cannot be modified, is there anything I can do to make the subscriber update run quicker?

      Thanks in Advance,

               Rick

  • Is it a row-by-row update using a cursor?

    If not, why lock each row rather than the whole table?

    Is it a stored procedure updating the table, or a query coming from the application?

    If "the application cannot be modified" and the update statement is part of the application, then there is nothing you can do about the statement itself.

    The best thing you can do is drop the indexes before the update and recreate them after it's finished. But that has to be part of the update process, and again we have "application cannot be modified".
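    Just to illustrate the idea, a minimal sketch (table and index names are placeholders; the real definitions would have to match your nonclustered indexes):

        -- Drop the nonclustered indexes so the big update only locks the base table
        DROP INDEX BigTable.IX_BigTable_Col1
        DROP INDEX BigTable.IX_BigTable_Col2

        -- ... run the large UPDATE here ...

        -- Rebuild the indexes once the update has committed
        CREATE NONCLUSTERED INDEX IX_BigTable_Col1 ON BigTable (Col1)
        CREATE NONCLUSTERED INDEX IX_BigTable_Col2 ON BigTable (Col2)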

     

    _____________
    Code for TallyGenerator

  • The application does an update by joining the 100,000 rows from the base table to a table in a staging database.  On the publisher (where the original update occurs) it appears to be smart enough to escalate to a table lock.

    On the subscriber I am seeing many ROW and PAGE locks, but they never escalate to a table lock.  I've considered dropping the indexes, but this would be somewhat disruptive to the ad-hoc query users (they can still access the data using the (nolock) hint).

    I am considering trying the -MaxCmdsInTran setting to break the large updates apart (has anybody had any experience with this feature?).  But I'm not sure how I would restart processing on the Disaster Recovery server if only part of the update has been committed.

    Log Reader Agent Properties

    Altering the -MaxCmdsInTran setting can improve elapsed time and latency when handling transactions that contain a large number of commands.  This parameter is new in SQL Server 2000 SP1.

    -MaxCmdsInTran allows the Log Reader Agent to break transactions consisting of a large number of commands into smaller transactions, or chunks, which reduces blocking at the Distribution Agent. The Distribution Agent can start processing early chunks while the Log Reader Agent is working through the later chunks of the same transaction, thus improving parallelism between the two agents. However, using this property also means these chunks are committed at the Subscriber as individual transactions. In theory, using the MaxCmdsInTran property breaks the atomicity rule, which states that a transaction must be committed as all or nothing. This is not necessarily problematic because the transaction has already been committed at the Publisher, but users should be aware of this aspect of using -MaxCmdsInTran.
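    For the record, the parameter just gets appended to the Log Reader Agent's command line (the job step that runs logread.exe).  Something like the sketch below, where the server and database names are placeholders and 10000 is only an example chunk size:

        -Publisher [PRODSERVER] -PublisherDB [MyAppDB] -Distributor [PRODSERVER]
        -DistributorSecurityMode 1 -Continuous -MaxCmdsInTran 10000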

    ---------------------------------------------------------------------

    I'm not sure how to determine whether the bottleneck is caused by the excessive locking or by something else.  We have concerns that the communication link between the two servers does not have adequate capacity (the DR site is 15 miles from the production site).  I've looked at the NIC on the DR server and it never exceeds 15% utilization (about 15 Mbps).  Does anybody know how I can look at the line saturation, or do I need to talk to a network guy?
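    One thing I may try before bothering the network guys is logging the NIC counters over time with typeperf (a sketch; assumes the box is on Windows Server 2003 or later, otherwise System Monitor shows the same counters, and the output file name is arbitrary):

        typeperf "\Network Interface(*)\Bytes Total/sec" "\Network Interface(*)\Current Bandwidth" -si 15 -o nic_usage.csv

    Comparing Bytes Total/sec against the speed of the actual WAN link (rather than the server's NIC) should show whether the link between the sites is saturated.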

     

    Thanks for the Input!

            Rick
