Regarding blocking in SQL Server 2000

  • Hi Guys,

    In my company, the database structure is like this:

    Database A replicates data into database B using transactional replication. In database B, there are triggers on the tables that subscribe to the publication. The triggers put the data replicated from database A into another table, table A, and the client application then retrieves its data from table A.

    Today we hit a problem: the client loaded a very large amount of data into database A, which made the replication and the triggers run for a long time and slowed the database down, and it also blocked the client application from reading table A. We cannot put NOLOCK in the client application because it is a critical application and errors are not acceptable. Is there a way to optimize the database so that it doesn't slow down when a large bulk load goes into database A? Can someone give me a suggestion?

    Thanks.
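
    For concreteness, the trigger setup described sounds something like this on the subscriber; all object and column names here are invented for illustration:

        -- AFTER INSERT trigger on a subscribed table in database B that
        -- copies newly replicated rows into the denormalized table A.
        -- Written set-based over the inserted pseudo-table so a single
        -- statement handles any number of rows in one pass.
        CREATE TRIGGER trg_Orders_CopyToTableA ON dbo.Orders
        FOR INSERT
        AS
        BEGIN
            INSERT INTO dbo.TableA (OrderID, CustomerID, Amount)
            SELECT i.OrderID, i.CustomerID, i.Amount
            FROM inserted i
        END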

  • Bulk load the data into a staging table and then move it over a few rows at a time (see the sketch below).

    The other thing is to examine the triggers and ensure you're not doing too much work in them, and that the work really needs to be in the triggers rather than something you process after the inserts are complete.

    Blocking is not a bad thing. It keeps integrity in your database. It mostly sounds like you haven't designed things to handle higher volumes of data. Faster machines help as well.
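
    A minimal sketch of that staging/looping pattern, using SET ROWCOUNT to cap the batch size (the SQL Server 2000 idiom; later versions would use TOP). All table and column names are invented:

        -- Move staged rows into the live table in small batches so each
        -- transaction stays short and readers are blocked only briefly.
        SET ROWCOUNT 1000                      -- batch size; tune for your load

        WHILE EXISTS (SELECT 1 FROM dbo.Staging_TableA)
        BEGIN
            BEGIN TRAN

            -- copy one batch of staged rows not yet in the target
            INSERT INTO dbo.TableA (OrderID, CustomerID, Amount)
            SELECT s.OrderID, s.CustomerID, s.Amount
            FROM dbo.Staging_TableA s
            WHERE NOT EXISTS (SELECT 1 FROM dbo.TableA t
                              WHERE t.OrderID = s.OrderID)

            -- clear staged rows that are now in the target
            DELETE s
            FROM dbo.Staging_TableA s
            JOIN dbo.TableA t ON t.OrderID = s.OrderID

            COMMIT TRAN
        END

        SET ROWCOUNT 0                         -- back to unlimited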

  • Hi guys,

    Somebody told me that Microsoft doesn't recommend using replication and triggers together. Is that true? How can I work around it? Thanks.

  • > Somebody told me that Microsoft doesn't recommend using replication and triggers together. Is that true? How can I work around it? Thanks.

    I haven't heard such, although that certainly doesn't mean it isn't true.

    Can you replicate the data directly to the appropriate table on the second server, instead of using a table with a trigger on it?  I am not sure that would help since it is transactional replication and the large batch insert will be (I would guess) one large transaction that couldn't be split. 

    Can you modify the large batch inserts to load smaller sets of data at one time? If you can modify the load, it would avoid the overhead of the previously suggested staging table/looping mechanism.
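
    If the load is one big INSERT ... SELECT, a key-range loop along these lines would turn it into many small auto-committed transactions, each of which replicates on its own. The table and column names are invented for illustration:

        -- Break one large load into key-range chunks.  With no explicit
        -- transaction around the loop, each INSERT commits on its own,
        -- so replication ships many small transactions rather than one
        -- huge one.
        DECLARE @lo int, @hi int, @step int
        SELECT @lo = MIN(SourceID), @hi = MAX(SourceID) FROM dbo.SourceRows
        SET @step = 5000                       -- rows per chunk, roughly

        WHILE @lo <= @hi
        BEGIN
            INSERT INTO dbo.PublishedTable (SourceID, Col1, Col2)
            SELECT SourceID, Col1, Col2
            FROM dbo.SourceRows
            WHERE SourceID >= @lo AND SourceID < @lo + @step

            SET @lo = @lo + @step
        END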

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • TheSQLGuru,

    Thank you very much. The reason I use a table with a trigger on it is performance: I have a lot of tables being replicated and a lot of joins, so I created another table that is populated by the triggers, and the client application retrieves data directly from that table. So I see no way around using triggers to populate the data. One other question: can I set the replication parameters to force a smaller batch size, so that the transactions that transfer data to the subscriber stay small no matter how much data the client uploads? Thanks.
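
    For what it's worth, the usual knobs here are replication agent parameters rather than publication settings. If memory serves, the Log Reader Agent's -MaxCmdsInTran switch (added in a late SQL Server 2000 service pack, so check your build) splits one large publisher transaction into smaller batches of commands, and the Distribution Agent's -CommitBatchSize and -CommitBatchThreshold control how many transactions it commits per batch at the subscriber. A sketch of a Log Reader Agent command line with the switch added; the server and database names are invented:

        REM Log Reader Agent command line, typically found in the
        REM corresponding SQL Server Agent job step.  -MaxCmdsInTran 5000
        REM caps each replicated batch at 5000 commands.
        logread -Publisher SERVERA -PublisherDB DatabaseA -Distributor DISTSERVER -DistributorSecurityMode 1 -MaxCmdsInTran 5000 -Continuous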

