April 24, 2007 at 6:52 am
Hi guys,
In my company the database structure is like this: database A replicates data into database B using transactional replication. In database B there are triggers on the tables that have subscriptions on them; the triggers copy the rows replicated from database A into another table, table A, and the client application then retrieves its data from table A.
Today we hit a problem: the client populated a very large amount of data into database A, which made the replication and the triggers run for a very long time and slowed the database down, and it also blocked the client application from retrieving data from table A. We cannot put NOLOCK in the client application because it is a critical application and errors are not acceptable. I wonder if there is a way to optimize the database so that it does not slow down when large bulk loads go into database A. Can someone give me a suggestion?
Thanks.
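For illustration, the subscriber-side trigger looks roughly like this (a sketch only; the table, column, and trigger names here are made up, not our real schema):
[code]
-- Hypothetical AFTER INSERT trigger on a subscribed table in database B.
-- It copies every replicated row into table A, which the client reads.
CREATE TRIGGER trg_SourceTable_Insert
ON dbo.SourceTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.TableA (Id, Payload, LoadedAt)
    SELECT i.Id, i.Payload, GETDATE()
    FROM inserted AS i;
END;
[/code]
With a large bulk load, this trigger fires for every replicated batch, which is where the long-running transactions and the blocking on table A come from.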
April 24, 2007 at 7:39 am
Jerry,
You could use a clustered index with a fill factor that leaves some free space on the data pages (e.g. 50%) and rebuild it in off-peak hours. Maybe you can also make the batch size of the updates in database A smaller? For example:
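A minimal sketch of the rebuild, assuming the clustered index on table A is named PK_TableA (your index name will differ):
[code]
-- Rebuild the clustered index leaving 50% free space on each data page,
-- so inserts cause fewer page splits. Run this during off-peak hours;
-- the rebuild takes locks on the table while it runs.
ALTER INDEX PK_TableA ON dbo.TableA
REBUILD WITH (FILLFACTOR = 50);
[/code]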
Jan
April 30, 2007 at 10:51 am
Don't use triggers!! Triggers will always degrade performance, especially in an OLTP database. Find another way.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
April 30, 2007 at 1:04 pm
Hi
Thank you very much. Yes, I have decided to give up the triggers in favour of a job that runs a stored procedure. Thanks.
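Roughly what I have in mind (a sketch with hypothetical names; the real procedure will match our schema, and a SQL Server Agent job will run it on a short schedule):
[code]
-- Hypothetical procedure that a SQL Server Agent job runs every few minutes.
-- It moves newly replicated rows into table A in small batches,
-- so locks stay short and the client application is not blocked for long.
CREATE PROCEDURE dbo.usp_LoadTableA
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.TableA (Id, Payload, LoadedAt)
    SELECT TOP (1000) s.Id, s.Payload, GETDATE()
    FROM dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TableA AS a WHERE a.Id = s.Id)
    ORDER BY s.Id;
END;
[/code]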