June 24, 2005 at 10:03 pm
Hi,
I'm working on a solution where I have to automatically insert records into a table (tablexyz) on Server B whenever a record is inserted into the same table (tablexyz) on Server A. Both tables have the same schema.
I'm able to do it with a trigger on tablexyz on Server A using the following script:
SET XACT_ABORT ON
BEGIN DISTRIBUTED TRANSACTION
-- copy only the new rows via the trigger's "inserted" pseudo-table,
-- not the whole table, or every insert re-copies everything
insert into [ServerB].dbname.dbo.tablexyz ... select * from inserted
This works, but the trigger adds a lot of overhead: the insert takes around 1.5-2 minutes on average, whereas if I drop the trigger it takes just 20 seconds.
Can you please suggest another option that could replace the trigger? I was thinking about writing a script in a scheduled job, but that would poll the table again and again at intervals, which may itself cause overhead. Please suggest.
Thanks.
Dilip
June 27, 2005 at 7:36 am
Update: I have created a one-column table (e.g. sysparam) in which I store the row count of tablexyz on Server A. A scheduled job runs every 2 minutes, compares the current count of tablexyz against the value in sysparam, and executes my distributed transaction only if the count is greater. So far it's working fine without much overhead, and it's close to just-in-time: the distributed transaction goes over the network only when records have actually been inserted. I don't want to go for replication as a solution, because setting up replication for the sake of just one table doesn't seem wise. If anyone has other innovative ideas or suggestions, please post them. Thanks for your time.
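A minimal T-SQL sketch of the job step described above. The table dbo.sysparam with a rowcnt column, and an identity column id on tablexyz, are assumptions for illustration, not the poster's actual schema:

```sql
-- Scheduled job step on Server A (runs every 2 minutes).
-- Assumption: dbo.sysparam has one row, column rowcnt = last transferred count.
-- Assumption: tablexyz has a gapless 1-based identity column id, so the
-- stored count also identifies the last row already transferred.
DECLARE @last int, @current int

SELECT @last = rowcnt FROM dbo.sysparam
SELECT @current = COUNT(*) FROM dbo.tablexyz

IF @current > @last
BEGIN
    SET XACT_ABORT ON
    BEGIN DISTRIBUTED TRANSACTION
        -- copy only rows added since the last run
        INSERT INTO [ServerB].dbname.dbo.tablexyz
        SELECT * FROM dbo.tablexyz WHERE id > @last

        UPDATE dbo.sysparam SET rowcnt = @current
    COMMIT TRANSACTION
END
```

Note the gapless-identity assumption is fragile if rows are ever deleted; storing the last transferred MAX(id) in sysparam instead of a count would be more robust.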
Best regards!
June 27, 2005 at 8:46 am
If it has to be part of the process, why not simply insert into both from the initial proc?
If you need to make it an async process, something along the lines of what you've done is a good approach. You've rejected replication, but that's an option. Log shipping? Except that it would give you everything, not just one table.
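If the original insert already goes through a stored procedure, the "insert into both from the initial proc" idea might look something like this sketch (the procedure name and columns are made up for illustration):

```sql
CREATE PROCEDURE dbo.tablexyz_Insert
    @col1 int,
    @col2 varchar(50)   -- placeholder parameters; use the real table's columns
AS
BEGIN
    SET XACT_ABORT ON
    BEGIN DISTRIBUTED TRANSACTION
        -- local insert on Server A
        INSERT INTO dbo.tablexyz (col1, col2) VALUES (@col1, @col2)
        -- same row to Server B over the linked server, in the same transaction
        INSERT INTO [ServerB].dbname.dbo.tablexyz (col1, col2) VALUES (@col1, @col2)
    COMMIT TRANSACTION
END
```

Both inserts commit or roll back together via MSDTC, but the caller pays the distributed-transaction cost on every insert, which is why an async approach may suit this case better.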
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
June 27, 2005 at 11:07 am
Distributed transactions in a trigger are a "no-no". If the trigger fails, it has to roll back both a distributed and a local transaction, and on top of that KILL the batch statement ... yuck!
As suggested above, it is better to use a serviced component on the client side: open two connections and insert on both sides (without triggers) using a transaction-marked component. That's for the case when no async process can be used; otherwise use a job to transfer the data.
* Noel
June 27, 2005 at 6:07 pm
Thanks for the comments, but it's not part of a process that has to run simultaneously on both servers. Actually, this is for a mobile application: data from a Pocket PC comes to the server hosting SQL CE, and as soon as the data is replicated to this server, I have to transfer it into another database on another server used by the ERP application.
I'm using the job solution because it executes the distributed transaction only on demand, i.e. whenever the count increases. Thanks again.