May 19, 2023 at 11:59 pm
Apologies for the title; I'll explain more clearly.
I have two tables and each has a trigger applied.
The trigger on the first table is a simple audit-style (AFTER INSERT/UPDATE) trigger that takes a copy of the modified record and inserts it into a Log Table.
On the Log Table, I'm using another (AFTER INSERT/UPDATE) trigger to update another table.
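Roughly, the structure looks like this (a heavily simplified sketch; the real tables are wider and all names here are made up):

-- Simplified, hypothetical version of the setup described above.
CREATE TABLE dbo.PrimaryTable (Id INT PRIMARY KEY, Payload NVARCHAR(100));
CREATE TABLE dbo.LogTable (LogId INT IDENTITY PRIMARY KEY, Id INT NOT NULL,
                           Payload NVARCHAR(100),
                           LoggedAt DATETIME2 NOT NULL DEFAULT SYSDATETIME());
CREATE TABLE dbo.CacheTable (Id INT PRIMARY KEY, Payload NVARCHAR(100));
GO
-- Trigger 1: audit-style copy of the modified rows into the log table.
CREATE TRIGGER trg_Primary_Audit ON dbo.PrimaryTable
AFTER INSERT, UPDATE
AS
    INSERT INTO dbo.LogTable (Id, Payload)
    SELECT Id, Payload FROM inserted;
GO
-- Trigger 2: fires on the log table and updates the third table.
-- This is where the bug is; its failure rolls back the whole chain,
-- including the original write to dbo.PrimaryTable.
CREATE TRIGGER trg_Log_Process ON dbo.LogTable
AFTER INSERT, UPDATE
AS
    UPDATE c
    SET    c.Payload = i.Payload
    FROM   dbo.CacheTable c
    JOIN   inserted i ON i.Id = c.Id;
GO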
There is a bug in the second trigger, which I'll resolve.
However, what I'm seeing is that the failure in the second trigger causes a rollback all the way back to the primary table.
I was hoping to have some level of isolation between the primary and secondary tables, mostly for transaction speed. I wanted to use the second table as a kind of cache and just process it as needed, keeping the primary table largely free.
If I disable the second trigger the primary trigger runs fine.
I was aware that trigger failures cause transactions to roll back, but I didn't expect a chain of triggers to behave like this.
Other than correcting the faulty code, is there any method of isolation, or is it a true domino effect?
thanks
Andrew
PS: I know the usual response is "Is there any other way than using triggers?"; in this case the answer is no.
May 20, 2023 at 4:05 pm
It is difficult to tell exactly what you are doing, but a trigger on a log table sounds like a bad idea. All the triggers fire within the same transaction, so either everything or nothing will commit. (It is possible to roll back to a savepoint, but I suspect you do not really want to go there.)
If you want to increase the speed of the transaction, you could have your first trigger add a message to a Service Broker queue. Service Broker can then process the queue asynchronously, allowing your transaction to commit quickly. If you think Service Broker is too much trouble, you could instead add a datetime column to the log table and have a scheduled task process new rows every minute.
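As a very rough sketch of the Service Broker route (all object names here are hypothetical, and this assumes Service Broker is enabled on the database):

-- One-time setup; assumes Service Broker is enabled
-- (ALTER DATABASE CURRENT SET ENABLE_BROKER).
CREATE MESSAGE TYPE LogMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT LogContract (LogMessage SENT BY INITIATOR);
CREATE QUEUE dbo.LogQueue;
CREATE SERVICE LogService ON QUEUE dbo.LogQueue (LogContract);
GO
-- The first trigger queues the changed keys instead of relying on a
-- second trigger. The SEND still commits with the transaction, but the
-- processing now happens afterwards, outside it.
CREATE TRIGGER trg_Primary_Audit ON dbo.PrimaryTable
AFTER INSERT, UPDATE
AS
BEGIN
    INSERT INTO dbo.LogTable (Id, Payload)
    SELECT Id, Payload FROM inserted;

    DECLARE @handle UNIQUEIDENTIFIER,
            @msg XML = (SELECT Id FROM inserted
                        FOR XML PATH('row'), ROOT('keys'), TYPE);

    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE LogService
        TO SERVICE 'LogService'
        ON CONTRACT LogContract
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @handle
        MESSAGE TYPE LogMessage (@msg);
END;
GO
-- Reader (e.g. a queue activation procedure): runs in its own
-- transaction, so a failure here cannot roll back the original write.
DECLARE @h UNIQUEIDENTIFIER, @body XML;
RECEIVE TOP (1) @h = conversation_handle,
                @body = CAST(message_body AS XML)
FROM dbo.LogQueue;
-- ... update the cache table from @body, then END CONVERSATION @h ...

Note that the SEND still commits or rolls back with your transaction, which is what you want: if the original insert rolls back, the queued message disappears with it. Only the processing moves outside the transaction.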
May 28, 2023 at 12:44 pm
Hello Ken.
Thanks for the input.
I took your advice about Service Broker and have run into another issue.
I've opened another post here if you're interested in helping out.
May 28, 2023 at 4:22 pm
Just to be sure: a Log, History, Audit, or Temporal table should never have any triggers or constraints on it that evaluate data. It certainly should not be used as an interim part of a process.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 2, 2023 at 11:03 am
A trigger is always handled in the scope of one transaction.
Cascading triggers (as you have implemented) are handled in the scope of that same transaction.
So to get around your "second trigger" issue, you'll have to make it bug-free, i.e. write it in such a way that the current bug does not occur.
Guidelines:
- triggers should have a small footprint
- only have a trigger process the strictly needed (key-)data
- process the remainder of the data asynchronously if you can.
Keep in mind that temporal tables are available, but play around with that concept before you implement it.
We still prefer just recording the primary table's key column(s) plus metadata (the datetime of trigger execution) and processing everything else with a batch job, based on the stored keys.
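A rough sketch of that pattern (all names here are hypothetical):

-- Skinny queue table: only the primary table's key plus metadata.
CREATE TABLE dbo.PrimaryTableQueue
(
    QueueId     INT IDENTITY PRIMARY KEY,
    Id          INT NOT NULL,                       -- key of dbo.PrimaryTable
    QueuedAt    DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    ProcessedAt DATETIME2 NULL
);
GO
-- Small-footprint trigger: records only the keys, nothing else.
CREATE TRIGGER trg_Primary_Queue ON dbo.PrimaryTable
AFTER INSERT, UPDATE
AS
    INSERT INTO dbo.PrimaryTableQueue (Id)
    SELECT Id FROM inserted;
GO
-- Batch job, scheduled every minute or so: does the heavy lifting
-- outside the original transaction, based on the stored keys.
CREATE PROCEDURE dbo.ProcessPrimaryTableQueue
AS
BEGIN
    DECLARE @batch TABLE (Id INT);

    -- Claim the unprocessed rows and remember which keys they carry.
    UPDATE dbo.PrimaryTableQueue
    SET    ProcessedAt = SYSDATETIME()
    OUTPUT inserted.Id INTO @batch (Id)
    WHERE  ProcessedAt IS NULL;

    -- Do the expensive work for just those keys.
    UPDATE c
    SET    c.Payload = p.Payload
    FROM   @batch b
    JOIN   dbo.PrimaryTable p ON p.Id = b.Id
    JOIN   dbo.CacheTable   c ON c.Id = b.Id;
END;
GO

The trigger stays tiny and can fail on almost nothing but the queue insert, and a bug in the batch job can no longer touch the original transaction.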
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I? Sometimes this is me but most of the time this is me
June 5, 2023 at 6:00 am
Thanks Johan.
I have since traced the issue to the user context executing the change in another database; that's where the trigger is breaking.
The logic is fine. The user executing the logic is the problem.
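In case anyone hits the same thing: the trigger runs under the calling user's security context, so one option I'm looking at (only a sketch, with made-up names; I haven't verified it end to end) is to pin the execution context on the trigger itself:

-- Hypothetical sketch: run the trigger body as the object owner rather
-- than the caller, so the cross-database update no longer depends on the
-- calling user's rights in the other database. Note that EXECUTE AS only
-- reaches across databases if this database is marked TRUSTWORTHY (or the
-- trigger is certificate-signed), so verify before relying on it.
CREATE TRIGGER trg_Log_Process ON dbo.LogTable
WITH EXECUTE AS OWNER
AFTER INSERT, UPDATE
AS
    UPDATE o
    SET    o.Payload = i.Payload
    FROM   OtherDb.dbo.OtherTable o
    JOIN   inserted i ON i.Id = o.Id;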
Appreciate the advice.
regards
Andrew