April 4, 2007 at 10:06 am
Attempting to perform a SELECT from an executable that is fired from a trigger, but the table is locked by the trigger. Any suggestions on how to release the lock the trigger is holding?
April 4, 2007 at 12:11 pm
First of all, calling an executable from within a trigger would not be considered a best practice. I would guess that this executable must then connect to the DB itself? That puts it outside the scope of the transaction you were working in within the trigger. There is no way to remove the locks that the transaction has placed until the transaction completes, and the transaction cannot complete because it is waiting for the executable to finish and return control to the trigger, hence your deadlock. You may want to look at using the READPAST locking hint to bypass locked rows. If the executable relies on seeing the data that is being modified within the transaction that fired the trigger, you may be out of luck with this approach. In that event, you may want to reconsider the design of your operation. What exactly are you trying to do with this approach?
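Just to illustrate the READPAST idea, a query run from the executable's own connection might look something like this (table and column names are made up for the example):

-- Skip rows that are still locked by the trigger's open transaction
-- instead of blocking behind them (hypothetical table/columns).
SELECT OrderID, OrderDate
FROM dbo.Orders WITH (READPAST)
WHERE OrderDate >= '20070101';

Keep in mind that READPAST silently skips the locked rows, so the executable will simply not see the row the trigger is modifying.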
April 4, 2007 at 12:44 pm
Thank you for the reply. It appears that a redesign is the best option at this time, so we've begun on this. We are now focusing on having the trigger fire a process that then spins off another process, which will hopefully let the trigger end and allow the newly spawned process to access the DB and complete the fill request. Too bad about the locking issue with a trigger. Perhaps in future releases the SQL folks will consider a threading ability.
April 4, 2007 at 8:06 pm
Um... not sure... but I think you may have missed the point...
What does the "executable" actually do? And for how many rows per call does it do it?
--Jeff Moden
Change is inevitable... Change for the better is not.
April 5, 2007 at 7:35 am
With asynchronous triggers in SQL 2005, you could do this. I would still agree that it is probably not a great idea. You could also set the isolation level of your connection from your executable to read dirty pages, and it could ignore the locks.
But at the end of all of this, launching an application through a trigger is probably not the most appropriate way to solve your problem.
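For what it's worth, a rough sketch of that dirty-read option from the executable's connection (table name is made up):

-- Read uncommitted ("dirty read") isolation: the SELECT will not wait on the
-- trigger's locks, but it may see data that later gets rolled back.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT OrderID, OrderDate
FROM dbo.Orders;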
April 5, 2007 at 8:46 am
The issue is that a separate connection is being spawned. Can the work be done within the trigger's own process? Nothing would be locked against that session.
I had a case once where a process spawned another process and was blocked by the first. The problem was that the locking was on an index; in DEV my indexes were not up to date, so the optimizer didn't use them.
Hence, the issue did not show up in DEV, only when we went to PROD. Obviously, not a good situation and a rather embarrassing post-mortem.
Stay within a single process as much as possible....
April 5, 2007 at 4:24 pm
I still want Brian to tell us what the EXE does and how many rows per call it's going to handle.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 6, 2007 at 8:25 am
Sorry for the delayed response; that work thing gets in the way sometimes. So, the workaround to the issue was to create a second executable (B.exe) that was fired by the first executable (A.exe), which was fired from the trigger. This allowed the trigger to end as it should, since A.exe ended as soon as B.exe was instructed to begin.
The trigger would then release any lock it had on the table that held the row of data needed. This was a date change event: user changes a date, trigger fires, parms are passed to B.exe, data is collected, and life is good.
Thanks to all for the comments and recommendations. I look forward to contributing as well.
April 6, 2007 at 5:22 pm
Somebody's not listening, Brian. WHAT does the EXE do? The reason I ask is because you may not need the darned thing at all.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 9, 2007 at 6:23 pm
The whole reason database servers lock records for the duration of a transaction is that multiple clients may be trying to change the same row, and if they were allowed to do so simultaneously there would be no way to guarantee the consistency of the data. You are trying to violate fundamental principles of database design. There are solutions if you look a little further.
This is the kind of problem Notification Services is designed to handle. If you set up the external program as a Notification Services event subscriber, the trigger could use the event submission stored procedures to send it the modified rows. This has the benefit of letting the database server get on with its life without waiting for the external program.
Since your problem is caused by isolation, you could use a different isolation level. Snapshot isolation lets other connections read the table being modified by the trigger, but they would only see the data as it looked before the DML statement that fired the trigger. Read uncommitted isolation (NOLOCK, "dirty read") would let the external program read the data modified by the trigger, but it can suffer from phantom reads and other problems.
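A rough sketch of the snapshot option, assuming SQL 2005 and made-up database/table names:

-- One-time setup at the database level (hypothetical database name).
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then, on the external program's connection:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

SELECT OrderID, OrderDate
FROM dbo.Orders;  -- sees the last committed version, not the rows the trigger's transaction is changing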
If you wanted to guarantee your job security for life, you could get creative and trick transactional replication into doing your bidding. Create a publication for the table and let the log reader agent watch for any changes, then pervert the distribution agent into running your other program instead of copying the data to a subscriber.
But please stop complaining about those pesky locks that keep your triggers from working.
April 9, 2007 at 7:50 pm
...and, still, we don't know what the EXE is doing...
--Jeff Moden
Change is inevitable... Change for the better is not.
April 9, 2007 at 8:44 pm
First, thanks so much for all the input and comments. Since I'm new to development but not to IT, some of the comments or explanations may be vague, sorry. I understand the concept of relational databases and the need for locks, and didn't mean to whine about the trigger issue. I guess I felt bad for the lead developer having to struggle with this and posted simply for suggestions, which everyone has generously contributed. Okay, here goes my understanding of what the exe is doing.
Its primary function is to build XML that contains field information derived from a table. The XML file is then sent to a 3rd party for processing. So that's about it; the exe is used to build XML.
April 9, 2007 at 9:30 pm
The developer may not know it, but SQL Server can do a pretty good job of generating XML from a table all by itself. I don't know what the "sent to a 3rd party" step involves, but the external program may be unnecessary if this can be handled in a stored proc. This technique may still leave the transaction waiting until the whole data transfer is complete, however.
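A minimal example of what I mean, with made-up table and column names (FOR XML PATH and ROOT need SQL 2005; FOR XML AUTO has been around since SQL 2000):

DECLARE @OrderID int;
SET @OrderID = 42;  -- hypothetical key of the changed row

-- Build the XML for one order entirely inside SQL Server.
SELECT OrderID, OrderDate, CustomerName
FROM dbo.Orders
WHERE OrderID = @OrderID
FOR XML PATH('Order'), ROOT('Orders');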
The Notification Services solution still looks like an attractive option.
It's not clear what latency is required. Doing this in a trigger means the INSERT or UPDATE is on hold until the external program completes, which is a real drag on your server. Is this a disconnected process, or does the trigger need a response from the third party to complete the transaction?
Could you log the changes to a separate table, then have a SQL Agent job wake up every 5 or 10 minutes and process all the rows it finds? If several minutes is too long to wait, and if the user making the changes is the owner of the SQL Agent job and is in the SQLAgentUserRole, the trigger could use sp_start_job to kick off the job rather than waiting for the scheduled interval. If those security restrictions can't be met, an alert could be defined to kick off the job in response to a specific error number, and a RAISERROR WITH LOG statement in the trigger (using a low severity number so it doesn't abort the transaction) would start the job.
Any of these techniques would probably let the trigger finish and the transaction commit before the agent job gets going and tries to query the record.
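Two rough sketches of the "kick the job from the trigger" variants (job name and message number are made up; permissions as noted above apply):

-- Option 1: start the agent job directly from the trigger.
EXEC msdb.dbo.sp_start_job @job_name = N'Process date changes';

-- Option 2: one-time setup of a user-defined message an alert can watch for
-- (50001 is an arbitrary number; severity 10 so the transaction isn't aborted).
EXEC sp_addmessage @msgnum = 50001, @severity = 10,
     @msgtext = N'Date change queued for processing', @replace = 'REPLACE';

-- Then, in the trigger, log the error so the alert fires and starts the job.
-- (WITH LOG requires the appropriate permission, e.g. ALTER TRACE/sysadmin.)
RAISERROR(50001, 10, 1) WITH LOG;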
April 10, 2007 at 10:15 am
This is a disconnected process: upon a date change, the trigger fires and the XML generation process is invoked. The XML is then sent as a notification of the date change, which launches another series of events outside of scope. The workaround we've come up with, having the trigger run an exe that calls the exe that actually does the database work, does seem to be working all right; however, your recommendations and suggestions will certainly be explored. Thanks.
April 10, 2007 at 1:10 pm
I would definitely consider redesigning the process here. Many database developers will try to code things into triggers simply because the word "trigger" makes them think of using triggers to set off some other external process. In this case, your developer ran into problems because he wants to access data that has not yet been committed, and SQL Server's locking mechanisms were preventing that. By doing what you've done here, having the exe call another exe so that control can return to the trigger and the trigger can complete, you are violating the basic laws of data integrity. Your executable will run and assume that the transaction completed successfully and that the data exists. What happens when the trigger exe calls the external process exe and then something happens and the transaction fails and rolls back? At that point, your external exe has already been told to run and send data to your 3rd party, and that data may not be accurate or what they expect to see.
I would recommend following through with what Scott suggested with creating a working table for the SQL Agent to use to determine if the external process should run. Have your trigger insert a row into this working table and the SQL Agent process will come along and trigger the exe based off of that row.
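A rough sketch of that working-table approach, with made-up object names:

-- Working table the trigger writes to.
CREATE TABLE dbo.DateChangeQueue (
    QueueID   int IDENTITY(1,1) PRIMARY KEY,
    OrderID   int      NOT NULL,
    NewDate   datetime NOT NULL,
    QueuedOn  datetime NOT NULL DEFAULT GETDATE(),
    Processed bit      NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER trg_Orders_DateChange ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Only queue rows whose date actually changed.
    INSERT INTO dbo.DateChangeQueue (OrderID, NewDate)
    SELECT i.OrderID, i.OrderDate
    FROM inserted i
    JOIN deleted d ON d.OrderID = i.OrderID
    WHERE i.OrderDate <> d.OrderDate;
END;
GO

-- A SQL Agent job step (or the exe it runs) then picks up rows WHERE Processed = 0,
-- builds and sends the XML after the original transaction has committed, and marks them Processed = 1.

The nice thing here is that the trigger's only extra work is a single insert, so the user's transaction commits quickly and the external work happens entirely on committed data.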
Just my .02