April 12, 2007 at 11:03 am
Michael:
Your comments/suggestions are informative, thanks for making that very clear.
April 12, 2007 at 11:32 am
Sergiy, Michael:
All of your answers have been really helpful.
Can you see any advantages of the creation of the global temporary table as opposed to:
EXEC @LastErrorCode = sp_getapplock @Resource = 'HE_EXPLODE',
@LockMode = 'Exclusive',
@LockOwner = 'Session',
@LockTimeout = 3000
Tom
April 12, 2007 at 12:03 pm
If you use a global temp table, you could also log information into the table as the procedure runs (what step it is at, who ran it, etc.), and you would be able to see it easily from other processes.
It has the downside that you have effectively created a temp table name that you now cannot reuse in another procedure.
Getting a lock will take less system resources, but I don't think either solution will present you with a performance issue. You can also use the lock timeout to have the procedure wait a pre-determined amount of time so if a user happens to run it at the very end of another user running it, it will wait and then run rather than giving them an error.
You just may want to play around with the options and pick the one you like better, I think you could argue forever about which one really is the best solution (if either of them actually are).
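For comparison, the global temp table approach being weighed here might be sketched roughly like this (the procedure name, table columns, and error text are all illustrative, not from the thread):

```sql
CREATE PROCEDURE proc_MySp
AS
BEGIN
    /* If the sentinel table exists, another session is already running this SP. */
    IF OBJECT_ID('tempdb..##proc_MySp') IS NOT NULL
    BEGIN
        RAISERROR('proc_MySp is already running in another session.', 16, 1)
        RETURN(1)
    END

    /* Create the sentinel; logging who/when enables the progress visibility
       mentioned above. The table vanishes when the creating session ends. */
    CREATE TABLE ##proc_MySp (SPID INT, StartedAt DATETIME, CurrentStep VARCHAR(100))
    INSERT INTO ##proc_MySp VALUES (@@SPID, GETDATE(), 'starting')

    /* ...serial processing here, updating CurrentStep as it goes... */

    DROP TABLE ##proc_MySp
END
```

Note that the CREATE TABLE itself fails if the table already exists, so it is the real guard; the preceding existence check mainly provides a friendlier error.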
Good luck
April 12, 2007 at 1:04 pm
"It has the down side that you have effectively created a temp table name that you now cannot reuse in another procedure."
Michael, I think what Sergiy was implying here is that you would name your global temp table after the SP that creates it. He used a generic name, but I think what he meant was that a stored procedure named proc_MySp would create a table ##proc_MySp, the stored procedure proc_YourSp would create a table ##proc_YourSp, and so on.
I do agree that the global temp table option would allow for more verbose logging of the SP activity. I also agree that neither solution would have any performance impact.
April 12, 2007 at 6:30 pm
I don't think a lock is an option at all.
You cannot lock the code; you lock the resources used in the code.
If the nature of the queries in the SP forces row-level locks, it could cause a deadlock: the 1st process locks the upper end of a clustered index while the 2nd has locked the lower end, and both are waiting for the opposite side to release the locked resources.
And even if you revise the whole script and apply table locks in every query (not really a good approach by itself), then the 1st process could lock one table in an INNER JOIN query while the 2nd process locks another table in the same query.
And because you are trying to apply exclusive locks, there is no way out.
Locks are not that simple to play with. Especially exclusive ones.
And John, yes, you are absolutely right about my table naming.
The point was to have a table with the name ##[Procedure_Name]+'_Table'
I don't think any other SP would create or use a table with such a name. No matter who has written it.
Except those dependent on the SP "Procedure_Name". But that's all right, I would say.
_____________
Code for TallyGenerator
April 12, 2007 at 8:38 pm
Well, for the moment I am using the SP_GETAPPLOCK approach so that the stored procedure first has to obtain an exclusive session resource before proceeding with the processing. I am running some volume testing now to see if it works as hoped.
I am using the SP_GETAPPLOCK approach as opposed to Sergiy's suggestion to use a global temporary table mostly because that idea was presented first.
tj
April 13, 2007 at 9:08 am
Please keep us posted with an update on how your volume testing goes with this approach. It will help those who read this thread going forward.
Thanks.
April 16, 2007 at 1:55 pm
Epilog:
I ended up using the sp_getapplock approach to force serial execution of certain parts of my stored procedure. It seems to help quite a lot, although I also modified a good deal of my SQL to prevent deadlock conditions as well. I now do some of the complex processing in temp tables and then copy the results to the permanent table with a simple insert query.
The one funny thing I noticed is that you can acquire the exclusive resource multiple times for a given session. If you do this, you have to release it the same number of times.
You can check for resource locks with a query like this - I think it can be improved, but it is a start.
USE Master
SELECT DISTINCT
       a.[Name]     AS [Database],
       b.rsc_text   AS [Lock Name],
       c.[ID]       AS [syslocks ID],
       c.SPID       AS [syslocks SPID],
       b.req_refcnt
FROM SYSLOCKS c
INNER JOIN SYSDATABASES a ON c.dbid = a.dbid
INNER JOIN SYSLOCKINFO b ON b.rsc_dbid = a.dbid AND b.req_spid = c.spid
WHERE b.rsc_type = 10 AND c.Type = 5 AND c.id <> 0
The locking code looks like this:
DECLARE @LastErrorCode INT
SET @LastErrorCode = -1 /* loop until lock is acquired. */
WHILE @LastErrorCode<0
BEGIN
EXEC @LastErrorCode = sp_getapplock @Resource = 'HE_EXPLODE',
@LockMode = 'Exclusive',
@LockOwner = 'Session',
@LockTimeout = 3000 /* retry at 3 second intervals */
END
....do serial processing here
EXEC sp_releaseapplock @Resource = 'HE_EXPLODE',
@LockOwner = 'Session'
April 16, 2007 at 2:14 pm
Not that I want this thread to go on anymore, but here is my last post. Yes - you are correct, the same spid can lock the same resource multiple times. This is by design and is the same behavior that a regular lock uses. It is because of this behavior that you are able to begin a transaction, modify records, and see the dirty pages before committing them from within the same session (certainly important when you think about it).
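That reentrant behavior can be illustrated with the resource name from this thread (a sketch; the zero timeout just means fail rather than wait):

```sql
DECLARE @rc INT

/* The same session can acquire the same applock twice... */
EXEC @rc = sp_getapplock @Resource = 'HE_EXPLODE', @LockMode = 'Exclusive',
                         @LockOwner = 'Session', @LockTimeout = 0
EXEC @rc = sp_getapplock @Resource = 'HE_EXPLODE', @LockMode = 'Exclusive',
                         @LockOwner = 'Session', @LockTimeout = 0

/* ...and must release it the same number of times before
   other sessions can acquire it. */
EXEC sp_releaseapplock @Resource = 'HE_EXPLODE', @LockOwner = 'Session'
EXEC sp_releaseapplock @Resource = 'HE_EXPLODE', @LockOwner = 'Session'
```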
One last comment on your T-SQL: you are looping and retrying getting the lock. This is actually more resource intensive than letting SQL do the work by simply setting the lock timeout higher (or to wait forever). It feels like hung code, but it is supposed to work that way. You also run the risk of the locks being granted out of order. If you ask for a resource lock and let SQL wait for it, you will always get the resource lock in the order sessions requested it. With a loop, another user could sneak in at the right moment and steal your resource, leaving you waiting for their process to finish. For what you are doing, it probably will not make much difference, but it is worth noting.
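This suggestion amounts to replacing the retry loop with a single call and a long timeout; roughly (a @LockTimeout of -1 means wait indefinitely, and the error message is illustrative):

```sql
DECLARE @LastErrorCode INT

/* Wait indefinitely (-1) instead of polling; SQL Server queues
   waiters in the order their sessions requested the lock. */
EXEC @LastErrorCode = sp_getapplock @Resource = 'HE_EXPLODE',
    @LockMode = 'Exclusive',
    @LockOwner = 'Session',
    @LockTimeout = -1

IF @LastErrorCode < 0
BEGIN
    RAISERROR('Could not acquire lock HE_EXPLODE.', 16, 1)
    RETURN(1)
END

/* ...do serial processing here... */

EXEC sp_releaseapplock @Resource = 'HE_EXPLODE',
    @LockOwner = 'Session'
```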
I'm glad you have something working and I look forward to your next problem (probably much more than you do).
April 16, 2007 at 2:26 pm
Michael,
Thanks for the observation on the merits of raising the timeout delay. I'll make that change forthwith, setting it to something large. In practically every case, the calling application will be done in around a second. There are a couple of extreme cases where the lock might be held for a minute or two, but that will only happen a few times a day.
April 16, 2007 at 2:29 pm
For what it's worth also, the SYSLOCKS table should no longer be used as it has been replaced with SYSLOCKINFO. In your query, you are joining them both together. You should be able to get the same results back without having to use the SYSLOCKS table.
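Dropping SYSLOCKS, a roughly equivalent query against SYSLOCKINFO alone might look like this (rsc_type = 10 is the application-lock resource type; the column aliases are illustrative):

```sql
USE Master
SELECT d.[Name]     AS [Database],
       l.rsc_text   AS [Lock Name],
       l.req_spid   AS [SPID],
       l.req_refcnt AS [Ref Count]
FROM SYSLOCKINFO l
INNER JOIN SYSDATABASES d ON l.rsc_dbid = d.dbid
WHERE l.rsc_type = 10
```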
February 27, 2010 at 10:14 pm
Thanks, SSCarpal Tunnel.
I now know more about ## tables.
February 24, 2014 at 8:16 am
Excellent information. One minor point: according to Microsoft's documentation on sp_getapplock, the error trap code should actually be as follows:
IF @Result < 0
BEGIN
ROLLBACK TRAN
RAISERROR('Procedure Already Running - Concurrent execution is not supported.',16,9)
RETURN(1)
END
sp_getapplock will return 0 if it locks immediately and 1 if it locks after waiting for other locks to clear. All failures will be negative values.
http://technet.microsoft.com/en-us/library/aa933410(v=sql.80).aspx
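Putting those return codes together, the result can be checked explicitly; a sketch using the resource name from this thread (messages are illustrative):

```sql
DECLARE @Result INT
EXEC @Result = sp_getapplock @Resource = 'HE_EXPLODE',
    @LockMode = 'Exclusive',
    @LockOwner = 'Session',
    @LockTimeout = 3000

IF @Result = 0
    PRINT 'Lock granted immediately.'
ELSE IF @Result = 1
    PRINT 'Lock granted after waiting for other locks to clear.'
ELSE  /* negative values: timeout, cancellation, deadlock victim, or bad parameter */
BEGIN
    RAISERROR('Procedure Already Running - Concurrent execution is not supported.', 16, 9)
    RETURN(1)
END
```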