December 3, 2006 at 2:45 pm
Hopefully I can explain this clearly enough.
I have a simple table:
recid integer
number varchar
status varchar
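Roughly like this (a sketch; the table name and varchar lengths are just placeholders):

CREATE TABLE YourTable (
    recid   int,
    number  varchar(50),
    status  varchar(10)
)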
I have (up to 8) processes that are going to query this table simultaneously.
I want each process to get its own row. Once it gets that row it will process the number,
update the record with a status and go back for another number. Unfortunately I have
thought myself so far down the wrong path I can't seem to get back on the right path.
i.e.
first run
process 1 -> recid 1
process 2 -> recid 2
.
.
process N -> recid N
Process 3 finishes first, so it goes to the database and it should retrieve record N+1
December 4, 2006 at 3:25 am
Let's say status=0 means it's available and status=1 means it's no longer available. Then the query 'select min(recid) from YourTable where status=0' should do the trick.
You would then do an update of that record like 'update ... set status=1 where recid=N and status=0' where N is the value you got from the select above. You should then examine the number of records updated to determine if you actually got the record or if someone snuck in before.
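A minimal sketch of that pattern (assuming the table is called YourTable; swap in your real table name):

DECLARE @RecID int

SELECT @RecID = MIN(recid)
FROM YourTable
WHERE status = 0

UPDATE YourTable
SET status = 1
WHERE recid = @RecID AND status = 0

-- If @@ROWCOUNT is 0 here, another process snuck in first: go back and select again
IF @@ROWCOUNT = 0
    PRINT 'Record already taken, retry'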
December 4, 2006 at 3:48 am
You will need to use a transaction and have a UNIQUE CONSTRAINT/INDEX on Number. Something like:
SET XACT_ABORT ON -- or add your own error handling
DECLARE @NextNumber int
BEGIN TRANSACTION
-- This will get and lock the MIN unlocked number with status of zero.
-- If you want to enforce strict sequential processing of number then get rid of the READPAST.
-- (Other processes will then wait for locks to be released.)
-- The UNIQUE CONSTRAINT should ensure a ROWLOCK.
SELECT @NextNumber = MIN(number)
FROM YourTable WITH (READPAST, UPDLOCK)
WHERE status = 0
-- Do your processing
UPDATE YourTable
SET status = 1
WHERE status = 0 AND number = @NextNumber
COMMIT -- end transaction
Edit: Probably best to make the UNIQUE CONSTRAINT on (status, number) so that it covers the query.
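Something like this, for example (a sketch; the constraint/index names are just placeholders):

ALTER TABLE YourTable
ADD CONSTRAINT UQ_YourTable_Status_Number UNIQUE (status, number)

-- or, as an index
CREATE UNIQUE INDEX UX_YourTable_Status_Number
ON YourTable (status, number)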
December 4, 2006 at 6:45 am
What is forbidding you from using a set-based solution and updating all rows... or all 8 batches of rows in a single transaction?
December 4, 2006 at 11:36 am
If you really must have separate queries updating a single record in the table simultaneously, then for the status field, instead of giving it a status of, say, 1 for in-process, set status = @@SPID. Then at any point you can tell which process is working on which record without maintaining a hard lock on it.
Status values: 0 = unprocessed, @@SPID = in process by that session.
This can also be used to find processes that died midway and reset them to unprocessed. For this reason, I would also include a datetime column to indicate when processing started.
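A rough sketch of that idea (the StartedAt column is something you would add; the name is just a placeholder):

-- Claim the next unprocessed record for this session
UPDATE YourTable
SET status = @@SPID, StartedAt = GETDATE()
WHERE recid = (SELECT MIN(recid) FROM YourTable WHERE status = 0)
  AND status = 0

-- Reset records whose processing apparently died (e.g. claimed over an hour ago)
UPDATE YourTable
SET status = 0, StartedAt = NULL
WHERE status <> 0
  AND StartedAt < DATEADD(hour, -1, GETDATE())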
December 4, 2006 at 5:43 pm
I figure you're in a multi-user environment, hence the need for record locking and one record processed per client (8 clients?).
Using @@SPID might not be a solution, as the processing may occur on the client side, thereby generating a new SPID when it reconnects.
I would go for the transaction option as pointed out by Ken, but I would move the processing out of the transaction if that processing happens on the client side.
DECLARE @RecID int
BEGIN TRANSACTION
SELECT @RecID = min(Recid)
FROM YourTable WHERE status = 0
UPDATE YourTable
SET status = 1
WHERE status = 0 AND Recid = @RecID
COMMIT -- end transaction
-- Do your processing HERE (use @RecID to process number in your application)
-- On Error, don't forget to revert the status back to 0
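If the UPDATE touches no rows (check @@ROWCOUNT), another channel grabbed that RecID first, so loop back and select again. And a sketch of the revert on error (same placeholder names as above):

-- On failure in the application, release the record so another process can pick it up
UPDATE YourTable
SET status = 0
WHERE Recid = @RecID AND status = 1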
December 4, 2006 at 8:19 pm
My apologies..
In my haste to get the question out I forgot an important piece, thanks to Ninja's_RGR'us for reminding me.
The query is part of an IVR system; there are 8 channels in the IVR that will be hitting the DB.
I am able to pass in 1 variable, a process # (1-8).
I will look over your responses in the morning; thank you all, though.
December 4, 2006 at 11:39 pm
IVR or not IVR, set-based or not set-based, it's still hard to see the real problem. In your description you mention a single table. Who/what adds rows to it? I can see who has to update the table. Who/what deletes rows from the table, if at all? Give us a clue to the full dynamics, including the frequency with which this table will get hit.
Oh yes, what is meant by 'set based' as a solution to a problem which I don't completely understand?
December 5, 2006 at 6:38 am
What I was suggesting with a set-based approach was to process all rows with a single update statement. Or, if the operation was too big, break it into smaller transactions (similar to the problem where you need to delete 100 M rows from a 400 M row table without shutting down the application). This solution could use a single thread, or multiple threads if a much more complex operation had to be done on the client side.
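The usual batching pattern looks roughly like this (batch size and names are only illustrative):

-- Work through the table in chunks so no single transaction gets huge
SET ROWCOUNT 1000
WHILE 1 = 1
BEGIN
    BEGIN TRANSACTION

    UPDATE YourTable
    SET status = 1
    WHERE status = 0

    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT
        BREAK
    END

    COMMIT
END
SET ROWCOUNT 0  -- back to unlimited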
December 5, 2006 at 6:54 am
I have seen your and other people's comments regarding breaking up big operations. On the one hand I'm anxious to try this one day; on the other hand I hope I never have to deal with problems involving 400 M rows (although Netflix's 100M records are keeping me busy).
But if I understand Shawn's problem correctly, the simple table he is referring to should never get very big, because it contains outstanding requests which these 8 processes should be taking care of. And once a process has taken care of whatever it has to take care of, it should delete the corresponding row and possibly register some information in another table. Thus the table is a simple kind of queue manager of outstanding requests and should be kept small.
December 5, 2006 at 7:12 am
I keep rereading this thread and I still don't see it. But I guess it would make sense, and since I have 0 experience with queues I'll take a step back on this one.