lock a subset of a table from update or insert

  • I've got an application that needs to use data from a detail table to update another table.  Then the rows in the detail table need to be updated themselves.  But I need to make sure no new records are added that meet the criteria (the group by) in the detail table.  What I conceptually want to do is lock the detail table against update or insert based on criteria. For example:

    begin transaction

    lock all rows in tableA where vendor = xyz from update or insert

    Summarize data (read) in tableA where vendor = xyz

    update tableB

    update tableA where vendor = xyz

    commit transaction

    I was hoping someone could give me some ideas.  The key here is that I want to ensure no inserts are done to tableA for vendor xyz after the summarize is done until the transaction is completed.  Currently, the only way I know to ensure no inserts are done is to lock the entire table.

  • Is the purpose of locking the recordset in TableA to ensure that only the rows that have been summarized into TableB get updated or flagged in TableA, or is there more to it than that?

    John Rowan

    ======================================================
    ======================================================
    Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden

  • Define an ISOLATION LEVEL at the beginning of your stored procedure.

    An isolation level determines the degree of isolation of data between concurrent transactions. The default SQL Server isolation level is Read Committed. Here are the other isolation levels (in ascending order of isolation): Read Uncommitted, Read Committed, Repeatable Read, Serializable.

    Syntax:

    SET TRANSACTION ISOLATION LEVEL
        { READ UNCOMMITTED | READ COMMITTED
        | REPEATABLE READ | SERIALIZABLE }
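
    As a sketch of how this applies to the scenario in this thread (tableA and vendor come from the original post; everything else is illustrative): in SQL Server the statement affects the current session, and under SERIALIZABLE the locks taken by the read are held until the transaction ends.

    ```sql
    -- Raise the isolation level for this session, then do all the work
    -- inside one transaction so the locks are held until COMMIT.
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    BEGIN TRANSACTION;

    SELECT COUNT(*)          -- the "summarize" read
    FROM tableA
    WHERE vendor = 'xyz';

    -- ... update tableB and tableA here ...

    COMMIT TRANSACTION;

    -- Optionally return to the default afterwards.
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    ```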


  • Yes, the idea is that only the records that got summarized get updated, and that no newly inserted records matching the criteria slip in either.


  • What I don't understand is how a transaction isolation level will prevent inserts into tableA unless TableA is completely locked.  The best I can do on inserts is assume that inserts would be blocked if I locked index records with certain keys, but I don't know how to guarantee that.  I can lock records that already exist, but I've never tried to lock out inserts within a transaction.

    At this point I think we either have to rewrite the processing logic to be more involved, to ensure I'm only summarizing the data that has been updated (in other words, do the update first, then the summarization), or add some sort of locking mechanism built into the code so that when this process runs, the code that might do the inserts is blocked from executing.


  • If the problem is excluding new inserts, then I would store the IDs of the summarized rows from table A somewhere (in a temp table, for example) or use a status column Process (bit data type - Yes/No; at the start of the processing, mark all rows to be processed). Then add a condition that works with this, like:

    update tableA set ... where vendor = xyz AND Process = 1

    (probably followed by resetting Process to 0 so that the rows are not processed over and over... but that depends on your DB design... maybe it is just a staging table and rows are deleted when processed)

    or

    update A
    set ...
    from tableA A
    join #summarized s on s.ID = A.ID
    where A.vendor = xyz --this could be superfluous now that you have the temp table

    This way you can ensure that newly inserted rows will not match the criteria for the update.
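
    End to end, the temp-table idea might look roughly like the sketch below. This is illustrative only: the Amount and Total columns and the summarization itself are assumptions, not part of the original schema.

    ```sql
    BEGIN TRANSACTION;

    -- Capture the keys of the rows being summarized, so the later
    -- update touches exactly this set even if new rows arrive.
    SELECT ID
    INTO #summarized
    FROM tableA
    WHERE vendor = 'xyz';

    -- Summarize from the captured set and apply the result to tableB.
    UPDATE b
    SET b.Total = s.Total
    FROM tableB b
    JOIN (SELECT a.vendor, SUM(a.Amount) AS Total
          FROM tableA a
          JOIN #summarized k ON k.ID = a.ID
          GROUP BY a.vendor) s ON s.vendor = b.vendor;

    -- Reset the flag on only the rows that were actually summarized.
    UPDATE a
    SET a.Process = 0
    FROM tableA a
    JOIN #summarized k ON k.ID = a.ID;

    COMMIT TRANSACTION;
    ```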

  • I think locking is the only way to do exactly what you describe for all SQL statements. But the temp table solution might be better. If you do need to go down this road, and since you need to prevent phantoms (new records which appear only in the second - update - statement) as well as nonrepeatable reads (old records which appear only in the first statement), you can use the WITH (SERIALIZABLE) table hint.

    If
    a. every column from the table that is referenced by the WHERE clause (and isn't redundant) is the leading column of an index
    and
    b. the expression in which the column appears allows an index seek to be used,

    then this hint will cause key-range locks to be taken and held for the duration of the transaction. This will block inserts and updates which set a value in that range, as well as deletes and updates which would move a value out of that range.

    If some expression in the where clause is evaluable only by reference to a column on the table and if that column isn't indexed, then coarser-grained locks will be used to achieve the required level of isolation. The coarser-grained locks (which I believe would have to be table locks) will result in lower concurrency.

    begin transaction
    --lock key range in index A where vendor = xyz from insert
    select count(*) from tableA with(serializable)
    where vendor = 'xyz'
    update tableB
    update tableA where vendor = xyz
    commit transaction

    Tim Wilkinson

    "If it doesn't work in practice, you're using the wrong theory"
    - Immanuel Kant

  • Both Vladan and Tim have good suggestions.  The reason I asked my first question is because I had a couple of solutions in mind.  One being the temp-table solution that Vladan mentions: store your key values from TableA in a temp table and then, after updating TableB, update TableA with an INNER JOIN to the temp table.  A second solution would be to update TableB and then update the rows in TableA with an INNER JOIN to TableB WHERE TableB.<Column/s> = Summary Values.  Does this make sense? In other words, you already know what the summary values are because you have just updated TableB, so only update TableA rows WHERE TableB's summary rows have the new values in them.

    John Rowan

    ======================================================
    ======================================================
    Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden
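
    John's second suggestion might be sketched as below; SummaryTotal and @newSummaryValue are hypothetical names, not from the thread. Note that the join alone does not exclude rows inserted after the summary (a new xyz row would still match), so it still needs to be combined with the key list or locking discussed above.

    ```sql
    -- TableB has already been updated at this point. Flag only the
    -- TableA rows whose vendor's summary row now carries the new value.
    UPDATE a
    SET a.Process = 0
    FROM tableA a
    INNER JOIN tableB b ON b.vendor = a.vendor
    WHERE a.vendor = 'xyz'
      AND b.SummaryTotal = @newSummaryValue;
    ```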

  • Thank you for all the ideas. 

    I considered the locking based on an index, but I just find that a little too brittle.  If indexes are changed, the app might end up with data integrity errors, and that dependency is not as apparent as, say, dropping a foreign key would be.  I liked what it did; it was what I wanted.

    So, I was ready to go with a more elaborate update that involved saving keys, or changing the way updates were done and actually reading TableA more than once, but the developer insists that for this application, locking the table will be sufficient.  He doesn't want to go to the trouble of writing the code required to make it scalable because he believes this app doesn't require it.  If the table is busy and the exclusive lock can't be granted, the client trying to run this one process will time out.  I don't like the idea because it doesn't scale at all, but for this app it might be sufficient.  At least I know what the options are when it causes issues and has to be rewritten.


  • >If indexes are changed, then the app might have data integrity errors

    Not having a usable index will just result in table locks rather than key range locks. It won't decrease the isolation level.

    Tim Wilkinson

    "If it doesn't work in practice, you're using the wrong theory"
    - Immanuel Kant

  • If I understand the problem correctly... this might be a good job for a trigger.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
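
    As an illustration of the trigger approach Jeff suggests, the sketch below blocks inserts for a vendor while its summarization runs. The VendorLock control table and all names in it are invented for this example: the summarizing process would insert the vendor's row before reading and delete it after COMMIT.

    ```sql
    CREATE TABLE VendorLock (vendor varchar(20) PRIMARY KEY);
    GO
    CREATE TRIGGER trg_tableA_block_insert
    ON tableA
    AFTER INSERT
    AS
    BEGIN
        -- Reject the insert if any incoming row's vendor is currently locked.
        IF EXISTS (SELECT 1
                   FROM inserted i
                   JOIN VendorLock v ON v.vendor = i.vendor)
        BEGIN
            RAISERROR ('Inserts for this vendor are blocked during summarization.', 16, 1);
            ROLLBACK TRANSACTION;
        END
    END;
    ```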
