October 23, 2003 at 2:45 am
We use pessimistic locking which is so embedded into the system that it is impossible to change now. This is achieved using a soft locking mechanism where we:
- Place the Customer ID into a supplementary CustomerLock table (it has a unique constraint on the field)
- If this fails, raise the error message to the user informing them that the record is already locked. Note that this happens rarely.
- Otherwise retrieve the customer details for the user.
- Once the user has saved the details and no longer requires the information we remove the entry from the CustomerLock table.
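In outline, the pattern looks something like this (a sketch only; the names and the Customer table are illustrative, not our actual schema):

-- Lock table: the unique constraint is what enforces "one lock per customer"
CREATE TABLE CustomerLock (
    CustomerID INT NOT NULL CONSTRAINT UQ_CustomerLock UNIQUE
)
GO

-- Acquire the lock; a second attempt on the same CustomerID violates the constraint
CREATE PROCEDURE AcquireCustomerLock @CustomerID INT AS
    INSERT INTO CustomerLock (CustomerID) VALUES (@CustomerID)
    IF @@ERROR <> 0
        RAISERROR ('Customer record is already locked by another user.', 16, 1)
    ELSE
        SELECT * FROM Customer WHERE CustomerID = @CustomerID
GO

-- Release the lock once the user has saved or abandoned the edit
CREATE PROCEDURE ReleaseCustomerLock @CustomerID INT AS
    DELETE FROM CustomerLock WHERE CustomerID = @CustomerID
GO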
This obviously creates a considerable bottleneck in our system that we would like to optimize. There are a number of issues to bear in mind before looking at a better solution:
1. We tried using an UPDATE to a supplementary table. This was far slower. The insert/delete works much better.
2. We cannot use HOLDLOCK.
3. We cannot switch to Optimistic Locking using Tstamps/Binary compare, etc because pessimistic locking is too embedded into our system.
One thing that was puzzling was that we tried creating 10 CustomerLock tables (i.e. CustomerLock1, CustomerLock2, etc.) and used the last digit of the CustomerID to decide which table to put the record into. We thought this would spread the load instead of funneling every lock through one table, but unfortunately it made no difference.
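The routing itself is trivial; for reference it can be sketched like this (illustrative only, assuming tables CustomerLock0 through CustomerLock9):

-- Route the lock row by the last digit of the CustomerID
DECLARE @CustomerID INT, @Sql NVARCHAR(200)
SET @CustomerID = 1047  -- example value
SET @Sql = N'INSERT INTO CustomerLock' + CAST(@CustomerID % 10 AS NVARCHAR(1))
         + N' (CustomerID) VALUES (@Id)'
EXEC sp_executesql @Sql, N'@Id INT', @Id = @CustomerID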
Would anyone have any ideas, please?
October 23, 2003 at 8:31 am
Have you tried adding a StatusID (Bit or Int) field to the Customer table (0 = Unlocked, 1 = Locked)?
Perhaps you could run a select from the Customer table where StatusID = 0, and if no row is returned then the application would know to tell the user that the record is locked.
You could Update the StatusID field as necessary...
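Roughly this (just a sketch, with the StatusID column and names assumed):

-- Check whether the record is free; no row back means it is locked
DECLARE @CustomerID INT
SET @CustomerID = 1047  -- example value

SELECT CustomerID FROM Customer
WHERE CustomerID = @CustomerID AND StatusID = 0

-- If it is free, flag it as locked...
UPDATE Customer SET StatusID = 1 WHERE CustomerID = @CustomerID

-- ...and flag it back to unlocked when the user has finished
UPDATE Customer SET StatusID = 0 WHERE CustomerID = @CustomerID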
Troy Tabor
Web Applications Project Manager
Choice Solutions LLC
October 23, 2003 at 8:51 am
Thank you, Troy. We tried this and it works in simple test situations but not in production, where many users attempt to access the same record. If two or more users attempt to open the record at the same time, the flag system doesn't always work: both can read StatusID = 0 before either of them sets it to 1, so both believe they hold the lock.
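A single statement that tests and sets the flag together would close that window (a hypothetical sketch, StatusID column assumed), but it still would not fit how deeply pessimistic locking is embedded in our code:

-- Atomic test-and-set: only succeeds if the record is currently unlocked
DECLARE @CustomerID INT
SET @CustomerID = 1047  -- example value

UPDATE Customer
SET StatusID = 1
WHERE CustomerID = @CustomerID AND StatusID = 0

IF @@ROWCOUNT = 0
    RAISERROR ('Customer record is already locked by another user.', 16, 1)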
October 24, 2003 at 2:38 am
Have you tried using transactions?
If you begin a transaction when you read the data and commit it or roll it back when the user has finished with the record, you can prevent other users from getting the record.
You can set up the transaction on an ADO.Connection. Not sure how this works with lots of users.
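In T-SQL terms that means holding a lock for the life of the transaction, for example with an UPDLOCK hint (a sketch only; note it keeps the transaction open for the whole time the user is editing):

-- The UPDLOCK hint keeps an update lock on the row until COMMIT/ROLLBACK,
-- so other sessions asking for the same row with UPDLOCK (or trying to update it) will wait
DECLARE @CustomerID INT
SET @CustomerID = 1047  -- example value

BEGIN TRANSACTION
SELECT * FROM Customer WITH (UPDLOCK) WHERE CustomerID = @CustomerID
-- ... user works on the record here ...
COMMIT TRANSACTION   -- or ROLLBACK TRANSACTION if the user cancels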
October 24, 2003 at 4:30 am
Your problem may be related to adjacent-key locking.
When you do an insert or delete, SQL locks the adjacent key as well as the record you are processing. This is part of the SQL Server index design, and is done to give a stable situation so that leaf page pointers can be updated, etc. The locks are released when the insert/delete transaction completes.
Obviously, keeping the lifespan of the insert/delete transaction as short as possible will reduce blocking of other transactions.
In your situation, it may be worth making the update of the CustomerLock table into a one-line transaction that can COMMIT very quickly, or even use Auto Commit. The main design point is to eliminate any other SQL activity from the transaction that inserts or deletes from the lock table.
This will not eliminate the adjacent key locking (you need DB2 to do this...), but it will reduce the timespan that blocking could take place.
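So, something like this, with nothing else sharing the transaction with the lock-table statement (a sketch only; a single INSERT issued outside any explicit transaction auto-commits and achieves the same thing):

DECLARE @CustomerID INT
SET @CustomerID = 1047  -- example value

-- Keep the lock-table insert in its own very short transaction
BEGIN TRANSACTION
INSERT INTO CustomerLock (CustomerID) VALUES (@CustomerID)
COMMIT TRANSACTION

-- The customer read then happens outside that transaction
SELECT * FROM Customer WHERE CustomerID = @CustomerID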
All information provided is a personal opinion that may not match reality.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
October 24, 2003 at 4:43 am
Thank you, Ed. That may well explain the poor performance.