November 30, 2015 at 6:53 am
Hi all.
Long-time lurker here, but I've never plucked up the courage to ask a question until now. This should be a simple one for the experts on here...
Basically, my question is: precisely when, during a query's execution, are locks actually taken?
So if, for example, I have a reasonably sized table, say 100,000 rows (i.e. not one so small that the whole table will just be read for any query), and I run a simple SELECT that matches one row, like the example below, I would expect it to (briefly) take out a shared row lock and return a single row.
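(Something like this, with a made-up table and key, just to show the kind of query I mean:)

SELECT OrderTotal
FROM dbo.Orders
WHERE OrderID = 12345;  -- made-up names; assume a unique index on OrderID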
But how does SQL Server ensure that the row's data hasn't been changed by another concurrent session, in a way that would invalidate it against the SELECT's criteria, before the lock is taken?
Presumably it identifies a row as matching the query before locks are taken (otherwise it wouldn't know which rows to lock), but does it then check the row against the criteria a second time before returning it to the client, to make sure an update hasn't squeezed in between identifying the row as valid and locking it?
Or have I (most likely) missed something?
Thanks,
November 30, 2015 at 7:03 am
Rows are locked before being read. A row has to be read to see if it qualifies for a query predicate. If it does, then it's returned with no second read necessary.
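You can watch this happen. A minimal sketch, with made-up table and column names: under the default READ COMMITTED level the shared lock is released as soon as the row has been read, so this uses REPEATABLE READ purely to hold the lock long enough to look at it.

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

SELECT OrderTotal
FROM dbo.Orders
WHERE OrderID = 12345;  -- made-up names; assumes an index on OrderID, so a single-row seek

-- Locks held by this session: expect a single KEY lock in S mode
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

ROLLBACK TRANSACTION;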
This is why a query returning a single row can end up locking the entire table if there's no useful index, because every row in the table had to be read to see if it qualifies, which means every row had to be locked, and the efficient way to do that is with a table lock.
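The same sketch against a column with no index shows the scan case. Every row is read, so every row is locked; once enough locks accumulate (the threshold is roughly 5,000), SQL Server will typically escalate them to a single table-level (OBJECT) lock rather than tracking 100,000 row locks.

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

SELECT OrderTotal
FROM dbo.Orders
WHERE CustomerName = N'only one row matches';  -- made-up names; no index on CustomerName, so a full scan

-- Count the locks by type: expect either thousands of KEY/PAGE locks
-- or, after escalation, one OBJECT lock in S mode
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode;

ROLLBACK TRANSACTION;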
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability