August 29, 2012 at 6:58 am
....
IF YOU DON'T FEED A TROLL EVENTUALLY IT WILL STARVE TO DEATH AND GO AWAY! LET THIS GUY BE WRONG AND MOVE ON! ALL HE IS DOING IS WASTING YOUR TIME!
+100000000000
However, this thread is almost like a thriller. I would join the man who offered to sell popcorn to the viewers here; his venture is most likely to succeed!
Ice-cream or drinks anyone?
It will continue...
And, just to add fuel to the fire, Sergiy, the non-update updates do not make changes
to the mdf files and do not perform physical disk I/O in all cases except the ones mentioned by previous posters (e.g. a clustering-key non-update update). The local gurus explained why very well, but your failure to understand has nothing to do with them. I believe your style of arguing is very common where you come from (and, to be honest, I sometimes fall into that style myself), but it's considered rude everywhere, regardless of culture.
Good luck! I'm looking forward to the next episode... :hehe:
August 29, 2012 at 7:08 am
Sergiy (8/28/2012)
So, when exactly is an exclusive (X) lock taken on a row? Why are X locks taken on rows when IX would be perfectly enough in a situation where no actual writing is happening? Why does the server use IX at the page level (pages are possibly changed by the update, but we don't know for sure yet) but refuse to do the same at the row level?
SQL Server can't take row-level IX locks because they don't exist. Intent locks exist to indicate that other locks exist further down the lock hierarchy. Since row-level is the lowest level of that hierarchy, what would an IX row lock mean?
The real question I think you are asking is why SQL Server bothers taking exclusive locks at all, if the data isn't going to change (and remember the exclusive lock will not necessarily be a row lock, X locks can be placed on pages and tables too). You have to remember that locking is an implementation detail; the goal is to implement the semantics guaranteed by whatever transaction isolation level is in effect at the time. Which locks get taken and when is totally up to SQL Server, so long as the resulting behaviour matches that expected by the isolation level in all possible circumstances.
This goes back to what Clare was saying earlier about 'logical updates'. If transaction A logically updates a row (even if the value does not change) the expected behaviour is that transaction B cannot read that data (under the default isolation level, READ COMMITTED) until transaction A commits or rolls back. If SQL Server did not take a lock (because the value isn't changing), transaction B would not block as it should. SQL Server contains many performance optimizations, but they must always preserve technically correct behaviour.
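To make that concrete, here is a minimal two-session sketch (the dbo.Demo table, its Id key, and its Value column are invented for illustration):

-- Session 1: a non-updating update still takes and holds locks
BEGIN TRANSACTION;
UPDATE dbo.Demo SET Value = Value WHERE Id = 1;  -- the value does not change
-- leave the transaction open for a moment

-- Session 2 (separate connection, default READ COMMITTED):
SELECT Value FROM dbo.Demo WHERE Id = 1;  -- blocks, waiting on session 1's X lock

-- Session 1:
COMMIT;  -- session 2's SELECT now completes

If SQL Server skipped the lock just because the value was unchanged, session 2 would read immediately, which would break the READ COMMITTED guarantee.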
P.S. Respectfully, to everyone shouting at Sergiy: he may have an apparently combative style, but I'm happy to continue to answer questions as best I can. The dog-pile isn't helping anyone very much really. Thank you.
August 29, 2012 at 9:42 am
...
P.S. Respectfully, to everyone shouting at Sergiy: he may have an apparently combative style...
I wouldn't call it shouting as such. But exactly that style is what makes this thread a real thriller!
A bit of blood and some AK-47 pictures would make it even more exciting!
BTW, reading some interesting discussion and details about locking mechanisms of SQL server makes this movie worth watching... :hehe:
August 29, 2012 at 10:07 am
I haven’t seen anything where Microsoft promises that the discussed behavior will always be observed, now or in the future.
I haven’t seen anything that proves that the discussed behavior will happen in all cases. In other words, there may be specific conditions where it does not happen that way, and I don’t see how anyone can prove otherwise without access to the storage engine source code.
In light of that, it makes a lot of sense to always code your update statements with conditions that verify that the rows actually need to be updated, especially since there is other overhead, such as transaction log records and update triggers firing.
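For example, the check can live in the statement itself (a rough sketch with invented names; note the <> test needs extra care if the column is nullable):

DECLARE @CustomerId int = 1, @NewStatus varchar(20) = 'Active';

UPDATE dbo.Customer
SET    Status = @NewStatus
WHERE  CustomerId = @CustomerId
AND    Status <> @NewStatus;  -- no row is written or logged when the value already matches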
August 29, 2012 at 11:29 am
...
In light of that, it makes a lot of sense to always code your update statements with conditions that verify that the rows actually need to be updated, especially since there is other overhead, like inserting into the transaction log and update triggers firing.
I would probably agree with that, with one small addition: such verification should be made in the client app, not in the update statement itself; otherwise you may add performance overhead when checking values of non-indexed columns. This kind of verification is what NHibernate and other ORM solutions do.
August 29, 2012 at 1:29 pm
Sergiy (8/28/2012)
Should I say - you turned it that way?
In my opinion it happened long before my involvement. I tried to bring attention to it while clarifying your confusion between the storage components and the query processor, and it went downhill from there.
Due to Paul's patience and request, I'll refrain from continuing the majority of this discussion here. Sergiy, if you wish, we can continue this in PMs, but I'd like to address two points before I go. You're welcome to the last word.
Or do you consider it allowed for the ones you personally admire?
The people I admire here won't be throwing around comments about stupidity.
You forget about fragmentation cases.
Change the name column definition to varchar(8000).
Add 2 more columns. I used VARCHAR(8000) and VARBINARY(8000) and populated them with REVERSE(Name) and CONVERT(VARBINARY(8000), Name).
After completing several updates you'll find out that a single row update locks 2 or 3 pages with an IX lock.
You're talking about row-overflow allocation pages, which were introduced in SQL Server 2005. For the curious: http://msdn.microsoft.com/en-us/library/ms186981(v=sql.105).aspx
Columns that can be treated as LOB data (varchar, nvarchar, varbinary, and sql_variant) can overflow the row like LOBs. I haven't done enough research via DBCC PAGE to determine whether these pages are shared between rows, but I believe they are. With that in mind, you're correct: a row can span multiple pages, treated similarly to LOB data, as long as these columns aren't part of the clustered index.
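For anyone who wants to see it, here is roughly the repro described above (a sketch with assumed names; the exact pages locked will vary by version and data):

CREATE TABLE dbo.OverflowDemo
(
    Id   int IDENTITY PRIMARY KEY,
    Name varchar(8000),
    Rev  varchar(8000),
    Bin  varbinary(8000)
);
INSERT dbo.OverflowDemo (Name) VALUES (REPLICATE('x', 5000));
UPDATE dbo.OverflowDemo
SET    Rev = REVERSE(Name),
       Bin = CONVERT(varbinary(8000), Name);  -- the row now exceeds 8,060 bytes, so columns move off-row

BEGIN TRANSACTION;
UPDATE dbo.OverflowDemo SET Name = Name WHERE Id = 1;
SELECT resource_type, request_mode   -- expect page-level IX on more than one page
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;
ROLLBACK;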
My apologies for the misunderstanding of what you were trying to present with that comment.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
September 6, 2012 at 9:01 pm
SQL Kiwi (8/29/2012)
SQL Server can't take row-level IX locks because they don't exist. Intent locks exist to indicate that other locks exist further down the lock hierarchy. Since row-level is the lowest level of that hierarchy, what would an IX row lock mean?
That must be done like that for a reason.
The only reason I can see is that there is no "possible writing" at the row level.
A row is either being read (S lock) or is being written (X lock).
And no MS document about locking allows a double reading:
X locks are applied when data is being changed.
The real question I think you are asking is why SQL Server bothers taking exclusive locks at all, if the data isn't going to change (and remember the exclusive lock will not necessarily be a row lock, X locks can be placed on pages and tables too). You have to remember that locking is an implementation detail; the goal is to implement the semantics guaranteed by whatever transaction isolation level is in effect at the time. Which locks get taken and when is totally up to SQL Server, so long as the resulting behaviour matches that expected by the isolation level in all possible circumstances.
"I've got no reasonable explanation" would be much shorter
This goes back to what Clare was saying earlier about 'logical updates'. If transaction A logically updates a row (even if the value does not change) the expected behaviour is that transaction B cannot read that data (under the default isolation level, READ COMMITTED) until transaction A commits or rolls back. If SQL Server did not take a lock (because the value isn't changing), transaction B would not block as it should. SQL Server contains many performance optimizations, but they must always preserve technically correct behaviour.
A shared lock prevents a record from being updated just as successfully as an X lock does.
S locks are applied when a query checks whether values are actually being changed.
According to your explanation, the events happening in the background are the same in both cases, at least in the case of a BINARY value update.
So, "technically correct behaviour" is preserved successfully by both types of locks.
But for some reason SQL Server prefers the much more expensive X lock for "non-updating updates", when it's perfectly OK with S locks for WHERE Value <> @Value queries.
Why would that be?
_____________
Code for TallyGenerator
September 6, 2012 at 9:22 pm
Eugene Elutin (8/29/2012)
I would probably agree on that with one small addition: such verification should be made in a client app not in the update statement itself, otherwise you may add performance overhead when checking values of non-indexed columns. So, this kind of verification is done by n-hibernate and other ORM solutions.
By the time the client app has figured out the situation and comes back to the DB with a certain UPDATE query, the situation may already have been changed by another query invoked from another session of the same or even a different application.
So, verifications in client apps don't make much sense, unless you open and close explicit transactions from the app.
But then you run into a whole lot of issues caused by lost connections, delayed responses, hanging client apps, etc., which cause transactions to stay open and lock resources for an indefinite time.
I can tell you more.
In properly designed systems, UPDATE queries actually check whether the value was changed since it was read into memory, and return a request for conflict resolution if such an unfortunate event has happened.
That's why using procedures instead of direct queries makes sense not only for security but for logical reasons as well.
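Something like this (a rough sketch, all names invented):

CREATE PROCEDURE dbo.UpdateCustomerName
    @CustomerId int,
    @OldName    varchar(100),
    @NewName    varchar(100)
AS
BEGIN
    UPDATE dbo.Customer
    SET    Name = @NewName
    WHERE  CustomerId = @CustomerId
    AND    Name = @OldName;   -- only if nobody changed it since it was read

    IF @@ROWCOUNT = 0
        RAISERROR('Conflict: the row was changed by another session.', 16, 1);
END;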
P.S. It also makes a lot of sense not to use UPDATE at all. Because it's a pure evil creature.
But, as I understand, you're an "ETL guy", so please don't kill me!!!
Just yet. Until the fun is over....
🙂
_____________
Code for TallyGenerator
September 6, 2012 at 10:05 pm
Sergiy (9/6/2012)
S locks are applied when a query checks whether values are actually being changed.
Not necessarily. Locking is an implementation detail. SQL Server can decide for itself what locks to take and when. I wrote a separate post about this: http://bit.ly/MissingSharedLocks
Why would that be?
Taking a shared lock would not give the same behaviour as taking an exclusive lock.
September 6, 2012 at 10:31 pm
Sergiy (9/6/2012)
In properly designed systems, UPDATE queries actually check whether the value was changed since it was read into memory, and return a request for conflict resolution if such an unfortunate event has happened. That's why using procedures instead of direct queries makes sense not only for security but for logical reasons as well.
Most systems I have worked with that require high-concurrency UPDATEs have used a rowversion column or the snapshot isolation level. Neither of these require the use of procedures.
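For example, a minimal rowversion sketch (table and names invented for illustration):

CREATE TABLE dbo.Account
(
    AccountId int PRIMARY KEY,
    Balance   money,
    RowVer    rowversion   -- maintained automatically on every modification
);
INSERT dbo.Account (AccountId, Balance) VALUES (1, 100);

DECLARE @Balance money, @RowVerWhenRead binary(8);
SELECT @Balance = Balance, @RowVerWhenRead = RowVer
FROM   dbo.Account WHERE AccountId = 1;

-- ...time passes; another session may or may not touch the row...

UPDATE dbo.Account
SET    Balance = @Balance + 10
WHERE  AccountId = 1
AND    RowVer = @RowVerWhenRead;  -- succeeds only if the row is unchanged

IF @@ROWCOUNT = 0
    PRINT 'Row changed since it was read; resolve the conflict.';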
P.S. It also makes a lot of sense not to use UPDATE at all. Because it's a pure evil creature.
What do you mean by this?
September 6, 2012 at 11:00 pm
SQL Kiwi (9/6/2012)
Taking a shared lock would not give the same behaviour as taking an exclusive lock.
Can you bring some details, please?
_____________
Code for TallyGenerator
September 6, 2012 at 11:03 pm
SQL Kiwi (9/6/2012)
What do you mean by this?
By updating you're losing data.
You might think you don't need it anymore, but still - you're losing data previously recorded in the database.
Which is pure evil behaviour for any "data keeper", which we are all supposed to be.
🙂
_____________
Code for TallyGenerator
September 6, 2012 at 11:06 pm
Sergiy (9/6/2012)
SQL Kiwi (9/6/2012)
Taking a shared lock would not give the same behaviour as taking an exclusive lock. Can you bring some details, please?
The two lock types are different, and so have different effects. An exclusive lock might block concurrent reads, for example.
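One way to see it (a sketch, reusing the invented dbo.Demo table from the earlier example):

BEGIN TRANSACTION;
UPDATE dbo.Demo SET Value = Value WHERE Id = 1;  -- no change, but an X lock is taken and held
SELECT resource_type, request_mode, request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;              -- X on the row/key, IX further up the hierarchy
ROLLBACK;

Under READ COMMITTED, a plain SELECT takes only short-lived S locks, so concurrent readers never block each other; the held X lock blocks both readers and writers until the transaction ends.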
September 6, 2012 at 11:08 pm
SQL Kiwi (9/6/2012)
Sergiy (9/6/2012)
SQL Kiwi (9/6/2012)
Taking a shared lock would not give the same behaviour as taking an exclusive lock. Can you bring some details, please?
The two lock types are different, and so have different effects. An exclusive lock might block concurrent reads, for example.
But why do you need to block concurrent reads if the data is not actually being changed?
_____________
Code for TallyGenerator
September 6, 2012 at 11:13 pm
Sergiy (9/6/2012)
SQL Kiwi (9/6/2012)
What do you mean by this? By updating you're losing data. You might think you don't need it anymore, but still - you're losing data previously recorded in the database.
I look forward to Microsoft following your thinking and removing the UPDATE command from a future version then 😀