January 3, 2012 at 4:03 am
Gianluca Sartori (1/3/2012)
Very popular in Oracle too (SELECT ... FOR UPDATE). Often devs do this in PL/SQL code used in forms and hold the lock until the form gets closed. :sick:
I know little of this Oracle product you mention, but does it also require an enclosing transaction to give scope to the update lock? If so, the developers are displaying a form for user input inside a transaction? Really?
January 3, 2012 at 4:28 am
SQL Kiwi (1/3/2012)
Gianluca Sartori (1/3/2012)
Very popular in Oracle too (SELECT ... FOR UPDATE). Often devs do this in PL/SQL code used in forms and hold the lock until the form gets closed. :sick:
I know little of this Oracle product you mention, but does it also require an enclosing transaction to give scope to the update lock? If so, the developers are displaying a form for user input inside a transaction? Really?
I have seen that as well.
-- Gianluca Sartori
January 3, 2012 at 2:05 pm
I've read the whole thread. Very interesting discussion.
But I don't understand the claim that "Optimistic Locking" has no overhead.
What about all those rollback/undo errors in Oracle? Oh those fun days :w00t:
I personally don't miss Oracle compared to SQL Server 🙂
Keeping a linked list of all prior (needed) versions of a table row must inherently have some overhead ... right? And that's about the (only) way to provide a consistent row view as of the start time of the reading transaction ... right? One really long-running read transaction can put a big strain on the instance (or "database", in the Oracle, not SQL Server, sense of that word).
I agree that RCSI is a valuable additional option for locking/concurrency, but I don't like the fact that it is the only option in Oracle. Surely there must be times when a pessimistic locking model works better and is actually needed ... ?
[And having to code everything as a cursor for Oracle gets old, although of course there is no overhead with Oracle "cursors" compared to SQL Server "cursors".]
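As a rough SQL Server analogue of that Oracle undo pressure, a sketch like this (assuming a row-versioning isolation level is enabled on the database) shows how much version data is sitting in tempdb and which snapshot transactions are holding it there:

-- One row per versioned record currently held in the tempdb version store.
SELECT COUNT(*) AS version_rows,
       SUM(record_length_first_part_in_bytes
           + record_length_second_part_in_bytes) / 1024.0 AS approx_kb
FROM sys.dm_tran_version_store;

-- The longest-running snapshot transactions, which prevent version cleanup.
SELECT TOP (5) transaction_id, session_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions
ORDER BY elapsed_time_seconds DESC;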
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
January 3, 2012 at 7:26 pm
SQL Kiwi (1/3/2012)
Michael Valentine Jones (12/29/2011)
This code shows a simple way to eliminate those problems using a rowversion column in the table to make sure it has not changed since it was read and to eliminate the need to hold locks to prevent lost updates.
Another popular solution is to wrap the read and write parts in a transaction, and take an UPDLOCK (indicating the intention to update the row later) on the read.
I would really stay away from this. Although SQL Server may honor the UPDLOCK, there is no promise that it will not escalate to a higher level lock, for example a table lock.
As for holding a transaction open waiting for user input, it ought to be obvious to everyone what a bad, bad idea that is.
Locks at whatever level are meant to "briefly" block other processes that are trying to read/update/delete the same row/page/extent/table, not to serve as application level logical locks.
My example works OK in a situation where someone pulls up a value on a screen, looks at it, goes to lunch and comes back, and then decides to update it. As long as no one has changed it already, everything works fine. If it has changed, then they can get an error message and try again. If there is really a need to lock data while waiting for user input, then that should be handled logically within the application, not by misusing a database level lock.
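For anyone who wants the shape of it, here is a minimal sketch of that rowversion pattern (dbo.Inventory and its columns are made-up names):

CREATE TABLE dbo.Inventory
(
    ItemID   int        NOT NULL PRIMARY KEY,
    Quantity int        NOT NULL,
    RV       rowversion NOT NULL
);
INSERT dbo.Inventory (ItemID, Quantity) VALUES (42, 10);

-- 1. Read the row and remember its version; no locks are held afterwards.
DECLARE @rv binary(8), @qty int;
SELECT @qty = Quantity, @rv = RV
FROM dbo.Inventory
WHERE ItemID = 42;

-- ... user looks at the screen, goes to lunch, comes back ...

-- 2. Update only if the row is unchanged since the read.
UPDATE dbo.Inventory
SET Quantity = @qty - 1
WHERE ItemID = 42
  AND RV = @rv;   -- version check

IF @@ROWCOUNT = 0
    RAISERROR('Row was changed by another user; re-read and try again.', 16, 1);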
January 3, 2012 at 7:37 pm
ScottPletcher (1/3/2012)
I've read the whole thread. Very interesting discussion.
But I don't understand the claim that "Optimistic Locking" has no overhead.
What about all those rollback/undo errors in Oracle? Oh those fun days :w00t:
I personally don't miss Oracle compared to SQL Server 🙂
Keeping a linked list of all prior (needed) versions of a table row must inherently have some overhead ... right? And that's about the (only) way to provide a consistent row view as of the start time of the reading transaction ... right? One really long-running read transaction can put a big strain on the instance (or "database", in the Oracle, not SQL Server, sense of that word).
I agree that RCSI is a valuable additional option for locking/concurrency, but I don't like the fact that it is the only option in Oracle. Surely there must be times when a pessimistic locking model works better and is actually needed ... ?
[And having to code everything as a cursor for Oracle gets old, although of course there is no overhead with Oracle "cursors" compared to SQL Server "cursors".]
I don't think anyone has claimed that "Optimistic Locking" has no overhead. And I don't think that anyone has claimed that traditional blocking has no overhead or disadvantages.
My personal feeling is that row versioning solves a lot of problems, especially with blocking of reporting queries and especially with deadlocking. I have never had bad results using it, and have made it more or less a standard for our databases.
Opinions vary, obviously.
January 3, 2012 at 7:39 pm
Kalen Delaney has a chart in her book "SQL Server 2008 Internals" that shows the differences between the isolation levels. I prefer snapshot isolation over read_committed_snapshot isolation.
On many of my 3rd-party databases, I have already set them to snapshot isolation.
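For reference, the two options are enabled like this (MyDatabase is a placeholder name):

-- Snapshot isolation: versioning is available, but only to sessions that ask for it.
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Read committed snapshot: changes the behaviour of the default READ COMMITTED
-- level for every session, with no application code changes.
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

-- Snapshot isolation must then be requested explicitly per transaction:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Inventory;  -- consistent as of transaction start
COMMIT;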
See my SQLSaturday#57 presentation.
Jason
http://dbace.us
😛
January 3, 2012 at 8:45 pm
Michael Valentine Jones (1/3/2012)
SQL Kiwi (1/3/2012)
Michael Valentine Jones (12/29/2011)
This code shows a simple way to eliminate those problems using a rowversion column in the table to make sure it has not changed since it was read and to eliminate the need to hold locks to prevent lost updates.
Another popular solution is to wrap the read and write parts in a transaction, and take an UPDLOCK (indicating the intention to update the row later) on the read.
I would really stay away from this. Although SQL Server may honor the UPDLOCK, there is no promise that it will not escalate to a higher level lock, for example a table lock.
Let me be clear that I am not advocating use of the UPDLOCK pattern when there might be a significant delay between reading and updating the data. So, leaving aside bizarre scenarios like going for lunch between reading and updating, using UPDLOCK to effectively serialize changes while not blocking concurrent readers is perfectly sound.

Of course, we would normally write the whole requirement in a single UPDATE statement (in which case there is an implicit transaction, and the engine automatically takes the U locks on the read cursor access method for us, in order to minimize deadlocking), but there might be cases where it is necessary, or convenient, to split the read from the write, and UPDLOCK works well there - again, I emphasise that the transaction should be short. Proper use of the UPDLOCK pattern is nothing more than an explicit extension of the engine's behaviour with a single UPDATE statement.
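A minimal sketch of the short read-then-write form of the pattern (dbo.Inventory is the hypothetical table from earlier in the thread):

BEGIN TRANSACTION;

    -- The U lock blocks a second writer running the same code,
    -- while ordinary readers are unaffected; it is held to the end
    -- of the transaction, which is why the transaction must be short.
    DECLARE @qty int;
    SELECT @qty = Quantity
    FROM dbo.Inventory WITH (UPDLOCK)
    WHERE ItemID = 42;

    -- ... brief business-rule checks here, never user interaction ...

    UPDATE dbo.Inventory
    SET Quantity = @qty - 1
    WHERE ItemID = 42;

COMMIT TRANSACTION;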
The rowversion design is not perfect either: it adds 8 bytes to every row, adds overhead to every operation that changes the stored rowversion value, and can complicate indexing. Also (taking the inventory example) the update would fail even if the quantity had *increased*, meaning we would have to go back and perform the read again. And because rowversion is per-row, a change to any other attribute on the row would cause the rowversion check to fail, even though the quantity column was unaffected.
For sure there will be scenarios where either technique is more appropriate, assuming a row-versioning isolation level is not available for whatever reason.
edit: I suppose I should also state for the record that UPDLOCK is *always* honoured (it's an instruction, rather than a hint, in contrast to some others like ROWLOCK) and that possible escalation to a partition or table lock will just follow the usual rules, meaning it is extremely unlikely for singleton updates as in the working example.
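It is easy to verify that for yourself with something like this (again using the hypothetical inventory table):

BEGIN TRANSACTION;

    SELECT Quantity
    FROM dbo.Inventory WITH (UPDLOCK)
    WHERE ItemID = 42;

    -- Expect a KEY (or RID) lock with request_mode = 'U' for this session,
    -- still held even though the SELECT has completed.
    SELECT resource_type, request_mode, request_status
    FROM sys.dm_tran_locks
    WHERE request_session_id = @@SPID
      AND request_mode = 'U';

ROLLBACK TRANSACTION;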
January 3, 2012 at 8:51 pm
jswong05 (1/3/2012)
Kalen Delaney has a chart in her book "SQL Server 2008 Internals" that shows the differences between the isolation levels. I prefer snapshot isolation over read_committed_snapshot isolation.
On many of my 3rd-party databases, I have already set them to snapshot isolation.
See my SQLSaturday#57 presentation.
Having read that book, you are no doubt aware of the significant drawbacks to using snapshot isolation (SI), and the reasons that RCSI is preferred over SI in the majority of real-world situations. As Kalen says on page 655, "In most situations, RCSI is recommended over Snapshot Isolation...". I wasn't at SQL Saturday 57, so I have no idea what you might or might not have had to say on the subject. No doubt SI works well for you, but without further details, it is hard to see how your post contributes much to the present discussion.
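The best-known of those drawbacks is the write-write conflict. A sketch, using two sessions and the hypothetical inventory table:

-- Session 1:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Quantity FROM dbo.Inventory WHERE ItemID = 42;

-- Session 2 meanwhile commits:
--   UPDATE dbo.Inventory SET Quantity = Quantity - 1 WHERE ItemID = 42;

-- Session 1 now tries to write the same row and is rolled back with
-- error 3960 ("Snapshot isolation transaction aborted due to update conflict"):
UPDATE dbo.Inventory SET Quantity = Quantity + 1 WHERE ItemID = 42;
COMMIT;

Under RCSI the second writer would simply wait for the first and then proceed, which is one reason RCSI tends to be the easier retrofit for existing applications.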
January 4, 2012 at 2:59 pm
David Moutray (12/27/2011)
I am trying to convince my managers to allow me to enable READ_COMMITTED_SNAPSHOT on the databases for our production data warehouse. I have two questions for you, my esteemed colleagues:
David, RCSI / SI is absolutely the right way to go. Getting there from your current state is where the hard work is. If it's as big a mess as you seem to suggest then it may actually be easier to start from scratch. As you've already realised, just turning on RCSI isn't going to do much for you.
In the past, it was common enough to create data warehouses around daily (nightly) load intervals where it was assumed that no-one used the data during loads. Today, it is less acceptable to make users wait 24 hours for their data. If the required latency is measured in minutes or just a few hours, or if you have a global user base where usage follows the sun then there is no "load window". In my opinion therefore, RCSI should be the default assumption for a data warehouse in SQL Server. For new projects, even if the requirement allows for a load window, I would not wish to bake that requirement in as a fundamental limitation of the design. Instead, assume that the data will need to be both consistent and available and architect the solution on that basis.
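One practical note if you do flip an existing database over: the ALTER needs a moment of exclusive access, so either run it in a quiet window or force other connections off (the database name below is a placeholder):

ALTER DATABASE MyWarehouse
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;   -- rolls back any in-flight transactions

-- Confirm the setting took effect:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'MyWarehouse';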
January 4, 2012 at 7:16 pm
Starting from scratch ... that ... might be, ah ... difficult to sell to management.
The really, really bad queries I can rewrite (eventually). Refactoring is more practical than starting over.
I guess my main challenge is reprogramming the programmers. Every time I point out a query with performance problems, I get this knee-jerk response, "But it works".
Yeah, yeah, I know. Whatever you were about to say, I agree with you. My point is that this is a cultural issue. (Thank God it isn't a religious issue. Uh ... that came out weird.)
In defense of my co-workers, they are very, very busy. They work on a business-critical application. The business is very, very successful, and this application is a large part of that success. The business has grown tremendously and the business needs change rapidly.
In other words, there are valid reasons that things are in their current state. I would just like to help take my team to a new level.
Wow. Is it just me, or has this been a really, really active discussion!
January 5, 2012 at 6:16 am
It's been a good discussion and a busy one.
Sounds like you have a plan for moving forwards. I hope we were able to help in that.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon