October 28, 2010 at 10:33 am
Michael Valentine Jones (10/28/2010)
Ninja's_RGR'us (10/28/2010)
Michael Valentine Jones (10/28/2010)
Ken McKelvey (10/28/2010)
While snapshot isolation can be very useful, you should google 'write skew anomalies', check your code and do a lot of testing.

SQL Server handles this the following way:
Understanding Row Versioning-Based Isolation Levels
http://msdn.microsoft.com/en-us/library/ms189050.aspx
"...
Transactions running under snapshot isolation take an optimistic approach to data modification by acquiring locks on data before performing the modification only to enforce constraints. Otherwise, locks are not acquired on data until the data is to be modified. When a data row meets the update criteria, the snapshot transaction verifies that the data row has not been modified by a concurrent transaction that committed after the snapshot transaction began. If the data row has been modified outside of the snapshot transaction, an update conflict occurs and the snapshot transaction is terminated. The update conflict is handled by the Database Engine and there is no way to disable the update conflict detection.
..."
So let's see if I got this straight: if I only do read-only work on the conflicting tables, I don't have to worry about 'write skew anomalies'?
I only have access to insert/update data in 3 tables dedicated to my application. Everything else is read-only and/or dealt with directly in MS Dynamics Nav.
'write skew anomalies' are only associated with update operations where two different transactions attempt to update the same row at the same time.
Awesome... that takes care of "my" problems. But what happens between 2+ connections from Nav that try to update the same table?
It sounds to me that if connection 1 takes a table lock, which currently stops everything from reading/writing those tables, we can still have write skew anomalies from connections 2 - 20! This is a very real scenario, as those updates usually take quite a few minutes to complete.
So that's basically an automatic no-go; it's bad enough to have dirty reads, but it's worse to have dirty data.
October 28, 2010 at 10:42 am
Ninja's_RGR'us (10/28/2010)
Awesome... that takes care of "my" problems. But what happens between 2+ connections from Nav that try to update the same table?
It sounds to me that if connection 1 takes a table lock, which currently stops everything from reading/writing those tables, we can still have write skew anomalies from connections 2 - 20! This is a very real scenario, as those updates usually take quite a few minutes to complete.
So that's basically an automatic no-go; it's bad enough to have dirty reads, but it's worse to have dirty data.
You can't have write skew anomalies, because SQL Server automatically prevents them, and that cannot be disabled. What you can have are transactions that fail due to an attempt to update the same data in concurrent transactions.
If you enable the database for SNAPSHOT isolation but do not enable READ_COMMITTED_SNAPSHOT, updates will operate as before and block each other. You can have your queries explicitly set SNAPSHOT isolation so that your reads are not blocked. Your data will be transactionally consistent as of the start of the transaction.
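For reference, a minimal T-SQL sketch of that setup (the database name `MyDb` and table `dbo.MyTable` are hypothetical placeholders):

```sql
-- One-time database setting: allows sessions to request SNAPSHOT isolation.
-- READ_COMMITTED_SNAPSHOT is deliberately left OFF, so sessions that do not
-- opt in keep the existing blocking behavior.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- In a session that should not be blocked by concurrent writers:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- Reads see a consistent snapshot as of the start of the transaction,
    -- even while other connections are updating the same tables.
    SELECT COL1, COL2 FROM dbo.MyTable;
COMMIT TRANSACTION;
```

If a SNAPSHOT transaction also tries to update a row that a concurrent transaction modified and committed after the snapshot began, it is rolled back with update-conflict error 3960, which is the conflict detection the MSDN quote above describes.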
October 28, 2010 at 10:53 am
Michael Valentine Jones (10/28/2010)
You can't have write skew anomalies, because SQL Server automatically prevents them, and that cannot be disabled. What you can have are transactions that fail due to an attempt to update the same data in concurrent transactions.
If you enable the database for SNAPSHOT isolation but do not enable READ_COMMITTED_SNAPSHOT, updates will operate as before and block each other. You can have your queries explicitly set SNAPSHOT isolation so that your reads are not blocked. Your data will be transactionally consistent as of the start of the transaction.
Awesome... that's exactly what I was about to ask :w00t:.
So I guess the only thing I need to figure out now is how much extra capacity I need to support this in tempdb...
How can I really figure out how much more to give tempdb? I'm thinking of replaying a trace, but then what do I need to check to reproduce it in prod?
We have a highly transactional system: millions of write queries daily (RBAR).
tempdb is not used for reporting or for highly complex queries with massive aggregations.
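One way to size that while replaying the trace is to watch the row version store, since that is what snapshot isolation adds to tempdb. A sketch using the standard DMVs (what thresholds are acceptable is up to you):

```sql
-- Version store footprint in tempdb, in MB (pages are 8 KB each).
SELECT
    SUM(version_store_reserved_page_count)    * 8 / 1024.0 AS version_store_mb,
    SUM(user_object_reserved_page_count)      * 8 / 1024.0 AS user_objects_mb,
    SUM(internal_object_reserved_page_count)  * 8 / 1024.0 AS internal_objects_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Longest-running snapshot transaction: long transactions keep old row
-- versions alive and prevent the version store from being cleaned up.
SELECT MAX(elapsed_time_seconds) AS longest_snapshot_tran_seconds
FROM sys.dm_tran_active_snapshot_database_transactions;
```

Sampling these periodically during the replay gives a peak version store size to provision for, which matters more here than raw processing power.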
October 29, 2010 at 1:30 am
The snapshot isolation level didn't work for the foreign key blocking, but we are still going to run some tests with it for the several other blocking situations where it might help.
The solution of adding a unique index on the ID (column a in my example) did work! So I'm keeping this index for the problems with the foreign key blocking.
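For anyone hitting the same foreign key blocking, the fix described above amounts to something like this (table name `dbo.Parent` is a hypothetical stand-in for the referenced table; `a` is the ID column from the example):

```sql
-- A narrow unique nonclustered index on the referenced column gives the
-- foreign key validation a small structure to seek into, so it does not
-- have to touch (and potentially block on) the wider clustered primary key.
CREATE UNIQUE NONCLUSTERED INDEX IX_Parent_a
    ON dbo.Parent (a);
```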
July 15, 2019 at 9:14 pm
How did you get your query plan to use the new unique nonclustered index instead of the primary key (which is also a unique index on column a)?