March 28, 2003 at 12:08 pm
I am having a problem preventing Exclusive Page Locking during single-row INSERTs, UPDATEs, & DELETEs. I am using the WITH (ROWLOCK) hint in all my INSERT, UPDATE, & DELETE statements. The code runs through a VB6 application using ADO and COM+/DTC to manage transactions. Each ADO Connection first runs a "SET TRANSACTION ISOLATION LEVEL READ COMMITTED" statement before running the Stored Procedure. Every table has a Clustered Primary Key (which is also an IDENTITY INT column). Most tables have one Unique NonClustered Constraint.
Is there another way to prevent Exclusive Page Locking? I would expect Exclusive Row locking for the Rows that I am modifying. The transactions are sometimes quite large consisting of 2500+ individual INSERTs, UPDATEs, & DELETEs in one big COM+ transaction (with about 95% being INSERTs).
Environment: SQL Server Box - SQL Server 2000 SP3, Windows 2000 SP3. COM+ Server - Windows 2000 SP3. Local clients - Windows 2000 Pro SP3. All have MDAC 2.7.
Any help and/or advice would be greatly appreciated. Let me know if you need more info.
Thanks!
Mike
March 28, 2003 at 1:05 pm
If the update, insert, delete statements have an explicit where clause, such as where identityfield = 1, then you could expect row locking. If, on the other hand, you are doing something like Where identityfield between 1 and 1000, then I would expect page locking.
If you're wrapping a lot of statements within a larger transaction, your locking can escalate as well: many row locks that would have sat on a single page get traded for one page lock....
And finally, if most of these are INSERT statements and your clustered index is on a naturally incrementing field, then all of these inserts are happening on the last data page, and the locks will escalate to page locks because of that...
The information provided is good, but the most important part (the actual statements you're running) has been omitted. How are your statements structured? i.e., what does the WHERE clause show....
Edited by - scorpion_66 on 03/28/2003 1:11:02 PM
March 28, 2003 at 1:10 pm
All UPDATEs & DELETEs have a WHERE clause which explicitly uses the Identity/Primary Key to identify the record. Also, all INSERTs, UPDATEs, & DELETEs only modify one record at a time...there are no multi-row INSERTs, UPDATEs, & DELETEs.
Example:
UPDATE Customers
SET CustomerName = @Param2,
CustomerType = @Param3
WHERE CustomerID = @Param1 (which is a CustomerID)
Note: The CustomerID field is an IDENTITY field and the Clustered Primary Key. The 3 parameters come in from the stored procedure.
My guess is that SQL Server is using Page Locks for the INSERTs instead of Row Locks even though I specify the WITH (ROWLOCK) table hint.
Isn't one of the points of using locking table hints to prevent SQL Server from using automatic lock escalation in those statements? Unless the COM+ transaction is overriding this and forcing SQL Server to escalate...
Edited by - Michael Fried on 03/28/2003 1:20:29 PM
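A quick way to check what grain of lock the engine is actually taking is to inspect sp_lock from a second Query Analyzer session while the transaction is still open. This is only a diagnostic sketch: the parameter values and the spid (51 here) are illustrative, so substitute your own.

```sql
-- Session 1: start the transaction but do not commit yet
BEGIN TRAN
UPDATE Customers WITH (ROWLOCK)
SET CustomerName = 'Test',
    CustomerType = 3
WHERE CustomerID = 42
-- (leave the transaction open)

-- Session 2: inspect the locks held by session 1 (replace 51 with its spid)
EXEC sp_lock 51
-- KEY entries mean row/key-level locks; PAG or TAB entries mean the
-- engine has chosen (or escalated to) a coarser grain
```

If the Mode/Type columns show KEY X locks, the ROWLOCK hint is being honored and the blocking comes from somewhere else.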
March 28, 2003 at 1:18 pm
When you said "one big COM+ transaction", does that not mean a single transaction? Which means that even though the statements are single statements, they are not committed until the whole transaction is committed, which means that the locks are held for each row, and will escalate to a coarser-grained lock when the row threshold is broken...
hence, page locks.....and could even escalate to table locks if the percentage of modifications becomes high enough...
March 28, 2003 at 1:24 pm
There you go.....
Your CustomerID field is both an identity and the clustered index. This is a bad practice for table design. When you create a clustered index on a naturally incrementing field, all inserts happen on the last page until it's full, then it splits, and the same thing happens over and over. The locking this causes is a concurrency killer. Also, SQL Server knows that if all the rows on a page will be locked, it can spend 96 bytes per row to do the locking, or a single 96-byte expense for the page lock. There is also the overhead of keeping track of all those locks to account for.
April 2, 2003 at 7:10 am
"When you create a clustered index on a naturally incrementing field, all inserts happen on the last page, until it's full, then it splits, and the same thing happens over and over."
Is the suggestion here to cluster on a more natural key when identities are in use?
April 2, 2003 at 9:56 am
Absolutely. This will distribute the actual inserts across the table rather than causing them to all hit the last page, and increase concurrency in your inserts as well.
If your inserts are distributed, and your fill factor accommodates the amount of inserts happening, you will never have to wait for file growth, or page splits, etc...
Of course, this has to be weighed in the overall scheme of things, and you should take into account the number of inserts versus the number of selects, joins, etc...but in general, your tables should be designed so that inserts do not create "hot spots", if possible.
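One way this design looks in DDL (a sketch only; the Orders table, its columns, and the 80% fill factor are hypothetical): keep the identity as a NONCLUSTERED primary key for lookups, and put the clustered index on a key that distributes inserts across the table.

```sql
CREATE TABLE Orders
(
    OrderID    int IDENTITY(1,1) NOT NULL,
    CustomerID int NOT NULL,
    OrderDate  datetime NOT NULL,
    -- identity stays the primary key, but nonclustered
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)
)

-- cluster on a key that spreads inserts over many pages,
-- leaving room on each page for new rows
CREATE CLUSTERED INDEX IX_Orders_Cust
    ON Orders (CustomerID, OrderDate)
    WITH FILLFACTOR = 80
```

The trade-off is exactly the one described above: range scans on the clustered key get faster and insert hot spots disappear, but point lookups by OrderID now go through the nonclustered index.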
April 3, 2003 at 1:38 am
'. . . My guess is that SQL Server is using Page Locks for the INSERTs instead of Row Locks even though I specify the WITH (ROWLOCK) table hint.'
'. . . which means that the locks are held for each row, and will escalate to a coarser-grained lock when the row threshold is broken...
hence, page locks.....'
I'm pretty convinced that page locks are NOT the issue, but Lock escalation to a Table lock could very well be.
This is because (mostly) X locks are escalated directly from Key to Table!
(Because you have a clustered index you don't get Row locks but Key locks; they function just like row locks.)
In my experience only an index scan can result in an (index) Page lock (SQL Server 2000 SP2 and SP3 at least).
So the problem is likely to be a Table Lock problem. This can be easily reproduced by running the script below from two Query Analyzer sessions simultaneously:
declare @i int
set @i = 1000000
begin tran
while (@i > 0)
begin
    insert into customer( CustName, CustType )
    values( 'Customer' + cast( @i as varchar(7)), @i % 20 )
    set @i = @i - 1
end
rollback tran
The table is:
create table customer
(CustID int identity(1,1) primary key,
CustName varchar(100),
CustType int
)
In the beginning the lock picture (sp_lock output: spid, dbid, ObjId, IndId, Type, Resource, Mode, Status) is:
51 10 1977058079 1 KEY (4c00a1c82f65) X GRANT
51 10 1977058079 1 KEY (0f0009182ece) X GRANT
51 10 1977058079 1 KEY (7e000d156b66) X GRANT
51 10 1977058079 1 KEY (a9007dfc1989) X GRANT
51 10 1977058079 1 KEY (4f008db8316f) X GRANT
51 10 1977058079 1 KEY (7000b88fd2df) X GRANT
53 10 0 0 DB S GRANT
53 1 85575343 0 TAB IS GRANT
54 10 1977058079 1 KEY (7700c2c1b66e) X GRANT
54 10 1977058079 1 KEY (c500388506df) X GRANT
54 10 1977058079 1 KEY (a900da73b0ac) X GRANT
54 10 1977058079 1 KEY (1d00fc686b38) X GRANT
54 10 1977058079 1 KEY (71001e9edd4b) X GRANT
54 10 1977058079 1 KEY (090003c2d2e6) X GRANT
54 10 1977058079 1 KEY (6500e1346495) X GRANT
When it eventually escalates the locks, this is the picture:
51 10 0 0 PAG 1:783 IX GRANT
51 10 0 0 PAG 1:782 IX GRANT
51 10 0 0 PAG 1:785 IX GRANT
51 10 0 0 PAG 1:784 IX GRANT
51 10 0 0 PAG 1:787 IX GRANT
51 10 0 0 PAG 1:786 IX GRANT
51 10 0 0 PAG 1:789 IX GRANT
51 10 0 0 PAG 1:788 IX GRANT
51 10 1977058079 0 TAB X GRANT
....
....
51 10 0 0 PAG 1:987 IX GRANT
51 10 0 0 PAG 1:988 IX GRANT
53 10 0 0 DB S GRANT
53 1 85575343 0 TAB IS GRANT
54 10 1977058079 0 TAB IX WAIT
You may argue that it COULD have used Page Lock escalation during my test, but given that my table has several hundred rows per page, escalating to page locks would save so many locks that the situation should have stayed stable for a long period of time. Not so!
According to Inside SQL Server 2000 (Kalen Delaney) the only sure way to eliminate Lock Escalations is to provide more server memory.
According to this excellent book, the server will lock escalate when more than 40% of memory is used for locks, and will consider lock escalation for all transactions holding more than 1250 locks.
To prove that lock escalation is indeed one's problem, one could run SQL Server with trace flag 1211, which disables lock escalation.
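For example (a test-server sketch only; 1211 disables lock escalation instance-wide, so it should not be left on in production, as memory for locks can then grow unbounded):

```sql
-- disable lock escalation for the whole server (-1 = global)
DBCC TRACEON (1211, -1)

-- ...rerun the workload here; if the blocking disappears,
-- lock escalation was the culprit...

-- re-enable normal escalation behavior
DBCC TRACEOFF (1211, -1)
```

Alternatively the flag can be set at startup with the -T1211 service parameter, which applies it before any connections are made.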
As for having an ever-increasing (identity) column as the clustered key: it does route all inserts to the very same page, which MAY be bad for concurrency and disk I/O reasons. But SQL Server 2000 has special tuning for handling the ever-increasing key and uses a special page-split algorithm in this case. Basically it avoids moving half the old rows to the new page, which would be a waste because the pages would then tend to stay only half full (the next row always has a higher key than the current highest and so goes to the new, high page).
So don't go changing all your tables from 'Cluster on Identity' to 'Cluster on this random Key' without proving which solution is the optimum. You may find cases where 'Cluster on Identity' is a very good solution.
regards
jensk
April 3, 2003 at 12:56 pm
What you say is exactly true, jensk. I gave an abbreviated explanation which could have used much more detail, which you provided. I assumed that the page locks he was seeing were the same ones your trace information shows, and again, didn't go into the details of that.
Also, while you are correct about the algorithms to prevent hot spots, in my experience it doesn't work nearly as well as advertised (test it out and you'll see what I mean, as the pages are still added to the end of the table), and it's much better practice to avoid it altogether if possible. And the concurrency issues you agree MAY happen are, of course, caused by locking, as that's the method of enforcing concurrency in SQL Server. And of course, I mentioned that it should be weighed in the overall scheme of things, as it isn't always the best way to go about it.