June 14, 2009 at 6:06 pm
DBCC IND just waits when you run it on a table that has a row locked. This is a problem when I'm trying to find out which page/row IDs are locked and what data is held in that page/row ID.
Does anybody know a way of running 'DBCC IND' with the equivalent of a "Dirty Read" to make it return results even when there are locks in operation?
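For reference, the call in question looks something like this (the database and table names are just placeholders; -1 asks for pages from all indexes):
DBCC IND ('MyDatabase', 'dbo.MyTable', -1);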
July 10, 2009 at 2:01 am
Care to tell us why you're using undocumented DBCC commands?
What's the business case behind this requirement?
ML
---
Matija Lah, SQL Server MVP
http://milambda.blogspot.com
July 10, 2009 at 7:11 am
Matija Lah (7/10/2009)
Care to tell us why you're using undocumented DBCC commands? What's the business case behind this requirement?
That's possibly the least helpful reply I have seen today. I mean, if you're not going to help, why post?
DBCC IND is certainly officially undocumented, but it's hardly a secret.
As for a business case, who cares? What possible difference does it make?
'cb' is trying to identify the data on the row/page associated with a lock - a fairly common wish. 'Care to provide' a documented way to do this, Matija?
@cb:
You could look in the sys.dm_tran_locks dynamic management view. (See Books Online for full details)
That will show you the locks. The resource_description column will contain page information for PAGE and RID locks. It will also identify the row number on the page for RID locks. For KEY locks, it is a bit more difficult since we just get the hash of the lock resource. This is the same hash that you see if you run:
SELECT PK, %%lockres%% as lock_hash
FROM table WITH (READUNCOMMITTED)
...where PK is the primary key column, or other row identifier. You may need to use an index hint to ensure you access the right index. Match the lock_hash to the KEY lock hash in the dynamic view, and you have the answer.
For the PAGE and RID locks, you need to use DBCC PAGE, last parameter = 3.
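A minimal sketch of that last step (the database name, file id, and page number are placeholders; substitute whatever your resource_description reports):
DBCC TRACEON (3604); -- send DBCC PAGE output to the client instead of the error log
DBCC PAGE ('MyDatabase', 1, 4321, 3); -- database, file id, page number, print option 3 = row-level detail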
Lots of details on DBCC IND and DBCC PAGE in the link above marked 'secret'
:laugh:
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 10, 2009 at 9:05 am
Paul White (7/10/2009)
...As for a business case, who cares? What possible difference does it make?...
:laugh:
Paul
Hi Paul, I heartily agree with your sentiment. Nothing wrong with using undocumented features, and the business should be happy we're making their queries run faster, as long as we don't modify the data. 😀
CB, just two caveats:
1) Test any undocumented features thoroughly in a test/QA environment to make sure they behave how you'd like (as there is no official statement on how they're supposed to behave), and
2) be prepared in case Microsoft suddenly removes an undocumented feature in the next version of SQL Server (and sometimes even in the next Service Pack)
Cheers,
Gaby
Gaby
"In theory, theory and practice are the same. In practice, they are not." - Albert Einstein
July 10, 2009 at 9:52 am
Gaby Abed (7/10/2009)
2) be prepared in case Microsoft suddenly removes an undocumented feature in the next version of SQL Server (and sometimes even in the next Service Pack)
Shoot.... be prepared for changes even to fully documented and commonly used features. For example, the previously unannounced security changes they made to things like sp_MakeWebTask from 2k SP3 to SP4 blew a lot of folks' code out of the water. How about the changes they made to CASE, where the whole thing gets evaluated so you can no longer use it directly to test for numerics or dates?
Unannounced changes aren't reserved for undocumented features only.
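For anyone who hasn't used that CASE trick, the pattern in question looks roughly like this (an illustration only, against a made-up table; whether and when it breaks depends on version and plan):
-- Old idiom: rely on the WHEN branch to guard the conversion
SELECT CASE WHEN ISNUMERIC(SomeVarcharColumn) = 1
THEN CAST(SomeVarcharColumn AS int)
END
FROM dbo.SomeTable;
-- The point: the CAST may be evaluated even for rows where the WHEN test fails,
-- so the guard no longer reliably prevents conversion errors.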
--Jeff Moden
Change is inevitable... Change for the better is not.
July 12, 2009 at 3:08 pm
Thanx for your suggestions!
I'm glad it doesn't appear to be a crime to use undocumented features after all!
BACKGROUND:
I'm working with a 3rd party database on a finance/manufacturing system that I can't change.
All users log into the Finance system individually, but the finance system uses one generic SQL login, so I can't tell who actually has a lock open from SQL.
I have clients who "bolt on" stuff to the database, usually via triggers. Sometimes, however, they cause queues of locks. Also, sometimes users leave locks open e.g. when editing the customer master table, get distracted, go to lunch/toilet/home etc.
I don't want to just kill the SPID/PID; it doesn't help find the actual reason for a log-jam of queued locks.
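Seeing the queue itself (which session is waiting on which) is the easy part; a minimal sketch using the documented DMVs on SQL Server 2005 and later:
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_resource, r.wait_time
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0; -- sessions currently blocked, and who is blocking them
That finds the head of the chain, but not which row it is sitting on.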
I want to know the primary key of the locked row (it's only row locks that are a problem); master.dbo.syslockinfo.rsc_text will tell me that the locked resource is 1:7548:0, but when I shout across the room "Who is using resource 1:7548:0?", they all think I'm from the planet Zog (and a knob!) and don't answer!
If I could shout "Who is amending customer ID C001, Named John Smith?", I might get an answer!
I posted the question how to turn rsc_text into the key value on May 20 - Zero replies!
I got some hints from a mate and got as far as using DBCC IND to find out which pages are locked for a table, and then DBCC PAGE to find out the actual data. It'll probably mean a bit of messing about with cursors, but it looked do-able.
Then I hit the show-stopper: if there are any row locks, DBCC IND appears to wait until they are cleared before returning a result. Given that the only reason I want to run DBCC IND is to find out what locks are in place, this appears to be a Catch-22!
Any other ideas would be greatly appreciated!
cb
July 12, 2009 at 3:36 pm
Hey Paul, that works a treat! Thanks! I can pop the data into a temp table and join it to the rest of my query to get the answer I need!
Clearly I'm not the only person on the planet who wants to know this info after all!
BUT: (there's always a BUT!)
I do need to feed it the table name and the column name(s) of the primary index - I guess I could use a CASE statement containing data for all the common tables that get locked; it's the same 20 or so that cause the problem 99% of the time. And if there was a problem with another, I'd know the table, so I could add it into the CASE statement. So it's not a show-stopper.
What I get back, of course, is the lock hash for every row, not just the ones that are locked, so I have to join many rows just to find the data for the few that are locked.
So do you know a way to just get the rsc_text equivalents of only those RIDs that are locked at any point in time? Or am I talking rubbish?
Thanks!
Chris
P.S. I know it's cheeky asking for more, but if you don't have anything else, no problem. Thanks very much for this; nobody else has come up with anything at all, and it's a huge step forward! And I've Googled it for literally weeks and got nowhere! Cheers!
July 12, 2009 at 4:03 pm
cb,
Hopefully, this demo script will illustrate the method I was trying to describe before:
As far as I know, there is no really easy or convenient way to do this, though one could make it more automated with some effort. (A nice solution might involve Service Broker and an audit table.)
-- You'll probably have one of these ;c)
USE tempdb;
GO
-- Test table
CREATE TABLE dbo.LockTest
(
row_id INTEGER IDENTITY PRIMARY KEY,
data NVARCHAR(100) NOT NULL
);
GO
-- Generate 10K rows of test data
INSERT dbo.LockTest (data)
SELECT TOP (10000)
REPLICATE(NCHAR(RAND(CHECKSUM(NEWID())) * 26 + 65), RAND(CHECKSUM(NEWID())) * 100 + 1)
FROM master.sys.all_columns C1, master.sys.all_columns C2, master.sys.all_columns C3;
GO
-- Create a transaction and lock a row
BEGIN TRANSACTION
SELECT data FROM dbo.LockTest WITH (REPEATABLEREAD) WHERE row_id = 2584;
-- Show a few details about the locks
SELECT resource_database_id, resource_associated_entity_id, resource_description, resource_type, request_mode
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
-- On my system:
-- resource_database_id = 2
-- resource_type = 'OBJECT', request_mode = 'IS', resource_associated_entity_id = 37575172 (the object id of dbo.LockTest)
-- resource_type = 'PAGE', request_mode = 'IS', resource_description = '1:279' (file id 1, page #279)
-- resource_type = 'KEY', request_mode = 'S', resource_description = '(180045da8146)' -- hash of the key columns for the row
DBCC TRACEON (3604); -- DBCC PAGE output to the console, rather than the server error log
DBCC PAGE (tempdb, 1, 279, 3);
/* EXTRACT FROM THE DBCC PAGE OUTPUT - note the row_id and KeyHashValue */
--Slot 31 Column 1 Offset 0x4 Length 4 Length (physical) 4
--row_id = 2584
--Slot 31 Column 2 Offset 0xf Length 92 Length (physical) 92
--data = VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV
--Slot 31 Offset 0x0 Length 0 Length (physical) 0
--KeyHashValue = (180045da8146)
-- This also locates the locked row
SELECT * FROM dbo.LockTest WITH (INDEX(1)) WHERE %%lockres%% = '(180045da8146)'; -- This is the hashed KEY lock from before
-- Finished
ROLLBACK TRANSACTION
DROP TABLE dbo.LockTest;
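As a follow-up sketch (not part of the demo above): to return only the rows that currently have KEY locks held against them, rather than probing one hash at a time, something along these lines might work. Run it from a second session while the test transaction is still open; the NOLOCK hint is an assumption so the query does not queue behind the very locks it is looking for, and INDEX(1) assumes the locks are on the clustered index:
SELECT T.*
FROM dbo.LockTest AS T WITH (NOLOCK, INDEX(1))
WHERE %%lockres%% IN
(
SELECT L.resource_description
FROM sys.dm_tran_locks AS L
WHERE L.resource_type = 'KEY'
AND L.resource_database_id = DB_ID()
AND L.resource_associated_entity_id IN
(SELECT P.hobt_id FROM sys.partitions AS P WHERE P.object_id = OBJECT_ID(N'dbo.LockTest'))
);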
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 12, 2009 at 4:09 pm
I posted that script in response to the first of your two recent posts (I process stuff in email-notification order!), but I think it broadly covers the questions in your second post too.
Have a play around with the demo, look closely at the transaction-and-locking-related dynamic views, and I think you should be able to come up with something workable. If you get really stuck, post back and I'll try to assist.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 13, 2009 at 7:23 am
cb (7/12/2009)
Thanx for your suggestions!...I have clients who "bolt-on" stuff to the database, usually via triggers. Sometimes however, they cause queues of locks. Also, sometimes users leave locks open e.g. when editing the customer master table, get distracted, go to lunch/toilet/home etc.
...If I could shout "Who is amending customer ID C001, Named John Smith?", I might get an answer!
If you have a whole bunch of people able to create triggers on the fly, or who forget to commit various transactions, there are more serious issues involved beyond the scope of the database. I don't know if you mentioned whether this was a production or QA/Dev database. If it is production, you may want to have a word with your manager and get his support on some changes. Ideally, only the DBAs should have the authority to make production changes, to avoid any confusion as well as to get their input should there be a design issue (indexing, data archiving, triggers) that could cause problems down the road.
When we have various business teams that need to change data, they put in a formal and documented request for one of the DBAs to change it. This way, if there are issues such as primary or foreign key violations, we can let them know, rather than have them tinker with the database.
While not as critical in a QA environment, it would be nice if similar processes could be adopted there, maybe not with so much DBA involvement, but at least with someone authorized to monitor and perform the changes. In our case, development environments are a free-for-all, and we typically don't care what goes on there, but we provide input when asked. When the database then goes to QA, this is where we get involved, and almost without exception, even in QA, only the DBAs can modify the database.
Gaby
"In theory, theory and practice are the same. In practice, they are not." - Albert Einstein
July 13, 2009 at 8:17 am
Hi Gaby
Whilst everything you say is true, the environment I have to deal with is:
Software vendor creates a product based on SQL, and sells it to client.
Client wants some simple changes made. The vendor quotes stupid money and ridiculous timescales to do so, because the vendor has to make it work on various OSes (e.g. Windows, Linux, AIX) and various databases (e.g. SQL Server, Oracle). The client won't pay for the whole development, and the vendor can't issue it on one platform only.
So client attempts his own mods. And sometimes it goes wrong.
So there is an opportunity for me to sell them a tool to help identify where the problem lies!
Paul White has given me some stuff to try out, I think I can make it work.
Cheers!
cb
July 13, 2009 at 9:06 am
CB, then all I can wish you is the best of luck. 🙂
I know how frustrating it is to deal with vendors who want things done their way. At least see if you can get some sort of controls in place for changes from within your group; that way, the chaos you deal with comes only from the outside.
Gaby
"In theory, theory and practice are the same. In practice, they are not." - Albert Einstein
July 13, 2009 at 2:59 pm
Gaby Abed (7/13/2009)
...a whole bunch of people...create triggers on the fly...forget to commit transactions...more serious issues involved beyond the scope of the database...only the DBAs should have the authority to make production changes...put in a formal and documented request for one of the DBAs...issues such as primary or foreign key violations...someone authorized to monitor and perform the changes...only the DBAs can modify the database.
Ah. Welcome to the Real World™, Gaby :laugh:
I've worked in a similar environment to 'cb' and sympathise. Sometimes a DBA's just gotta do what a DBA's gotta do.
One day I hope to find a job in a perfect environment...:doze:
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 14, 2009 at 5:19 am
cb (7/13/2009)
Hi Gaby. Whilst everything you say is true, the environment I have to deal with is:
Software vendor creates a product based on SQL, and sells it to client.
Client wants some simple changes made. The vendor quotes stupid money and ridiculous timescales to do so, because the vendor has to make it work on various OSes (e.g. Windows, Linux, AIX) and various databases (e.g. SQL Server, Oracle). The client won't pay for the whole development, and the vendor can't issue it on one platform only.
So client attempts his own mods. And sometimes it goes wrong.
So there is an opportunity for me to sell them a tool to help identify where the problem lies!
Paul White has given me some stuff to try out, I think I can make it work.
Heh... ya just gotta love the myth of "portability". 😛
Not that it matters, but I like your attitude. With occasional hints from good folks like Paul, I've no doubt you'll be successful. Go for it!
--Jeff Moden
Change is inevitable... Change for the better is not.
July 14, 2009 at 7:24 am
Paul White (7/13/2009)
Gaby Abed (7/13/2009)
...a whole bunch of people...create triggers on the fly...forget to commit transactions...more serious issues involved beyond the scope of the database...only the DBAs should have the authority to make production changes...put in a formal and documented request for one of the DBAs...issues such as primary or foreign key violations...someone authorized to monitor and perform the changes...only the DBAs can modify the database.
Ah. Welcome to the Real World™, Gaby :laugh:
I've worked in a similar environment to 'cb' and sympathise. Sometimes a DBA's just gotta do what a DBA's gotta do.
One day I hope to find a job in a perfect environment...:doze:
Paul
Yeah... I know. 🙁 No such Shangri-La exists; even here we're struggling to maintain a facade of control, but eventually the developers (vendors or internal) get their way.
Gaby
"In theory, theory and practice are the same. In practice, they are not." - Albert Einstein