February 6, 2014 at 2:16 am
Hi guys,
This morning a guy was executing a stored procedure which loads data into a temp table and then manipulates it (it's a 3rd-party app procedure). It caused a lot of other sessions to be blocked with LCK_M_IX waits, as his session had taken TAB (table-level) locks in tempdb.
Should this kind of blocking occur on tempdb? I thought it shouldn't be an issue, as the procedure only refers to local temp tables.
The DB is running on 10.5 (2008 R2) EE with 4 equally sized tempdb data files (although they're tiny, at 32 MB each). Does anyone know why this kind of lock might occur?
Thanks for any info.
February 6, 2014 at 2:21 am
What resource were the locks on?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 6, 2014 at 2:55 am
Hi Gail,
It was an object in tempdb, so my assumption is that it was the local temporary table created and dropped by the procedure, because when I try OBJECT_NAME on the ID now I get NULL.
My confusion is that the people being blocked were running simpler queries on normal tables (probably the same tables used to populate the temp table), which is again strange, as readers shouldn't block readers. But again, I'm not sure.
Seems I didn't collect enough info before the transaction was cancelled 🙁
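For next time I've made a note to check the ID against tempdb explicitly while the object still exists; a quick sketch (the object ID below is just a placeholder):
-- Resolve an object ID against tempdb specifically (two-argument form of OBJECT_NAME);
-- this only returns a name while the object (e.g. the temp table) still exists.
SELECT OBJECT_NAME(123456789, DB_ID('tempdb')) AS tempdb_object_name;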
February 6, 2014 at 3:02 am
Dird (2/6/2014)
It was an object in tempdb so my assumption is that it would be that local temporary table created/dropped by the procedure because when I try OBJECT_NAME on the ID now I get null.
Maybe, maybe not. There are a lot of things in TempDB other than temp tables (spills, worktables, etc.).
My confusion is in that the people being blocked were running simpler queries on normal tables (probably the same tables used to populate the data for the temp table); which is again strange as readers shouldn't block readers.
Readers don't block readers (or should I say, shared locks don't block shared locks). I'd need more info on the types of locks (requested and granted) and the resources the locks were on, as well as the other wait types involved, to figure out what was going on.
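Something along these lines, captured while the blocking is actually happening, would show the requested/granted lock modes and the resources involved (just a sketch joining sys.dm_tran_locks to sys.dm_os_waiting_tasks):
-- Lock requests/grants plus any associated waits, captured during the blocking
SELECT tl.request_session_id,
       tl.resource_type,
       tl.resource_database_id,
       tl.resource_associated_entity_id,
       tl.request_mode,      -- e.g. S, IX, X
       tl.request_status,    -- GRANT / WAIT / CONVERT
       wt.wait_type,
       wt.wait_duration_ms,
       wt.blocking_session_id
FROM sys.dm_tran_locks AS tl
LEFT JOIN sys.dm_os_waiting_tasks AS wt
    ON tl.lock_owner_address = wt.resource_address;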
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 20, 2014 at 8:20 am
Hi Gail,
I cleared the wait stats yesterday and let them accumulate to get a more accurate picture of the recent state. Using this Paul Randal query I get the following results:
WaitType        Wait_S      Resource_S  Signal_S  WaitCount  Percentage  AvgWait_S  AvgRes_S  AvgSig_S
CXPACKET        3436133.53  3424886.25  11247.28  59179369   63.75       0.0581     0.0579    0.0002
PAGEIOLATCH_SH  1665507.72  1663933.49   1574.23  18932242   30.90       0.0880     0.0879    0.0001
PAGEIOLATCH_EX    69186.35    69168.43     17.92   1330064    1.28       0.0520     0.0520    0.0000
Does this ring any alarm bells at all? The averages seem low to me (practically zero). I'm pretty sure most of the issue is simply multi-table queries on columns without indexes, since the following queries almost never show any blocking:
SELECT session_id, blocking_session_id
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
SELECT session_id, wait_duration_ms, wait_type, blocking_session_id
FROM sys.dm_os_waiting_tasks
WHERE blocking_session_id <> session_id;
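To sanity-check the "no useful indexes" theory I'm also planning a quick look at the missing-index DMVs (rough sketch below; I know those suggestions have to be taken with a pinch of salt):
-- What the optimizer has flagged as missing, roughly ordered by estimated impact
SELECT TOP (20)
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_total_user_cost * migs.avg_user_impact * migs.user_seeks AS est_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY est_impact DESC;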
I'm leaving the DTA to run overnight on a trace file I captured last week (thanks to a script from one of your blog posts :)); hopefully it will flag up a few recommendations that will provide big gains. Will you be blogging again soon? If so, hopefully you'll do some posts that include step-by-step performance issue debugging 🙂
February 20, 2014 at 8:41 am
Dird (2/20/2014)
If so, hopefully you'll do some posts that include step-by-step performance issue debugging 🙂
You mean this?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 20, 2014 at 9:08 am
Indeed :), that's the one I read before, but it seems I need to increase the size of the TextData column:
Msg 8152, Level 16, State 10, Line 1
String or binary data would be truncated.
The statement has been terminated.
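So I'll widen that column before re-importing the trace. Roughly this (the staging table and path names are just examples, assuming the load-into-a-table approach from the article):
-- Widen the staging column so long statements aren't truncated, then reload the trace
ALTER TABLE dbo.TraceResults ALTER COLUMN TextData nvarchar(max);

INSERT INTO dbo.TraceResults (TextData, Duration, Reads, Writes, CPU, StartTime)
SELECT TextData, Duration, Reads, Writes, CPU, StartTime
FROM sys.fn_trace_gettable('D:\Traces\MyTrace.trc', DEFAULT);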
February 21, 2014 at 2:37 am
Any idea why that trace wouldn't be read by the DTA? Is there a size limit that it supports (the trace is 3.4 GB)?
Also, why does RECONFIGURE take over 3 minutes here (I cancelled it at 3 minutes, and it's now up to 8 minutes while cancelling)?:
sp_configure 'cost threshold for parallelism', 15
reconfigure
Edit: I found out why it stops: DTA uses C: to store its workings... is there any way to make DTA's temp folder point to a different drive?
February 21, 2014 at 9:24 am
Dird (2/21/2014)
Edit: I found out why it stops: DTA uses C: to store its workings... is there any way to make DTA's temp folder point to a different drive?
Please don't tell me you're running DTA on the production server?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 21, 2014 at 10:47 am
GilaMonster (2/21/2014)
Please don't tell me you're running DTA on the production server?
Hehe no, on test. I restored a recent backup to the test server and moved the trace file there (where I then tried to run DTA). We were waiting on storage space on D: to restore the data to, but now DTA is throwing up space issues on C: 🙁
February 21, 2014 at 12:56 pm
Ditch DTA. If you read the articles, I show how to identify the worst performance problems of what's in the trace, how to dig into the procedures and what you can do about them. Combine that with my indexing articles here and you should be able to do a better job than DTA will.
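Even something as simple as sorting the imported trace by reads or duration will surface the worst offenders to start with (a trivial sketch, not the full approach from the articles; the table name is just an example):
-- Quick look at the heaviest statements in the imported trace
SELECT TOP (10) TextData, Duration, CPU, Reads, Writes, StartTime
FROM dbo.TraceResults
ORDER BY Reads DESC;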
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 21, 2014 at 1:05 pm
GilaMonster (2/21/2014)
Ditch DTA. If you read the articles, I show how to identify the worst performance problems of what's in the trace, how to dig into the procedures and what you can do about them. Combine that with my indexing articles here and you should be able to do a better job than DTA will.
I tried that with the worst offender (a stored procedure), but I got an error message when trying to generate the estimated execution plans (something along the lines of not enough memory, with a reference to "internal").
Well, technically I went into the procedure and ran a sub-section of it (I can't remember if I also tried calling the procedure outright).
The procedure starts with 12 queries that assign values to variables, then uses those in the query that drives a cursor. The cursor then loops with a CASE-style branch (0-8), with each branch calling a different stored procedure... it's 3rd party.
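The shape is roughly this (every name below is invented; it's purely to illustrate the pattern, not the vendor's actual code):
-- Skeleton of the pattern described above: variable-assignment queries,
-- a cursor driven by those variables, and one child procedure per action type
CREATE PROCEDURE dbo.Illustration_DriverProc
AS
BEGIN
    DECLARE @Setting1 int, @Setting2 int;   -- ~12 of these in the real thing, two shown
    SELECT @Setting1 = SettingValue FROM dbo.AppSettings WHERE SettingName = 'Setting1';
    SELECT @Setting2 = SettingValue FROM dbo.AppSettings WHERE SettingName = 'Setting2';

    DECLARE @ActionType tinyint, @ItemId int;
    DECLARE work_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT ActionType, ItemId
        FROM dbo.WorkQueue
        WHERE Priority >= @Setting1 AND BatchId = @Setting2;

    OPEN work_cursor;
    FETCH NEXT FROM work_cursor INTO @ActionType, @ItemId;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- One branch per action type (0-8), each calling a different child procedure
        IF @ActionType = 0 EXEC dbo.ChildProc0 @ItemId;
        ELSE IF @ActionType = 1 EXEC dbo.ChildProc1 @ItemId;
        ELSE IF @ActionType = 8 EXEC dbo.ChildProc8 @ItemId;

        FETCH NEXT FROM work_cursor INTO @ActionType, @ItemId;
    END

    CLOSE work_cursor;
    DEALLOCATE work_cursor;
END;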
Should the memory problem be less of an issue if I capture actual execution plans instead of estimated ones?