September 18, 2013 at 7:36 am
Our DBA has some sort of process set up to monitor deadlocks and send out an email whenever a deadlock occurs. We have been testing read committed snapshot isolation on our application for a couple of months, because the performance problems we have been having are definitely related to locking. Everything went very smoothly on our dev & test servers, but when we applied it to our production database this morning we got about 1300 deadlock graph emails very quickly. Here are the contents of one of the emails:
<EVENT_INSTANCE>
<EventType>DEADLOCK_GRAPH</EventType>
<PostTime>2013-09-18T06:59:19.993</PostTime>
<SPID>19</SPID>
<TextData>
<deadlock-list>
<deadlock victim="process6a1288">
<process-list>
<process id="process75e748" taskpriority="0" logused="10000" waitresource="DATABASE: 8 " waittime="200" schedulerid="4" kpid="1984" status="background" spid="20" sbid="0" ecid="0" priority="0" trancount="0"> <executionStack/> <inputbuf/> </process>
<process id="processae94c8" taskpriority="0" logused="10000" waitresource="DATABASE: 8 " waittime="301" schedulerid="5" kpid="7244" status="background" spid="34" sbid="0" ecid="0" priority="0" trancount="0"> <executionStack/> <inputbuf/> </process>
<process id="processb1db88" taskpriority="0" logused="10000" waitresource="DATABASE: 8 " waittime="99" schedulerid="7" kpid="6428" status="background" spid="35" sbid="0" ecid="0" priority="0" trancount="0"> <executionStack/> <inputbuf/> </process>
<process id="process6a1288" taskpriority="0" logused="10000" waitresource="DATABASE: 8 " waittime="402" schedulerid="1" kpid="912" status="background" spid="22" sbid="0" ecid="0" priority="0" trancount="0"> <executionStack/> <inputbuf/> </process>
</process-list>
<resource-list>
<databaselock subresource="FULL" dbid="8" dbname="unknown" id="lock12137cf80" mode="U">
<owner-list> <owner id="process6a1288" mode="S"/> </owner-list>
<waiter-list> <waiter id="process75e748" mode="X" requestType="wait"/> </waiter-list>
</databaselock>
<databaselock subresource="FULL" dbid="8" dbname="unknown" id="lock12137cf80" mode="U">
<owner-list> <owner id="process6a1288" mode="S"/> </owner-list>
<waiter-list> <waiter id="processae94c8" mode="X" requestType="wait"/> </waiter-list>
</databaselock>
<databaselock subresource="FULL" dbid="8" dbname="unknown" id="lock12137cf80" mode="U">
<owner-list> <owner id="process6a1288" mode="S"/> </owner-list>
<waiter-list> <waiter id="processb1db88" mode="X" requestType="wait"/> </waiter-list>
</databaselock>
<databaselock subresource="FULL" dbid="8" dbname="unknown" id="lock12137cf80" mode="U">
<owner-list> <owner id="processae94c8" mode="S"/> </owner-list>
<waiter-list> <waiter id="process6a1288" mode="X" requestType="wait"/> </waiter-list>
</databaselock>
</resource-list>
</deadlock>
</deadlock-list>
</TextData>
<TransactionID/>
<LoginName>sa</LoginName>
<StartTime>2013-09-18T06:59:19.993</StartTime>
<ServerName>PROD-SQL01</ServerName>
<LoginSid>AQ==</LoginSid>
<EventSequence>462976693</EventSequence>
<IsSystem>1</IsSystem>
<SessionLoginName/>
</EVENT_INSTANCE>
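For reference, the change we applied was essentially the standard statement for turning on read committed snapshot isolation; something along these lines (the database name here is just a placeholder, not our real one):

-- Turning on read committed snapshot needs (near-)exclusive access to the database,
-- so other sessions holding database locks can block or conflict with it.
ALTER DATABASE [OurAppDb]
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- roll back open transactions instead of waiting on them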
Does anyone have any idea what could be going on here? We applied the change early in the morning, when we have very little database traffic. FYI: we do use Idera software for auditing, though.
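In case it helps, this is roughly how we check whether the option actually took effect, just a plain sys.databases query (again, the database name is a placeholder):

-- is_read_committed_snapshot_on = 1 means the option is enabled for the database
SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = N'OurAppDb';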