March 14, 2011 at 11:56 am
We have an insert-only database averaging 310,000 transactions per hour. The table these transactions are written to has only one index, on the primary key (an identity column).
The data is pulled onto a secondary server hourly, where it is indexed for reporting. We only back up the primary, since our disaster recovery plan requires a month's worth of data; we run transactional backups on the primary server because the backup is smaller and we could rebuild the reporting server from it.
The problem I am running into is that, due to increased activity, I am seeing blocking when we purge data older than a month. I set the database to simple recovery and use a temp table that contains the IDs of all transactions more than a month old. I then delete the transactions in batches of 100; I used to be able to do them in batches of 1,000 without our message queues backing up. I cannot find any performance issues elsewhere (cluster, SAN, CPU...).
I guess my question is: why is the delete blocking the insert? Is there a hint I should be using? Isn't SQL Server supposed to be using row-level locking? I used OPTION (MAXDOP 1) to restrict it to one thread, which has reduced the blocking and, of course, slowed down the job.
DBCC SHOWCONTIG on the clustered index shows:
- Logical Scan Fragmentation ..................: 0.40%
- Extent Scan Fragmentation ...................: 4.03%
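For reference, the purge job described above boils down to something like this (a sketch only; the #PurgeIds temp table name and the InsertDate filter are assumptions, while dbo.EventTransactions and TransactionId come from the execution plans posted later in the thread):

```sql
-- Collect IDs of transactions older than a month, then delete in
-- small batches so the log and locks stay manageable.
SELECT TransactionId
INTO #PurgeIds
FROM dbo.EventTransactions
WHERE InsertDate < DATEADD(MONTH, -1, GETDATE());

WHILE 1 = 1
BEGIN
    DELETE TOP (100) et
    FROM dbo.EventTransactions AS et
    INNER JOIN #PurgeIds AS p
        ON p.TransactionId = et.TransactionId
    OPTION (MAXDOP 1);

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to purge
END;
```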
March 14, 2011 at 12:18 pm
At a guess, your delete is properly using only row locking, but your insert is trying to acquire an escalated table lock. Look at the insert statement's expected row counts.
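If escalation is the suspect, one way to check is the index operational stats DMV (a sketch; assumes SQL Server 2005 or later, and the dbo.EventTransactions name taken from the plans posted below):

```sql
-- Non-zero promotion counts mean lock escalation is actually occurring
-- on this table's indexes.
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       ios.index_id,
       ios.index_lock_promotion_attempt_count,
       ios.index_lock_promotion_count
FROM sys.dm_db_index_operational_stats(
         DB_ID(), OBJECT_ID('dbo.EventTransactions'), NULL, NULL) AS ios;

-- On SQL Server 2008+, escalation can be disabled per table
-- (test carefully before doing this in production):
ALTER TABLE dbo.EventTransactions SET (LOCK_ESCALATION = DISABLE);
```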
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
March 14, 2011 at 12:32 pm
If you had posted both execution plans, we would have a better picture of what might be going on.
My guess: your SELECT of IDs based on a (non-indexed) datetime column does a full scan of your table, so it may cause (b)locking.
You could also create a nonclustered index on the datetime column and enable it only right before your delete sequence.
Disable this index again after your delete run, so your system doesn't carry its overhead for the rest of the month. (Don't drop it, so every other DBA can see it and knows its purpose.)
Make sure you have an index on the ID column in your temp table, so you can use a delete join (and have no need for a cursor at all).
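The disable/enable pattern plus the delete join could look roughly like this (a sketch; the index name IX_EventTransactions_InsertDate, the InsertDate column as the purge criterion, and the #PurgeIds temp table are illustrative assumptions):

```sql
-- Re-enable the datetime index just before the purge (a disabled
-- index is brought back online by rebuilding it).
ALTER INDEX IX_EventTransactions_InsertDate
    ON dbo.EventTransactions REBUILD;

-- Gather the IDs to purge, using the datetime index for the scan.
SELECT et.TransactionId
INTO #PurgeIds
FROM dbo.EventTransactions AS et
WHERE et.InsertDate < DATEADD(MONTH, -1, GETDATE());

-- Index the temp table so the delete join can seek.
CREATE CLUSTERED INDEX IX_PurgeIds ON #PurgeIds (TransactionId);

-- Delete join in batches: no cursor needed.
DELETE TOP (1000) et
FROM dbo.EventTransactions AS et
INNER JOIN #PurgeIds AS p
    ON p.TransactionId = et.TransactionId;

-- Disable the index again for the rest of the month.
ALTER INDEX IX_EventTransactions_InsertDate
    ON dbo.EventTransactions DISABLE;
```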
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
March 14, 2011 at 1:19 pm
Here is the execution plan for the insert:
StmtText
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
INSERT INTO [TrackRawWeb].[dbo].[EventTransactions]
([DomainId]
,[ApplicationId]
,[EventId]
,[InsertDate]
,[MsgId]
,[MsgDate]
,[BrowserName]
,[BrowserOS]
(1 row(s) affected)
StmtText
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|--Assert(WHERE:(CASE WHEN [Expr1044] IS NULL THEN (0) ELSE CASE WHEN [Expr1045] IS NULL THEN (1) ELSE CASE WHEN [Expr1046] IS NULL THEN (2) ELSE NULL END END END))
|--Nested Loops(Left Semi Join, OUTER REFERENCES:([TrackRawWeb].[dbo].[EventTransactions].[DomainId]), DEFINE:([Expr1046] = [PROBE VALUE]))
|--Nested Loops(Left Semi Join, OUTER REFERENCES:([TrackRawWeb].[dbo].[EventTransactions].[EventId]), DEFINE:([Expr1045] = [PROBE VALUE]))
| |--Nested Loops(Left Semi Join, OUTER REFERENCES:([TrackRawWeb].[dbo].[EventTransactions].[ApplicationId]), DEFINE:([Expr1044] = [PROBE VALUE]))
| | |--Clustered Index Insert(OBJECT:([TrackRawWeb].[dbo].[EventTransactions].[PK_EventTransactions1]), OBJECT:([TrackRawWeb].[dbo].[EventTransactions].[IDX_CollectionEvent]), SET:([TrackRawWeb].[dbo].[EventTransactions].[DomainId] = Rais
| | | |--Compute Scalar(DEFINE:([Expr1004]=CONVERT_IMPLICIT(datetime,[@4],0), [Expr1005]=newid(), [Expr1006]=CONVERT_IMPLICIT(datetime,[@5],0), [Expr1007]=CONVERT_IMPLICIT(varchar(50),[@6],0), [Expr1008]=CONVERT_IMPLICIT(varchar(50),[@
| | | |--Compute Scalar(DEFINE:([Expr1003]=getidentity((1413580074),(8),NULL)))
| | | |--Constant Scan
| | |--Clustered Index Seek(OBJECT:([TrackRawWeb].[dbo].[ApplicationSite].[ApplicationSite.PK_ApplicationID]), SEEK:([TrackRawWeb].[dbo].[ApplicationSite].[ApplicationID]=[TrackRawWeb].[dbo].[EventTransactions].[ApplicationId]) ORDERED FO
| |--Clustered Index Seek(OBJECT:([TrackRawWeb].[dbo].[CollectionEvent].[CollectionEvent.PK_EventID]), SEEK:([TrackRawWeb].[dbo].[CollectionEvent].[EventID]=[TrackRawWeb].[dbo].[EventTransactions].[EventId]) ORDERED FORWARD)
|--Clustered Index Seek(OBJECT:([TrackRawWeb].[dbo].[Domain].[Domain.PK_DomainID]), SEEK:([TrackRawWeb].[dbo].[Domain].[DomainID]=[TrackRawWeb].[dbo].[EventTransactions].[DomainId]) ORDERED FORWARD)
And the delete:
|--Clustered Index Delete(OBJECT:([TrackRawWeb].[dbo].[EventTransactions].[PK_EventTransactions1]), OBJECT:([TrackRawWeb].[dbo].[EventTransactions].[IDX_CollectionEvent]), WHERE:([TrackRawWeb].[dbo].[EventTransactions].[TransactionId] >= (1) AND [TrackRawWeb].[dbo].[EventTransactions].[TransactionId] <= (2)))
March 14, 2011 at 3:05 pm
First item of business: head into Tools > Options in SSMS, go to the Query Results node, then SQL Server, and change the following:
Results to Grid: Non XML Data: 65535
Results to Text: Maximum Number of Characters displayed in each column: 8192
Your plan above got truncated on longer lines.
Next item: any chance of getting the XML/.sqlplan files? I'm curious about estimated vs. actual rowcounts. When you're trying to determine why locks are escalating into each other, that's usually a good place to start.
Also, how are you determining blocking? Simply polling sysprocesses, a third-party tool, or does it just seem to take a while? What's the particular block/wait_type you're getting?
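If you don't have a monitoring tool handy, the standard DMVs will show who is blocking whom and on what wait type (a sketch; works on SQL Server 2005 and later):

```sql
-- Sessions currently blocked, their blockers, and the wait involved.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
```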
At a brief glance your indexing looks fine from the events listed in the plans that you posted.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA