July 7, 2008 at 12:54 pm
Creating a lookup table
SELECT TOP 1000 [uniquerowid]
INTO #TempDeleteLookup
FROM PRTran WITH (NOLOCK)
WHERE TranAmt = 0 and Qty = 0 and UnitPrice = 0 and Type_ = 'DW'
GO
DELETE PRTran
WHERE [uniquerowid] IN (SELECT [uniquerowid] FROM #TempDeleteLookup)
GO
Verify the main table
SELECT COUNT(*)
FROM #TempDeleteLookup
INNER JOIN PRTran ON PRTran.[uniquerowid] = #TempDeleteLookup.[uniquerowid]
GO
Truncate temp table
This process is taking more than 2 days to delete all the unwanted stuff, as it returns 15,000,000 rows to delete. I would like to make it faster somehow. I want to set up some kind of job that deletes all the zero transactions without locking the table and blocking my other processes.
Any suggestions?
July 7, 2008 at 1:10 pm
I must be missing something.
Why can't you just do this in a loop:
DELETE TOP (1000)
FROM PRTran
WHERE TranAmt = 0 AND Qty = 0 AND UnitPrice = 0 AND Type_ = 'DW'
Why do you need the other 2 statements?
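A minimal sketch of running that single statement in a loop, assuming SQL Server 2005 or later for the `TOP (n)` syntax; the table and column names are taken from the thread, and the one-second `WAITFOR` pause is an optional, hypothetical addition to let other sessions acquire locks between batches:

```sql
-- Batched delete: repeat until no matching rows remain.
DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (1000)
    FROM PRTran
    WHERE TranAmt = 0 AND Qty = 0 AND UnitPrice = 0 AND Type_ = 'DW';

    -- @@ROWCOUNT is the number of rows the DELETE just removed;
    -- the loop exits when a batch deletes nothing.
    SET @rows = @@ROWCOUNT;

    -- Optional: pause briefly so concurrent workloads can get in.
    WAITFOR DELAY '00:00:01';
END
```

Small batches keep each transaction short, which limits lock escalation and log growth compared with one 15-million-row delete.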
[font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
Proactive Performance Solutions, Inc. [/font][font="Verdana"] "Performance is our middle name."[/font]
July 7, 2008 at 2:07 pm
This topic refers to this earlier thread:
http://www.sqlservercentral.com/Forums/Topic514630-338-1.aspx
July 7, 2008 at 3:10 pm
OK, I checked it out. It still doesn't look right to me. I might use this approach for SQL 2000, but not SQL 2005. I've also seen variants where you first load up all of the row IDs to be deleted and then process through them in chunks, but I don't think that is necessary here.
Also, you are not still dropping the indexes, are you? Rebuilding them would take forever on a big table.
Try it with my single statement and see if that is faster.
July 7, 2008 at 3:12 pm
The other thing that you should consider is making an index for the columns on your WHERE clause.
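A sketch of such an index, assuming these four columns have no existing covering index; the index name is hypothetical, and you may want to reorder the key columns so the most selective one comes first:

```sql
-- Hypothetical supporting index for the delete's WHERE clause.
-- Leading on Type_ = 'DW' lets the seek skip unrelated rows entirely.
CREATE NONCLUSTERED INDEX IX_PRTran_ZeroDelete
ON PRTran (Type_, TranAmt, Qty, UnitPrice);
```

Note that the index itself must be maintained during the deletes, so it can be dropped once the cleanup job is finished if it serves no other queries.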
July 7, 2008 at 4:41 pm
But still, I just wonder why it would take 2 days to delete 15 million records.
July 7, 2008 at 5:02 pm
I'd have to see the whole procedure to answer that.
July 7, 2008 at 5:04 pm
How many rows in the table before the delete?
Are these rows useful? Do they need to be in the table for a while and then get deleted or could you put a trigger on the table to deny their insertion in the first place?
If there is no index on the columns you're filtering on, I'd be willing to bet that the 'select' is taking more time than the 'delete', and the tuning opportunities would involve eliminating it (as a previous poster suggested). With an index covering the query, the deletes of small batches of rows should minimize the locking problems you're trying to avoid.
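The trigger idea above could be sketched like this, assuming PRTran has no identity or computed columns (if it does, list the insert columns explicitly instead of `SELECT *`); the trigger name is hypothetical:

```sql
-- Hypothetical INSTEAD OF trigger that silently discards
-- all-zero 'DW' rows so they never need deleting later.
CREATE TRIGGER trg_PRTran_BlockZeroRows
ON PRTran
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO PRTran
    SELECT *
    FROM inserted
    WHERE NOT (TranAmt = 0 AND Qty = 0 AND UnitPrice = 0 AND Type_ = 'DW');
END
```

Whether silently dropping rows is acceptable depends on the application writing them; an alternative is to RAISERROR instead of filtering, so the insert fails visibly.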
July 8, 2008 at 6:23 am
Posting the execution plan would help in the evaluation. Not seeing your structure and only seeing a query, we're all limited to making guesses.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning