May 2, 2012 at 11:19 am
This proc was designed back in 2005, but I feel it can be improved, so I thought I'd ask for others' opinions on whether I am accomplishing this efficiently. It works, but it takes approximately 3 days to purge data based on a particular business date. The DB is about 4 TB before the purge. Here's the code below; your opinions are appreciated.
-- Called as: exec p_maint_usp_MarsPurgeForQALarge
create procedure p_maint_usp_MarsPurgeForQALarge
as
begin
    set nocount on

    declare @deletedCount int          set @deletedCount = 0
    declare @deletedTotal bigint       set @deletedTotal = 0
    declare @numberOfDays smallint
    declare @execStr varchar(500)
    declare @tableName varchar(50)
    declare @columnName varchar(50)
    declare @retentionPeriod smallint, @rowcount int
    declare @ERROR smallint, @deleteChunkSize int
    declare @startdate datetime
    declare @prevBusinessDate datetime
    declare @isRollup binary
    declare @rollupIds varchar(200)

    -- one row per table to purge, loaded from MarsDBPurgeConfig below
    create table #temp
    (
        tableName       varchar(50),
        columnName      varchar(50),
        retentionPeriod smallint,
        deleteChunkSize int,
        isRollup        binary,
        rollupIds       varchar(200)
    )

    set @rowcount = 0
    -- use max here in case more than one prevBusinessDate exists
    select @prevBusinessDate = max(prevBusinessDate)
    from businessdate (nolock)

    if @prevBusinessDate is null
    begin
        raiserror ('max(prevBusinessDate) returned NULL from businessdate proc:p_maint_MarsPurgeWeekly.sql', 16, 1)
        return 1
    end

    if datediff(day, getdate(), @prevBusinessDate) >= 0
    begin
        raiserror ('You are attempting to delete data that is not old enough [max(prevBusinessDate) returned with today]', 16, 1)
        return 1
    end

    -- load the purge configuration; the union just splits mxNodes from the other tables
    insert into #temp
    select tableName, columnName, retentionPeriod, deleteChunkSize, isRollup, rollupIds
    from MarsDBPurgeConfig (nolock)
    where isActive = 1 and tableName <> 'mxNodes'
    union
    select tableName, columnName, retentionPeriod, deleteChunkSize, isRollup, rollupIds
    from MarsDBPurgeConfig (nolock)
    where isActive = 1 and tableName = 'mxNodes'

    delete from #temp where tableName = 'PLConfig'
    print 'Starting purge process: ' + convert(varchar, getdate(), 101)

    -- update the history table
    declare @GUID uniqueidentifier
    declare @begin datetime     set @begin = getdate()
    declare @end datetime

    -- row counts and oldest business dates before the purge, for comparison afterwards
    select '>>Before Purge'
    select count(*) from mxnodes (nolock)
    select count(*) from mxvar (nolock)
    select count(*) from mxgreeks (nolock)
    select count(*) from mxstress (nolock)
    select min(businessDate) from mxnodes (nolock)
    select min(businessDate) from mxvar (nolock)
    select min(businessDate) from mxgreeks (nolock)
    select min(businessDate) from mxstress (nolock)
    while exists (select * from #temp)
    -- begin main loop: one pass per table remaining in #temp
    begin
        select * from #temp

        -- grab the next table's purge settings (any single row from #temp)
        set rowcount 1
        select @tableName       = tableName,
               @columnName      = columnName,
               @retentionPeriod = retentionPeriod,
               @deleteChunkSize = deleteChunkSize,
               @isRollup        = isRollup,
               @rollupIds       = rollupIds
        from #temp
        select @tableName, @columnName, @retentionPeriod, @deleteChunkSize, @isRollup, @rollupIds
        set rowcount 0

        set @numberOfDays = @retentionPeriod
        set @startdate = @prevBusinessDate - 1
        -- Select @tableName + ' and ' + @columnName + ' from ' + convert(varchar, @startdate, 101)

        exec p_updt_DeleteHistory @GUID output, @tableName, @begin, null, '<', @prevBusinessDate, 0, 0

        -- build the delete statement; rollup tables are restricted to the configured rollupIds
        set rowcount @deleteChunkSize
        if @isRollup = 1 and @tableName = 'mxnodes'
            set @execStr = 'delete from ' + @tableName + ' where ' + @columnName + ' < '''
                         + convert(varchar, @startdate, 101) + ''' and rollupid in (' + @rollupIds + ')'
        else if @isRollup = 1
            set @execStr = 'delete from ' + @tableName + ' where ' + @columnName + ' < '''
                         + convert(varchar, @startdate, 101)
                         + ''' and nodeid in (select id from mxnodes where rollupid in (' + @rollupIds + '))'
        else
            set @execStr = 'delete from ' + @tableName + ' where ' + @columnName + ' < '''
                         + convert(varchar, @startdate, 101) + ''''

        select 'Now executing: ' + @execStr + ' in chunks; next ' + convert(varchar(8), @deleteChunkSize) + ' rows'

        Delete_Loop:
        ----------
        begin tran  -- will result in either a commit or a rollback in the IF conditional below
        ----------
        exec (@execStr)
        select @rowcount = @@rowcount, @ERROR = @@error

        if @ERROR <> 0
        begin
            -- Select 'Error Rolling back!!'
            /* rollback and truncate the log */
            rollback
            checkpoint
            -- set the count to -1 in the history table
            exec p_updt_DeleteHistory @GUID, null, null, null, null, 0, -1, @numberOfDays
            return 1
        end
        else
        begin
            commit
        end

        if @rowcount < 1
        begin
            -- all done with this table, update the final counts
            set @deletedTotal = @deletedTotal + @rowcount
            -- Select 'Table ' + @tableName + ' Done!! at', current_timestamp
            set @end = getdate()
            exec p_updt_DeleteHistory @GUID, null, null, @end, null, null, 1, @deletedTotal
        end
        else
        begin
            -- not done yet: update the counts and end time (but not the completed flag), then loop
            set @end = getdate()
            set @deletedTotal = @deletedTotal + @rowcount
            goto Delete_Loop
        end

        print 'The purged table is: ' + @tableName
        delete from #temp where [tableName] = @tableName
        -- end Delete_Loop
    end
    -- end while

    select '>>After Purge'
    select count(*) from mxnodes (nolock)
    select count(*) from mxvar (nolock)
    select count(*) from mxgreeks (nolock)
    select count(*) from mxstress (nolock)
    select min(businessDate) from mxnodes (nolock)
    select min(businessDate) from mxvar (nolock)
    select min(businessDate) from mxgreeks (nolock)
    select min(businessDate) from mxstress (nolock)

    drop table #temp
end
Aurora
May 2, 2012 at 11:34 am
Honestly, all I can say for sure is that it looks massively over-complicated for a standard purge process, and that I'd have to work with it and your data to offer any concrete suggestions.
Some things you can look into would be:
How long does the setup stage for this take before it even begins deleting?
How long do the actual deletes take as a percentage of the process? Max, min, average and most common durations.
What's the overhead produced by all the friendly messages and other extras that aren't part of the actual delete process?
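For instance, just to get rough numbers on the first two, a couple of timestamps around each phase of the proc would do it. This is only a sketch, and the variable names are mine, not from your proc:

-- quick-and-dirty phase timing; wrap each phase of the proc the same way
declare @setupStart datetime, @setupEnd datetime
set @setupStart = getdate()

-- ... config load / #temp population goes here ...

set @setupEnd = getdate()
print 'Setup took ' + convert(varchar(12), datediff(second, @setupStart, @setupEnd)) + ' seconds'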
There are a number of minor details in it that would normally be targets for performance tuning, such as how it populates the temp table and then immediately deletes rows out of it, instead of having a better Where clause on the initial insert. (Also, there's no way to tell from this post how that sub-proc works.) But those probably get lost in the duration of the actual deletes: if tuning the temp table saves you 10 seconds but the whole thing takes 3 days to run, then tuning the temp table is pointless and a waste of effort.
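For what it's worth, the temp-table point would look something like this (column names taken from your proc; whether the mxNodes/non-mxNodes split in the UNION actually matters to the rest of the process is something you'd have to confirm):

-- sketch only: exclude PLConfig in the initial load instead of deleting it afterwards
insert into #temp (tableName, columnName, retentionPeriod, deleteChunkSize, isRollup, rollupIds)
select tableName, columnName, retentionPeriod, deleteChunkSize, isRollup, rollupIds
from MarsDBPurgeConfig with (nolock)
where isActive = 1
  and tableName <> 'PLConfig'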
Depending on the actual delete statements, it's possible that indexes could be tuned to make the Where clauses on the deletes more efficient. That's probably where the bulk of improvement would come from. But that depends on the above three questions being answered first.
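As a sketch of the kind of thing I mean (table, column, and index name here are placeholders based on the tables in your proc; the right index depends entirely on the real delete predicates and the indexes you already have):

-- hypothetical supporting index so the chunked deletes can seek on the purge column
-- instead of scanning; evaluate against the real workload before creating it
create nonclustered index IX_mxvar_businessDate
    on dbo.mxvar (businessDate)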
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 2, 2012 at 1:01 pm
You stated the DB was 4 TB before the purge; what about after? As GSquared has alluded to, if you have a massive amount of data to delete it will take a while. If you experience slowdowns while this is running, you can use a smaller chunk size and throw in a small wait between delete loops to let any transactions that are locked out of the table complete.
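Something like this pattern, for example (table name, cutoff, chunk size, and delay are all placeholders; your proc would drive these from its config table):

-- illustrative chunked delete with a short pause between batches
declare @cutoff datetime
set @cutoff = dateadd(day, -90, getdate())   -- placeholder retention cutoff

while 1 = 1
begin
    delete top (10000) from dbo.mxvar        -- smaller chunk size
    where businessDate < @cutoff

    if @@rowcount = 0
        break

    waitfor delay '00:00:02'                 -- brief breather so blocked sessions can get in
end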
For performance issues, see how we like them posted here: How to Post Performance Problems - Gail Shaw
Need to split some strings? Jeff Moden's DelimitedSplit8K
Jeff Moden's Cross tab and Pivots Part 1
Jeff Moden's Cross tab and Pivots Part 2
May 2, 2012 at 1:43 pm
Thanks guys, here is some more info:
Step 1 (marspurgeforQA): runs procedure exec p_maint_usp_MarsPurgeForQA
This step took 12 hrs 50 mins the last time it ran.
Step 2: rebuild indexes and update statistics
This step took 3 hours.
Step 3: shrink ALL of the MARS database files down to their smallest size, to be able to fit in the older environments
This step took 19.5 hours.
This takes the DB down to about 2 TB.
Aurora
May 2, 2012 at 1:43 pm
capn.hector (5/2/2012)
You stated the DB was 4 TB before the purge; what about after? As GSquared has alluded to, if you have a massive amount of data to delete it will take a while. If you experience slowdowns while this is running, you can use a smaller chunk size and throw in a small wait between delete loops to let any transactions that are locked out of the table complete.
Actually, too small a chunk size might be the problem here. It calls for checkpoints and incurs a bunch of other overhead for each chunk. If the chunks are too small, that overhead might cost more than the delete time itself, which means larger chunks might be needed.
More likely though, the deletes are being done based on clustered index scans or something comparably inefficient. A Delete with a Where has to find the rows to delete first, before it can begin to do anything with them, and inappropriate indexing can be just as much of a performance killer on them as any other DML operation.
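One cheap way to check that is to run a single representative chunk with I/O and timing statistics on and look at the actual plan for scans. The statement below is just a stand-in for whatever the dynamic SQL in the proc actually generates (table, cutoff, and chunk size are placeholders):

-- measure one representative chunk; check the actual execution plan at the same time
set statistics io on
set statistics time on

delete top (10000) from dbo.mxvar
where businessDate < '20120101'   -- placeholder cutoff

set statistics io off
set statistics time off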
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
May 2, 2012 at 3:36 pm
MegaDBA (5/2/2012)
Thanks guys, here is some more info:
Step 1 (marspurgeforQA): runs procedure exec p_maint_usp_MarsPurgeForQA
This step took 12 hrs 50 mins the last time it ran.
Step 2: rebuild indexes and update statistics
This step took 3 hours.
Step 3: shrink ALL of the MARS database files down to their smallest size, to be able to fit in the older environments
This step took 19.5 hours.
This takes the DB down to about 2 TB.
And then step 3 most likely fragmented all the indexes you rebuilt in step 2.
May 2, 2012 at 3:45 pm
That is exactly what I was thinking, Lynn. I am reviewing the process, and for my developers I always ask for other opinions so they don't think I shoot down everything they do. To me the proc is overly complicated for what it's doing...
Is there a better way to get recent data into the QA DB, which is size-restricted? How would you go about doing this? Currently, we restore a backup and then run this crazy process.
Christine
Aurora
May 3, 2012 at 7:26 am
If you reverse the sequence, shrinking the files first and then rebuilding the indexes, it won't speed things up, but it will at least leave you with intact indexes. Right now, all the time spent rebuilding indexes before shrinking the files is completely wasted.
That would be the minimum thing to do.
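As a rough outline of that ordering (the file, table, and target size below are placeholders, not your actual MARS names, and the real job would loop over all of the files and tables):

-- step 2 (was step 3): shrink the data files first
dbcc shrinkfile (N'MARS_Data1', 1)        -- placeholder logical file name / target size in MB
go
-- step 3 (was step 2): rebuild indexes and update statistics afterwards,
-- so the rebuilt indexes are not immediately re-fragmented by the shrink
alter index all on dbo.mxvar rebuild
go
update statistics dbo.mxvar with fullscan
go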
Beyond that, I'd need more familiarity with what's being done with the database, and what's in it, to offer much. Personally, I'd be inclined towards building a more direct, less dynamic process that leaves more on the shoulders of SQL Server's automatic handling of things like checkpoints and forces the issue less. It's easier to maintain, debug, and refactor that way.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon