April 27, 2011 at 1:51 pm
I have a table with over 300 million rows. I would like to delete the 2010 data, but I don't want to delete it all in one go; rather, I'd like to delete it a day at a time. So I started writing a simple WHILE loop that will print out a delete statement per day, but I am stuck.
Here is what I have so far:
declare @startdate Datetime
declare @enddate Datetime
set @startdate = getdate()-480
set @enddate = getdate()-90
set @vStart = select convert(varchar,@startdate, 102)
print @startdate
print @enddate
WHILE (@startdate < @enddate)
BEGIN
print 'delete from vending where DetectedDate <' + @startdate
set @startdate = @startdate+1
END
But I'm not having much luck. Wondering if someone can give me some input here.
Thanks
April 27, 2011 at 1:57 pm
The variable @vStart is not declared
Dan
If only I could snap my fingers and have all the correct indexes appear and the buffer clean and.... Start daydream here.
April 27, 2011 at 1:58 pm
This will fix what you have:
declare @startdate Datetime
declare @enddate Datetime
set @startdate = getdate()-480
set @enddate = getdate()-90
--set @vStart = select convert(varchar,@startdate, 102)
print @startdate
print @enddate
WHILE (@startdate < @enddate)
BEGIN
print 'delete from vending where DetectedDate <''' + CONVERT(varchar(10), @startdate, 101) +''''
set @startdate = @startdate+1
END
______________________________________________________________________________
How I want a drink, alcoholic of course, after the heavy lectures involving quantum mechanics.
April 27, 2011 at 2:13 pm
Thanks much. Greatly appreciate it.
April 27, 2011 at 2:21 pm
Sure thing. But the question lingers: why generate the delete statements like this? Why not just delete the whole year at once? I'm very curious about your reasoning. :unsure:
April 28, 2011 at 6:21 am
Yes, one could delete a big lump at a time; both methods are valid.
Sometimes, simply due to the sheer size of the data (the original post mentions 300 million rows), it is gentler on a system to delete in smaller batches.
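For what it's worth, here is one common way to do the batched delete without generating per-day statements at all: loop on DELETE TOP (n) until nothing is left to remove. This is only a sketch; the table and column names come from the original post, and the batch size and date boundaries are assumptions you would tune for your own system.

```sql
-- Delete 2010 rows in small batches to keep each transaction,
-- and therefore the log and lock footprint, modest in size.
-- Batch size of 10,000 is an assumption; adjust to taste.
DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM vending
    WHERE DetectedDate >= '20100101'
      AND DetectedDate <  '20110101';

    SET @rows = @@ROWCOUNT;  -- 0 once all 2010 rows are gone
END
```

Running in a loop like this also gives the log a chance to be reused between batches (in SIMPLE recovery) or backed up (in FULL recovery), rather than growing to hold one enormous delete.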
---------------------------------------------
If data can screw you, it will; never assume the data are correct.
April 28, 2011 at 6:41 am
Yeah, and assuming you only have 16 months of data in there, you're likely flushing out 150 to 250 million rows.
At that point I'd consider partitioning the table and simply dropping outdated partitions rather than running full deletes.
Also, if you're really dropping that much data, it might be worth it either to scale down the batches even more (maybe one hour at a time) or simply to copy the good rows to a new table and drop the old one...
I've also heard that bcp out / truncate / bcp in can do it really fast, though I've never tried that one.
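To illustrate the partition idea: with the table partitioned by date, removing an old range becomes a metadata-only operation instead of a row-by-row delete. This is just a sketch; every name below (the partition function, scheme, staging table, and boundary dates) is hypothetical, and the staging table must already exist with an identical schema on the same filegroup.

```sql
-- Hypothetical monthly partitioning on DetectedDate (SQL Server 2005+).
CREATE PARTITION FUNCTION pf_monthly (datetime)
    AS RANGE RIGHT FOR VALUES ('20100101', '20100201', '20100301');

CREATE PARTITION SCHEME ps_monthly
    AS PARTITION pf_monthly ALL TO ([PRIMARY]);

-- To retire an old month: switch its partition out to an empty
-- staging table (metadata-only, nearly instant), then truncate it.
ALTER TABLE vending SWITCH PARTITION 1 TO vending_staging;
TRUNCATE TABLE vending_staging;
```

Compared with deleting 150+ million rows, a SWITCH plus TRUNCATE touches no data pages and generates almost no log, which is why it scales so well for date-based archiving.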