March 10, 2008 at 9:59 am
THANKS! OK, just so I understand... the loop query deletes the rows 100 at a time (small batches) until there are 0 rows left, correct? Then the loop stops.
I ran it, and I got the "100 rows affected" line several times instead of only once.
I am planning to put that query in a job which will run every 5 minutes.
If that is the case, should I just use this query for the first run, where about 2 million rows out of 20 million would be deleted? I was planning to use the approach where I disable the nonclustered indexes, but I just noticed that since this is a history table it doesn't have any. Or should I just use the plain DELETE FROM table WHERE... script?
March 10, 2008 at 10:28 am
One single run of the loop will "wipe out" all rows created after 1/5/08. If that's the two million, then yes - one single run will handle them all. I certainly wouldn't schedule this every 5 minutes.
Also - for what it's worth - I'd think that 100 rows at a time is a tad too low. That will actually slow things down a bit. There's a sweet spot somewhere between 2M (too big) and 100 (too small). I'd start around 5 or 10 thousand instead.
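Something like this, just as a sketch of the bigger batch size (tblMessages and datecreated are the names used elsewhere in this thread, and 10,000 is only a starting point to tune from):
SET ROWCOUNT 10000 -- each DELETE now affects at most 10,000 rows

WHILE 1 = 1
BEGIN
    DELETE FROM tblMessages
    WHERE datecreated >= '20080105' -- same cutoff as the original delete (1/5/08)

    IF @@ROWCOUNT = 0 BREAK -- nothing left to delete, stop looping
END

SET ROWCOUNT 0 -- reset, or later statements stay capped at 10,000 rows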
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
March 10, 2008 at 10:39 am
So now that the problem of deleting the rows is solved, I must ask why you would want to schedule this to run every five minutes. Isn't there a minimum amount of time the rows should stay in the table? Say 4.99 minutes after the last delete you insert a row into the table you delete from; almost immediately that row will get deleted. If the data is so temporary and possibly only required by a single transaction, why store it in one huge table? You seem to be selecting the rows to delete based on some creation date, so can you get new rows into this table that do not match your delete filter? (Sorry, I'm just too curious :))
Regards,
Andras
March 10, 2008 at 10:58 am
This table is fed from a table that gets rows inserted every second. I already have a job that copies rows from table1 into it once the creation date is older than a week. Then, since this history table (tblMessages) only needs to keep the current month's data, I am changing the filter to
DATEDIFF(DAY, TimeCreated, GETDATE()) > 30
But you are right, there is no point in running this every 5 minutes; I am going to try to convince the DBA to run it every night.
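Roughly, the nightly version would be something like this (only a sketch, assuming the tblMessages table and a 10,000-row batch size; later replies in this thread show a more index-friendly way to write the date filter):
SET ROWCOUNT 10000

WHILE 1 = 1
BEGIN
    DELETE FROM tblMessages
    WHERE DATEDIFF(DAY, TimeCreated, GETDATE()) > 30 -- keep only roughly the last 30 days

    IF @@ROWCOUNT = 0 BREAK
END

SET ROWCOUNT 0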
March 10, 2008 at 11:03 am
Thank you all!!! I appreciate it!!! 😀
March 10, 2008 at 11:07 am
DBA (3/10/2008)
This table is fed from a table that gets rows inserted every second. I already have a job that copies rows from table1 into it once the creation date is older than a week. Then, since this history table (tblMessages) only needs to keep the current month's data, I am changing the filter to DATEDIFF(DAY, TimeCreated, GETDATE()) > 30
But you are right, there is no point in running this every 5 minutes; I am going to try to convince the DBA to run it every night.
This way it of course makes much more sense. Do note that a nightly run is the smallest granularity at which the above makes sense (DATEDIFF(DAY, ...) effectively chops off the time part and does the diff on whole days only 🙂). If you want to run the query more frequently, you should do the DATEDIFF on hours, minutes, etc.
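A quick illustration of that day-boundary behaviour (the dates are arbitrary):
SELECT DATEDIFF(DAY, '20080310 23:59', '20080311 00:01')    -- returns 1: a midnight boundary was crossed, even though only 2 minutes elapsed
SELECT DATEDIFF(MINUTE, '20080310 23:59', '20080311 00:01') -- returns 2: minute-level granularity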
Regards,
Andras
March 11, 2008 at 1:02 am
Well... the function (DATEDIFF) can be avoided just by adding one variable, calculating the cutoff date once, and then using a straight comparison in the WHERE clause.
This way the chances of index usage are better if you have an index on datecreated.
declare @myrefdate datetime
set @myrefdate = dateadd(dd, -31, getdate()) -- cutoff computed once; roughly the same rows as DATEDIFF(DAY, datecreated, GETDATE()) > 30

SET ROWCOUNT 10000 -- limit each delete to 10,000 rows (SQL 2000 style batching)

while 1 = 1
begin
    delete from tblMessages
    where datecreated <= @myrefdate -- straight, index-friendly comparison: only rows older than the cutoff

    if @@rowcount = 0 break -- nothing left to delete
end

SET ROWCOUNT 0 -- reset, or later statements stay capped at 10,000 rows
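If there is no index on datecreated yet, a plain nonclustered index (the name here is only illustrative) is what lets that straight comparison seek instead of scanning the whole history table:
CREATE INDEX IX_tblMessages_datecreated ON tblMessages (datecreated)
On a big history table that is only ever purged by date, it is worth weighing the extra index maintenance on every insert against the speed-up of the nightly delete.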
Johan
Learn to play, play to learn!
Don't drive faster than your guardian angel can fly...
but keeping both feet on the ground won't get you anywhere :w00t:
March 11, 2008 at 6:35 am
thank you!
November 17, 2008 at 11:10 pm
Deleting bulk rows from a table (about 2 million or more) can be done with a simple DELETE statement:
DELETE FROM tablename -- but on its own this is not a feasible solution
Instead, the following prerequisites should be considered:
#1. Minimize the logging impact before deleting the bulk rows, so that the log records generated by the delete operation do not fill up the log file.
#2. Remove (or disable) any triggers on the table, since they may fire during the deletion process.
#3. Check for any indexes (particularly nonclustered indexes) and referential constraints before the deletion.
Finally, the table is ready for the bulk delete with a plain DELETE statement (a rough sketch of these steps is below).
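A minimal sketch of steps #2 and #3 on SQL Server 2005, assuming a hypothetical dbo.tblMessages with a trigger and a nonclustered index (all names are illustrative, whether disabling anything is appropriate is a case-by-case call, and the logging in #1 is a recovery-model / log-backup decision for the DBA that is not shown):
-- #2: stop triggers firing during the bulk delete
ALTER TABLE dbo.tblMessages DISABLE TRIGGER ALL;

-- #3: stop a nonclustered index being maintained row by row (illustrative index name)
ALTER INDEX IX_tblMessages_datecreated ON dbo.tblMessages DISABLE;

-- the bulk delete itself (ideally batched, as shown earlier in the thread)
DELETE FROM dbo.tblMessages WHERE datecreated <= DATEADD(DAY, -31, GETDATE());

-- put everything back: a disabled nonclustered index must be rebuilt to be usable again
ALTER TABLE dbo.tblMessages ENABLE TRIGGER ALL;
ALTER INDEX IX_tblMessages_datecreated ON dbo.tblMessages REBUILD;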
November 17, 2008 at 11:26 pm
Assumptions:
1) Using SQL Server 2005 (this is a SQL Server 2005 forum)
2) Database is using the Full Recovery Model
3) All records to be deleted are based on the following original delete statement posted by the OP
delete *
from tablename
where datecreated>= '1/5/08'
declare @deletecount int;
declare @backupfile varchar(260);
set @deletecount = 1;

while exists(select 1 from dbo.tablename where datecreated >= '20080105') -- matches the original >= '1/5/08', assuming mm/dd/yy
begin
    delete top (10000)
    from dbo.tablename
    where datecreated >= '20080105';

    -- BACKUP LOG needs a literal or a variable for the file name, so build it first
    set @backupfile = 'd:\backups\MyDatabase_' + cast(@deletecount as varchar(10)) + '.trn';

    backup log [MyDatabase] -- name of database where tablename exists
    to disk = @backupfile;

    set @deletecount = @deletecount + 1;
end
November 17, 2008 at 11:29 pm
Lynn Pettis (11/17/2008)
Assumptions:
1) Using SQL Server 2005 (this is a SQL Server 2005 forum)
2) Database is using the Full Recovery Model
3) All records to be deleted are based on the following original delete statement posted by the OP
delete *
from tablename
where datecreated>= '1/5/08'
declare @deletecount int;
declare @backupfile varchar(260);
set @deletecount = 1;

while exists(select 1 from dbo.tablename where datecreated >= '20080105') -- matches the original >= '1/5/08', assuming mm/dd/yy
begin
    delete top (10000)
    from dbo.tablename
    where datecreated >= '20080105';

    -- BACKUP LOG needs a literal or a variable for the file name, so build it first
    set @backupfile = 'd:\backups\MyDatabase_' + cast(@deletecount as varchar(10)) + '.trn';

    backup log [MyDatabase] -- name of database where tablename exists
    to disk = @backupfile;

    set @deletecount = @deletecount + 1;
end
Had the opportunity to look through the rest of the thread, saw that this is actually on SQL Server 2000. Here is my change:
set rowcount 10000

declare @deletecount int;
declare @backupfile varchar(260);
set @deletecount = 1;

while exists(select 1 from dbo.tablename where datecreated >= '20080105') -- matches the original >= '1/5/08', assuming mm/dd/yy
begin
    delete
    from dbo.tablename
    where datecreated >= '20080105'

    -- BACKUP LOG needs a literal or a variable for the file name, so build it first
    set @backupfile = 'd:\backups\MyDatabase_' + cast(@deletecount as varchar(10)) + '.trn';

    backup log [MyDatabase] -- name of database where tablename exists
    to disk = @backupfile;

    set @deletecount = @deletecount + 1;
end

set rowcount 0