May 30, 2013 at 5:08 am
Hi,
I'm trying to delete some old data to speed up a SELECT statement, but it takes too much time to delete.
delete from Table_name
where CAST( ReportTime as date) < CAST((GETDATE()-30) as date)
The table has 1,00,000 records to delete.
It takes more than 15 minutes.
Thanks in advance
May 30, 2013 at 5:19 am
Is it a one-off task?
There are a few ways to speed up deletes. You can choose one depending on your exact circumstances.
1. Copy the records you want to keep into a new table. Drop the existing table. Rename the new table to the original name.
2. Copy the records you want to keep into a new table. Truncate the existing table. Copy the records back from the new one.
3. Batch the deletes. Here is the idea: http://markmintoff.com/2012/03/t-sql-batch-delete/ (see the sketch below). Just note, you don't really need a transaction there...
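For illustration, a minimal batching sketch along those lines, assuming the table and column names from the original post (the batch size is just a placeholder; tune it to your log and blocking tolerance):
DECLARE @BatchSize int;
DECLARE @MinDate datetime;
SET @BatchSize = 10000;
SET @MinDate = DATEADD(day, -30, GETDATE());
WHILE 1 = 1
BEGIN
    -- Delete in small chunks so the transaction log and locks stay manageable.
    DELETE TOP (@BatchSize)
    FROM Table_name
    WHERE ReportTime < @MinDate;
    IF @@ROWCOUNT < @BatchSize
        BREAK;
    -- Optional pause so concurrent inserts aren't blocked for long stretches.
    WAITFOR DELAY '00:00:01';
END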
May 30, 2013 at 5:21 am
suneet.mlvy (5/30/2013)
Hi, I'm trying to delete some old data to speed up a SELECT statement, but it takes too much time to delete.
delete from Table_name
where CAST( ReportTime as date) < CAST((GETDATE()-30) as date)
The table has 1,00,000 records to delete.
It takes more than 15 minutes.
Thanks in advance
Everything Eugene said is accurate. I also think you need to look again at your WHERE clause. Your DELETE is causing a table scan. I'm going to assume the table is quite large (to delete 1 million records), and a table scan due to the CAST(ReportTime AS DATE) will drastically slow it down, as each row in the table needs to be checked.
May 30, 2013 at 5:26 am
I see you're casting ReportTime for every row in the table. You're also casting GetDate() and doing math with it. You end up deleting 1M rows, so it's safe to say you have more than 1M rows in the table. What data type is ReportTime? If you're storing a date in the field, then it really should be a date data type.
I don't know what your table looks like or what indexes you have on it, but if you stored ReportTime as a date, you wouldn't have to do > 1M casts and you'll have a SARGable predicate where you can use an index seek if you index the field. Please be sure to consider the table usage when creating indexes; you don't want maintenance of the index to outweigh its benefit.
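For example, if ReportTime isn't already indexed, something along these lines would give a SARGable predicate an index to seek on (the index name is just a placeholder):
-- Hypothetical index; weigh the insert overhead against the benefit to the delete.
CREATE NONCLUSTERED INDEX IX_Table_name_ReportTime
    ON Table_name (ReportTime);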
May 30, 2013 at 5:27 am
Thanks for reply,
Actually, it's our most frequently used table and mostly inserts happen on it.
We get 1,00,000 records on a daily basis and it always contains at least 50,00,000 records.
Is there any way to speed up the delete statement?
May 30, 2013 at 5:31 am
Is the delete statement going to become part of a daily maintenance job on the table or is this a one-time task?
If you're creating a daily maintenance job, what indexes are on the table now?
Also, what does the rest of your table look like?
May 30, 2013 at 5:34 am
Ed Wagner,
Thanks for the reply.
The data type of ReportTime is datetime and I already have a nonclustered index on it. I am using CAST to convert it from datetime to date.
May 30, 2013 at 5:38 am
Ed Wagner (5/30/2013)
Is the delete statement going to become part of a daily maintenance job on the table or is this a one-time task? If you're creating a daily maintenance job, what indexes are on the table now?
Also, what does the rest of your table look like?
Actually, we do it on a monthly basis by transferring the data into another table and then renaming it,
but we want to make it a daily job so that we can schedule it.
There are 9 more columns; none of them contain large data, only things like time, name, file name, etc.
May 30, 2013 at 5:42 am
I don't know if the time part of the datetime really matters to you, but if it does, you can adjust the minimum date you keep to account for the 23:59:59.
I would approach it with something like this:
DECLARE @dtmMin Datetime;
SET @dtmMin = DATEADD(day, -30, GetDate());
DELETE FROM Table_name
WHERE ReportTime < @dtmMin;
You'll completely avoid having to CAST every single row and end up with a SARGable predicate so you can use your index.
See if this approach has the desired effect. You can substitute a SELECT for the DELETE to see if you get the same number of rows you expect and then you can test the speed on a test table.
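For example, a quick check along these lines (same names as above) shows how many rows the DELETE would remove before you run it for real:
DECLARE @dtmMin Datetime;
SET @dtmMin = DATEADD(day, -30, GetDate());
-- Count the rows the delete would touch, without changing any data.
SELECT COUNT(*)
FROM Table_name
WHERE ReportTime < @dtmMin;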
May 30, 2013 at 7:08 am
Date-based cleanup like this is usually a very good candidate for partitioning (if you have Enterprise Edition). You would have a one-time downtime to copy the data to the partitioned table, but all your future cleanups will be nearly instant.
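To illustrate, a rough sketch only (the names here are hypothetical, the table would have to be rebuilt on the partition scheme, and the staging table must match its structure and filegroup):
-- Monthly partitions on ReportTime; new boundaries have to be added over time with SPLIT RANGE.
CREATE PARTITION FUNCTION pf_ReportTime (datetime)
    AS RANGE RIGHT FOR VALUES ('2013-04-01', '2013-05-01', '2013-06-01');
CREATE PARTITION SCHEME ps_ReportTime
    AS PARTITION pf_ReportTime ALL TO ([PRIMARY]);
-- Once Table_name is rebuilt on ps_ReportTime(ReportTime), purging the oldest
-- month is a metadata-only switch followed by a truncate:
ALTER TABLE Table_name SWITCH PARTITION 1 TO Table_name_Staging;
TRUNCATE TABLE Table_name_Staging;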
May 31, 2013 at 8:54 am
The page space used by deleted rows isn't even freed up until the clustered index is rebuilt, so you're still scanning the same amount of data. If your table is properly indexed and the select statements are written using SARGable expressions in the WHERE clause, then I doubt that deleting rows from the table will improve the performance of your queries. The advice given above about not wrapping functions around your date columns should be applied to your select queries as well.
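For example, after a large purge you could reclaim that space with a rebuild during a maintenance window (substitute your own table and index names):
-- Rebuild all indexes on the table to release the space left by deleted rows.
ALTER INDEX ALL ON Table_name REBUILD;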
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
May 31, 2013 at 9:08 am
Post the actual execution plan for the query (save it as a .sqlplan and attach it to a post on this thread).
Also, we need the DDL for the table, including the indexes defined.
As for the CASTing of the datetime column to date, that is still SARGable in this case and will still make use of indexes if they help the query.