Delete statement taking a LONG TIME

  • I have a table with about 6 million records. I needed to delete several hundred thousand records from the table, so I did a simple delete statement with a where clause.

    My problem is that it took a LONG, LONG time, we're talking about an hour. The database this table is located in is set to Simple recovery mode, so I figured there would not be any logging.

    Can someone tell me how to make this run faster?

    Is this normal for it to take this long?

    If the database is set to Simple does the delete statement still log?

    Any answers would be greatly appreciated.

    Thanks

    Can't answer about the time; a lot depends on the processor(s), how busy the server is, etc.

    Even in simple recovery everything is still logged. The idea is that simple recovery is basically transaction-level logging: if your delete abended for some reason, it would all be rolled back. Once it completes, though, there is basically no record of it.

    With simple recovery you can only restore to the last backup; you have no logs to apply. So if you back up at 8 am and your server explodes at 5, you have no way to recover except back to 8 am.

    KlK, MCSE

  • Your log may also increase in size since you're doing it in a single transaction. For something that large, it's better to run it in chunks that complete in a reasonable amount of time; see the sketch below. I think Steve J has an article posted about the chunking part. In addition to hardware and load, it's also the number of indexes that have to be updated.
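
    Roughly something like this, using SET ROWCOUNT since that is the old-style way to cap a delete; the table and column names here are only placeholders:

    SET ROWCOUNT 10000   -- cap each pass at 10,000 rows

    WHILE 1 = 1
    BEGIN
        DELETE FROM MyBigTable WHERE column_name = 'I'

        IF @@ROWCOUNT = 0 BREAK   -- nothing left matching the condition

        -- each batch commits on its own, so in simple recovery the log
        -- space can be reused after a checkpoint instead of piling up
    END

    SET ROWCOUNT 0   -- back to unlimited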

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • Andy, are you saying that indexes are updated during this process? I guess what I am asking is: if I have a clustered index, is the index moving data around on the pages while the delete is occurring, or do I need to reindex in order to straighten the index out?

    Also, as far as the chunking goes, my where clause consists of:

    where column_name = 'I'

    It's pretty basic and I thought it would have run pretty darn quick, but it didn't.

    Any other suggestions would be great.

    Thanks

  • Agree with kknudson..."Even in simple recovery everything is still logged"

    Check your log file usage while this transaction is going...

    Check sp_who and see what the waitresource column is saying

    Check whether a disk I/O limitation is the reason for the delay

    Check whether the column in the where clause is covered by a clustered or non-clustered index

    While looking at the index, check that the column order of the index is based on the density of the data

    Check dbcc showcontig for this table
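
    A few of those checks, roughly (the table name is just a placeholder):

    -- blocking and wait resources
    SELECT spid, blocked, waittype, lastwaittype, waitresource
    FROM master..sysprocesses
    WHERE blocked <> 0

    -- log file usage per database
    DBCC SQLPERF(LOGSPACE)

    -- indexes on the table, to see what supports the where clause
    EXEC sp_helpindex 'MyBigTable'

    -- fragmentation of the table
    DBCC SHOWCONTIG ('MyBigTable')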

    Hope this solves the problem.

    Prakash

    Prakash Heda
    Lead DBA Team - www.sqlfeatures.com
    Video sessions on Performance Tuning and SQL 2012 HA

  • If you are deleting a large chunk of data from a table, every index will play a factor, as the indexes are updated as each row disappears. Many times it will help performance to drop all indexes first. But do not drop the index that supports your where condition, as that would cause a table scan, which is worse. Also, normally try to keep your clustered index if you will be deleting from the front of the index and not sporadically; but if you think you need to drop it, drop all the non-clustered indexes you are not keeping first. The reason is that any change to the clustered index will cause the non-clustered indexes to rebuild, slowing you down.
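
    As a rough sketch (the index and table names are made up):

    -- drop the non-clustered indexes that do not support the where clause
    DROP INDEX MyBigTable.IX_MyBigTable_OtherCol

    -- ... run the delete ...

    -- then recreate them afterwards
    CREATE NONCLUSTERED INDEX IX_MyBigTable_OtherCol ON MyBigTable (OtherCol)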

    Be aware that any transactions that lock the row you are deleting, or the page it is on, even for brief periods can slow you down as well, since your process may have to wait.

    Also, even in simple recovery, as has been stated, all transactions are logged in case a rollback needs to occur. The primary issue here is transaction log size and growth rate. If the log is small and needs to grow, you may be hurting performance because the growth increment may not be large enough. For example, if the transaction log starts at 20MB, the growth rate is 10% (the default), and it will ultimately reach 2GB, then it will fill and grow, first by 2MB, then 2.2MB, and so on, progressively larger. Each time the growth occurs it causes the delete to pause while the log recalculates and grows. If you increase the growth increment, this means fewer growths overall and a faster return. When deleting large chunks I do the following.

    Do a SELECT COUNT(*) FROM tbl WHERE col = condition for the delete.

    Add up the bytes of a row of data by using the values in the design of the table with EM (fastest way offhand).

    Multiply that value by 1.25 (there is some extra overhead in the transaction log markup), then by the number returned from COUNT(*).

    That gives me a good potential value for the log size to reach; then I divide by 10 and change the log growth value in MB to equal that (at least during the delete).
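
    As a rough worked example (the numbers and names are made up):

    SELECT COUNT(*) FROM MyBigTable WHERE column_name = 'I'   -- say this returns 400,000

    -- 400,000 rows * 200 bytes per row * 1.25 = ~100,000,000 bytes, call it 100MB of log
    -- 100MB / 10 = 10MB growth increment for the duration of the delete

    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Log, FILEGROWTH = 10MB)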

    Another item that may help is to turn off AUTO UPDATE STATISTICS and AUTO SHRINK in the database options during the processing, so neither kicks in and potentially slows you down as well.
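
    For example (the database name is a placeholder):

    ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS OFF, AUTO_SHRINK OFF

    -- ... run the delete, then set the options back to whatever they were before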

    As pointed out by Andy, if you can find another condition or two to reduce the number of deletes that occur at one time, and do it in steps to delete in groups or chunks, it may help as well, since the transaction log will fill and then empty, and not as much movement, growth, and so on will occur.

    Finally, when doing large deletes, pick a time when overall transaction volume is low so that the delete is not competing with other things as much.

  • The best practice is to dump the data that is needed into a staging table and then rename it to the original name. This is much faster than deleting from big tables with indexes.
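
    Something along these lines (names are placeholders; under simple recovery the SELECT INTO is also minimally logged):

    -- copy the rows you want to KEEP into a staging table
    SELECT *
    INTO MyBigTable_staging
    FROM MyBigTable
    WHERE column_name <> 'I'

    -- swap it into place
    DROP TABLE MyBigTable
    EXEC sp_rename 'MyBigTable_staging', 'MyBigTable'

    -- note: SELECT INTO does not copy indexes, constraints, or triggers,
    -- so those have to be recreated on the renamed table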

  • In addition to everything said so far (use an index!), check whether you have foreign keys pointing to this table from other tables, or whether there are DELETE triggers on the table. Those would slow down your delete statement.
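
    A quick way to check (the table name is a placeholder):

    EXEC sp_helpconstraint 'MyBigTable'   -- constraints, including foreign keys that reference the table
    EXEC sp_helptrigger 'MyBigTable'      -- any triggers defined on the table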


  • Another good reason for breaking the task into chunks is in case of rollbacks. If you perform the delete in one big chunk and are unlucky enough to have to roll back the entire transaction, you could get yourself into a lot of trouble. I know of a DBA who did this. It was slightly more complicated in that they were deleting from a number of tables, but to cut a long story short, the deletion took so long they had to try to cancel it as they were running out of time, and this was on a LIVE system! Unfortunately for those concerned, things dragged on for hours!

    Nigel Moore
    ======================

  • We just solved a similar problem here -- a sproc was taking > 1 minute to delete less than a couple dozen rows from a table with about 24k rows total. Ended up being foreign key constraint checking from the original table back to an auxiliary table with over 180k rows. Each row in this aux table had 2 foreign keys back to the "deleted" table.

    We reworked the logic to do the delete during off-hours. Due to the nature of the table, none of the deleted rows would have had any foreign key relationships back to the aux table, so we just scheduled a sproc to disable the constraints, delete the rows, then enable the constraints again.
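
    Roughly what the sproc did (the table names and delete condition are made up for illustration):

    -- stop checking the foreign keys on the auxiliary table
    ALTER TABLE AuxTable NOCHECK CONSTRAINT ALL

    DELETE FROM MainTable WHERE status = 'X'

    -- turn checking back on; WITH CHECK also revalidates the existing rows
    ALTER TABLE AuxTable WITH CHECK CHECK CONSTRAINT ALL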

    Our in-house development methodology does not actively delete records; we set a deleted flag on most records instead, so this was a surprise.

    I never thought that constraint checking could be so costly.
