Delete Query

  • Hi All,

    We need to delete several million rows from a table.

    And we need this to not affect the transaction log file.

    Is there a way to achieve this?

    Thank You.

    Regards,
    Raghavender Chavva

  • Raghavender (4/26/2011)


    Hi All,

    We need to delete several million rows from a table.

    And we need this to not affect the transaction log file.

    Is there a way to achieve this?

    Are you trying to say that you need to run a DELETE against a table of millions of rows, and you don't want the log file to grow in size?

    Thanks
    Parthi

  • Raghavender (4/26/2011)


    Hi All,

    We need to delete several million rows from a table.

    And we need this to not affect the transaction log file.

    Is there a way to achieve this?

    Impossible.

    The best alternative is to delete in smaller chunks and run tlog backups within the loop.

    Bulk-logged recovery can help, but it's still logged!
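
    As a rough sketch of that pattern (the table dbo.BigTable, the purge condition, and the backup path are all placeholders; adjust the batch size for your environment):

        -- Delete in batches so each transaction stays small
        DECLARE @BatchSize int;
        SET @BatchSize = 10000;

        WHILE 1 = 1
        BEGIN
            DELETE TOP (@BatchSize)
            FROM dbo.BigTable
            WHERE PostedDate < '20100101';   -- placeholder purge condition

            IF @@ROWCOUNT = 0
                BREAK;   -- nothing left to delete

            -- Back up the log between batches (FULL/BULK_LOGGED recovery)
            -- so the space inside the log file can be reused
            BACKUP LOG MyDatabase
                TO DISK = N'D:\Backups\MyDatabase_log.trn';
        END;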

  • Ninja's_RGR'us (4/26/2011)


    The best alternative is to delete in smaller chunks and run tlog backups within the loop.

    Yes, that is how I do it, with no problems. It also gives you the option to stop the process & continue later.

  • homebrew01 (4/26/2011)


    Ninja's_RGR'us (4/26/2011)


    The best alternative is to delete in smaller chunks and run tlog backups within the loop.

    Yes, that is how I do it, with no problems. It also gives you the option to stop the process & continue later.

    Agreed.

    If you delete millions of records in one shot, you will face mainly two problems:

    1) Your tempdb grows quickly, and if there is a space issue on your disk, this is a big problem.

    2) You cannot stop it in the middle; if you kill a large transaction, the database can sometimes go into recovery and take a long time to come back.

    So the best way is, as the gentleman said:

    delete in chunks and shrink the log in between.

    Thanks

    Rajat Jaiswal

    http://www.indiandotnet.wordpress.com

    🙂

  • How many rows will be left in the table when you are done?

    If the answer is 0 then TRUNCATE TABLE.

    If the answer is small then create a new table with those records, truncate the existing table, and then copy the records back into the table (see the sketch after this list).

    If the answer is large then delete in batches as suggested by the previous posts.
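
    For the "small number of rows left" case, a minimal sketch (dbo.BigTable and the KeepFlag filter are hypothetical; note that TRUNCATE TABLE fails if foreign keys reference the table, and identity columns need SET IDENTITY_INSERT when copying back):

        -- 1. Copy the rows to keep into a holding table
        SELECT *
        INTO dbo.BigTable_keep
        FROM dbo.BigTable
        WHERE KeepFlag = 1;              -- hypothetical "rows to keep" condition

        -- 2. Truncate: fast and minimally logged, but removes ALL rows
        TRUNCATE TABLE dbo.BigTable;

        -- 3. Copy the kept rows back
        INSERT INTO dbo.BigTable
        SELECT * FROM dbo.BigTable_keep;

        -- 4. Drop the holding table
        DROP TABLE dbo.BigTable_keep;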

    --

    JimFive

  • Rajat Jaiswal-337252 (4/27/2011)


    homebrew01 (4/26/2011)


    Ninja's_RGR'us (4/26/2011)


    The best alternative is to delete in smaller chunks and run tlog backups within the loop.

    Yes, that is how I do it, with no problems. It also gives you the option to stop the process & continue later.

    Agreed.

    If you delete millions of records in one shot, you will face mainly two problems:

    1) Your tempdb grows quickly, and if there is a space issue on your disk, this is a big problem.

    2) You cannot stop it in the middle; if you kill a large transaction, the database can sometimes go into recovery and take a long time to come back.

    So the best way is, as the gentleman said:

    delete in chunks and shrink the log in between.

    Thanks

    Rajat Jaiswal

    http://www.indiandotnet.wordpress.com

    🙂

    Don't SHRINK the log; back it up every x loops so you keep its size contained.
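
    In code terms (MyDatabase and the path are placeholders): back up the log inside the loop so the space in the file gets reused, rather than shrinking the file and forcing it to grow again on the next batch:

        -- Preferred inside the loop: space in the log file gets reused
        BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';

        -- Avoid: shrinking forces the file to regrow, which is expensive
        -- DBCC SHRINKFILE (MyDatabase_log, 100);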

  • 1) use db_name

    2) set the database recovery to simple - sp_dboption 'db_name', 'trunc. log on chkpt.', true

    3) write a stored procedure to delete about 10,000 rows per loop. After each loop, commit the transaction and run a CHECKPOINT before deleting the next 10,000 records -- this lets the log space used by each batch of 10,000 records be reused.

    4) after all desired records are deleted, reset the database recovery model to whatever it was before step 2.
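
    sp_dboption is the old SQL Server 2000 syntax; on 2005 and later the same steps look roughly like this (database, table, and path names are placeholders):

        USE MyDatabase;

        -- Step 2: switch to SIMPLE recovery
        ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

        -- Step 3: delete in batches; in SIMPLE recovery a CHECKPOINT
        -- lets the log space used by each committed batch be reused
        WHILE 1 = 1
        BEGIN
            DELETE TOP (10000)
            FROM dbo.BigTable
            WHERE PostedDate < '20100101';   -- placeholder condition

            IF @@ROWCOUNT = 0 BREAK;

            CHECKPOINT;
        END;

        -- Step 4: switch back, then take a full backup to restart the
        -- log backup chain
        ALTER DATABASE MyDatabase SET RECOVERY FULL;
        BACKUP DATABASE MyDatabase
            TO DISK = N'D:\Backups\MyDatabase_full.bak';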

  • Himmat Singh (4/28/2011)


    1) use db_name

    2) set the database recovery to simple - sp_dboption 'db_name', 'trunc. log on chkpt.', true

    3) write a stored procedure to delete about 10,000 rows per loop. After each loop, commit the transaction and run a CHECKPOINT before deleting the next 10,000 records -- this lets the log space used by each batch of 10,000 records be reused.

    4) after all desired records are deleted, reset the database recovery model to whatever it was before step 2.

    That stops you from doing ANY point-in-time restore for as long as the loop runs and UNTIL you have changed back to full recovery and taken a new full backup.

    The full backups before and after are really essential if you want to go that route.

  • Ninja's_RGR'us (4/28/2011)


    Himmat Singh (4/28/2011)


    1) use db_name

    2) set the database recovery to simple - sp_dboption 'db_name', 'trunc. log on chkpt.', true

    3) write a stored procedure to delete about 10,000 rows per loop. After each loop, commit the transaction and run a CHECKPOINT before deleting the next 10,000 records -- this lets the log space used by each batch of 10,000 records be reused.

    4) after all desired records are deleted, reset the database recovery model to whatever it was before step 2.

    That stops you from doing ANY point-in-time restore for as long as the loop runs and UNTIL you have changed back to full recovery and taken a new full backup.

    The full backups before and after are really essential if you want to go that route.

    Agreed. Switching to SIMPLE is risky with minimal benefit. Better to be safe and stay with FULL recovery and take regular t-log backups.

  • Place a TABLOCK hint on the table.

    This will be minimally logged.

    http://msdn.microsoft.com/en-us/library/ms190422(v=SQL.90).aspx

  • sgvv (5/3/2011)


    Place a TABLOCK hint on the table.

    This will be minimally logged.

    http://msdn.microsoft.com/en-us/library/ms190422(v=SQL.90).aspx

    Deletes are never minimally logged, regardless of locks or recovery model.

    Your link talks about bulk import, i.e. inserting rows, not deleting them.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks, Gail, for clearing up my misunderstanding.

    Regards
