May 10, 2007 at 9:15 am
I agree that in this case (a 70% flush) a bulk migration process is most efficient. However, since there is already much discussion in this thread about using a batch delete process (it comes up fairly frequently in my experience), I will offer a few suggestions:
1) Consider putting a WAITFOR DELAY statement after each batch to allow other processes to access the table and/or the checkpoint process to catch up (see the first sketch after this list).
2) Consider backing up the tlog on a regular basis during the delete loop: either with TRUNCATE_ONLY if you don't care about the tlog, or to a device/file(s) if you want to keep recoverability and are not in simple mode. This keeps the log from growing/filling up.
3) Definitely rebuild indexes, update stats, and recompile affected sprocs post-delete (see the second sketch below).
4) Depending on what percentage of the table you remove, it can be more efficient to drop all stats/indexes (except the one(s) needed to match the WHERE clause) and rebuild them post-delete. Likewise, turn the autostats update feature off beforehand and back on afterward.
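To make points 1 and 2 concrete, here is a minimal sketch of the delete loop, assuming a hypothetical dbo.BigTable with an ArchiveDate cutoff, a database named MyDb, and a 10,000-row batch size (all placeholders, not from the posts above):

SET ROWCOUNT 10000              -- SQL 2000 style; on 2005 you can use DELETE TOP (10000)
DECLARE @rows INT
SET @rows = 1

WHILE @rows > 0
BEGIN
    DELETE FROM dbo.BigTable
    WHERE ArchiveDate < '20070101'

    SET @rows = @@ROWCOUNT

    -- Point 1: let other processes in and let the checkpoint catch up
    WAITFOR DELAY '00:00:05'

    -- Point 2: keep the tlog in check; back up to a file if you need
    -- recoverability (the path is a placeholder)...
    BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn'
    -- ...or, if you don't care about the log chain:
    -- BACKUP LOG MyDb WITH TRUNCATE_ONLY
END

SET ROWCOUNT 0                  -- always reset when done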
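And for points 3 and 4, the surrounding maintenance might look like this (again, every object name is a placeholder):

-- Point 4, before the delete: shed indexes the WHERE clause doesn't need,
-- and suspend autostats so it doesn't fire mid-loop
-- DROP INDEX dbo.BigTable.IX_BigTable_SomeColumn
-- ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS OFF

-- Point 3, after the delete:
DBCC DBREINDEX ('dbo.BigTable')                 -- rebuild the remaining indexes
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN    -- refresh the stats
EXEC sp_recompile 'dbo.BigTable'                -- flag dependent sprocs for recompile

-- and turn autostats back on if you turned it off:
-- ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS ON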
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
May 11, 2007 at 5:40 am
Two things to add to this one.
The first is that deletes will take a very long time if you have foreign keys or other constraints that need to be checked to make sure your delete does not violate them; if the referencing column in a child table is not indexed, every deleted row forces a scan of that child table. Always watch for these.
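If you want to see what you are up against before you start, something like this will list the constraints involved (dbo.BigTable is a placeholder):

EXEC sp_helpconstraint 'dbo.BigTable'       -- constraints on the table, plus the
                                            -- "referenced by" list at the end
EXEC sp_fkeys @pktable_name = 'BigTable'    -- child tables whose FKs point here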
...and then a tip...
If you go into Enterprise Manager, add a new column at the beginning of the table, and then generate the change script (don't click Save), you will get a script that drops all constraints, creates a new table, copies your existing data, drops your old table, and renames the new one. If you take this script, remove the extra field you added from the CREATE TABLE statement, and then add a WHERE clause to the INSERT statement so it pulls only the data you still want, you will have a Microsoft-built (and well-tested) script to do this for you.
...I hate to write something that I can copy...
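For anyone who hasn't seen one, the edited script ends up with roughly this shape. Every name below is a placeholder; generate the real thing from your own table rather than copying this:

BEGIN TRANSACTION

-- Enterprise Manager first drops every constraint that references the table
ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_BigTable

CREATE TABLE dbo.Tmp_BigTable
(
    ID INT NOT NULL,
    ArchiveDate DATETIME NOT NULL
    -- ...original columns only; the dummy column you added goes away here
)

-- the WHERE clause is your one addition, keeping only the rows you want
INSERT INTO dbo.Tmp_BigTable (ID, ArchiveDate)
SELECT ID, ArchiveDate
FROM dbo.BigTable
WHERE ArchiveDate >= '20070101'

DROP TABLE dbo.BigTable
EXECUTE sp_rename 'dbo.Tmp_BigTable', 'BigTable', 'OBJECT'

-- ...the generated script then recreates the keys, indexes, and
-- constraints it dropped above...

COMMIT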
May 11, 2007 at 10:38 am
The problem with the Enterprise Manager-generated script is that there is no error checking: it always drops the table, whether rows were copied or not. I either execute it manually one step at a time, or put in some test that checks the rowcount on the copy step and aborts the script if it fails.
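The test can be as simple as capturing @@ROWCOUNT and @@ERROR from the copy step and refusing to drop anything if the copy came up empty (names are placeholders again, continuing the skeleton above):

DECLARE @copied INT, @err INT

INSERT INTO dbo.Tmp_BigTable (ID, ArchiveDate)
SELECT ID, ArchiveDate FROM dbo.BigTable WHERE ArchiveDate >= '20070101'

-- capture both in one statement, since every statement resets @@ERROR
SELECT @copied = @@ROWCOUNT, @err = @@ERROR

IF @err <> 0 OR @copied = 0
BEGIN
    RAISERROR ('Copy step failed or copied no rows; not dropping the table.', 16, 1)
    ROLLBACK TRANSACTION
    RETURN
END

DROP TABLE dbo.BigTable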