October 14, 2004 at 11:50 am
Anyone know the approximate amount of time it takes to delete 10 million rows, with about 6 fields in each row that are small ints? This is on a 2 GHz server with 1 GB of RAM.
Thanks,
Greg
October 14, 2004 at 12:32 pm
Greg,
Not sure about the time, but try to break your transaction into smaller chunks. If your transaction gets killed partway through for any reason, you don't end up rolling back everything, and your transaction log doesn't fill up.
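A minimal sketch of that chunked approach, assuming SQL Server 2000 (current when this thread ran); the table dbo.BigTable and the filter column are hypothetical:

SET ROWCOUNT 10000              -- delete at most 10,000 rows per statement

WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable
    WHERE SomeFlag = 0          -- hypothetical condition for the rows to remove

    IF @@ROWCOUNT = 0 BREAK     -- nothing left to delete
END

SET ROWCOUNT 0                  -- reset so later statements are unaffected

Because each DELETE commits on its own, the log space can be reused between batches (with SIMPLE recovery or regular log backups) instead of one giant open transaction holding all 10 million rows.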
October 14, 2004 at 12:55 pm
If this is all the records in the table, you could truncate it instead in seconds.
The time for a delete depends a great deal on your hardware and on the other users on the system. You could try removing 250K rows and extrapolate from there for the rest.
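One way to time that 250K-row sample (the table name and filter are hypothetical):

SET ROWCOUNT 250000

DECLARE @start DATETIME
SET @start = GETDATE()

DELETE FROM dbo.BigTable
WHERE SomeFlag = 0              -- same filter you plan to use for the real delete

SELECT DATEDIFF(ss, @start, GETDATE()) AS ElapsedSeconds

SET ROWCOUNT 0

At 250K per sample, multiply the elapsed time by roughly 40 to estimate the full 10 million rows.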
October 15, 2004 at 10:21 am
Is it easier to pull the live data out into another table and then drop the source table?
Another option is to write a proc that loops, with a wait state in it, and deletes small sets at a time (a sketch follows below) ...
What is this going to do to your transaction log?
Craig
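A sketch of that loop-with-a-wait idea as a proc; the proc name, batch size, delay, and filter are all made up:

CREATE PROCEDURE dbo.usp_DeleteInBatches
AS
BEGIN
    SET ROWCOUNT 5000               -- small batches keep each transaction short

    WHILE 1 = 1
    BEGIN
        DELETE FROM dbo.BigTable
        WHERE SomeFlag = 0          -- hypothetical condition

        IF @@ROWCOUNT = 0 BREAK

        WAITFOR DELAY '00:00:05'    -- the wait state: let other users and the log catch up
    END

    SET ROWCOUNT 0
END

As for the transaction log question: since each small batch commits on its own, the log only ever has to hold one batch at a time, and the pauses give log backups a chance to run.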
October 15, 2004 at 2:30 pm
If you are getting rid of ALL data in the table, use the TRUNCATE TABLE command.
Michael Lee
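If the whole table is going, it really is this simple (table name hypothetical); TRUNCATE TABLE deallocates whole pages instead of logging each deleted row, which is why it finishes in seconds, and note that it also resets any IDENTITY seed:

TRUNCATE TABLE dbo.BigTable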
October 15, 2004 at 3:11 pm
Two comments:
You can't truncate a table that is referenced by active Foreign Keys.
When I am faced with a larger task such as this in a Production environment, I will typically:
1. Write a query that bulk copies the wanted data into a new table, usually using DTS.
2. Drop the Prod table in the same transaction in which you rename the new table.
3. Add any indexes and Foreign Keys back in.
I've done this on tables of up to 30 million fairly wide rows and have had good success. The only downside is if your app server is very picky about losing tables that it expects (we use WebLogic 6-something); if I drop a table, the app will occasionally crash. So be careful, and good luck! A rough T-SQL sketch of these steps is below.
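Here is that sketch, done in plain T-SQL rather than DTS; every object and column name here is hypothetical:

-- 1. Copy the rows you want to keep into a new table
SELECT *
INTO dbo.BigTable_New
FROM dbo.BigTable
WHERE SomeFlag = 1              -- hypothetical "keep" condition

-- 2. Drop and rename inside one transaction
BEGIN TRANSACTION
    DROP TABLE dbo.BigTable
    EXEC sp_rename 'dbo.BigTable_New', 'BigTable'
COMMIT TRANSACTION

-- 3. Put indexes and Foreign Keys back (example index on a hypothetical ID column)
CREATE CLUSTERED INDEX IX_BigTable_ID ON dbo.BigTable (ID)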