November 21, 2010 at 11:22 pm
I have a table with more than 30 million records in it.
Because of some performance-related issues we need to delete some unwanted records (around 15 million) from it, and we have identified the filter conditions to remove them. Regarding this I need help on the following:
1) What are all the steps that I need to take care of when deleting such a huge number of records from the database?
2) How do I estimate the time it is going to take to delete such a huge number of records?
Thanks in advance.
November 21, 2010 at 11:32 pm
If you have foreign keys, constraints, etc., it will take longer to delete records from the SQL table.
Also, I guess it will be better to disable the indexes (or drop them) and then, after the deletion of the unwanted rows, recreate the indexes again. This will save the server from having to update the indexes after every DML operation on the table.
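A minimal sketch of that approach, assuming a hypothetical table dbo.BigTable with a nonclustered index IX_BigTable_Filter and a hypothetical filter column (all names are placeholders):

-- Drop the nonclustered index before the mass delete.
DROP INDEX IX_BigTable_Filter ON dbo.BigTable;

-- Perform the delete using the identified filter condition (placeholder shown).
DELETE FROM dbo.BigTable
WHERE SomeFilterColumn < '20090101';

-- Recreate the index once the delete has finished.
CREATE NONCLUSTERED INDEX IX_BigTable_Filter
    ON dbo.BigTable (SomeFilterColumn);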
November 21, 2010 at 11:58 pm
Eralper (11/21/2010)
If you have foreign keys, constraints, etc., it will take longer to delete records from the SQL table. Also, I guess it will be better to disable the indexes (or drop them) and then, after the deletion of the unwanted rows, recreate the indexes again. This will save the server from having to update the indexes after every DML operation on the table.
Thanks for your quick reply. The table does not have any relationships with other tables.
As you said, there is an index on the table, so I need to drop the index and recreate it once the delete operation is done.
What about the log file? Will it become huge when deleting these records?
November 22, 2010 at 12:06 am
Yes, that is an important topic.
If you could truncate, there would be no problem. But since you want to keep some of the records in the table, a DELETE operation has to be done, so the log file will have entries for every row that is deleted.
You can also alter the recovery model of your database to keep the log file size smaller.
You can change it to the simple recovery model if it is currently set to the full recovery model.
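A minimal sketch of the recovery model switch, assuming a hypothetical database named MyDatabase and backup path (note the caveat about the log chain raised later in the thread):

-- Check the current recovery model.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDatabase';

-- Switch to the simple recovery model before the mass delete.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- ... perform the delete here ...

-- Switch back afterwards; a full (or differential) backup is needed to
-- restart the log backup chain.
ALTER DATABASE MyDatabase SET RECOVERY FULL;
BACKUP DATABASE MyDatabase TO DISK = N'C:\Backups\MyDatabase.bak';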
November 22, 2010 at 12:12 am
Hi,
Once you delete the 15 million records from that table, the freed space stays inside the data file and the database size on disk will not decrease by itself. If you want to decrease the database size, you have to shrink the data file; only then is the unused space released back to the operating system.
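A minimal sketch, assuming a hypothetical database and logical data file name (check sys.database_files for the real one):

USE MyDatabase;

-- Shrink the data file to release unused space to the operating system.
-- Note: shrinking moves pages and can fragment indexes, so consider an
-- index rebuild afterwards.
DBCC SHRINKFILE (N'MyDatabase_Data', 10240);   -- target size in MB (placeholder)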
Thanks
Balaji.G
November 22, 2010 at 12:13 am
You can also back up the transaction log to keep its size down: delete 100,000 rows, for example, take a log backup, then continue with another 100,000-row delete, and so on.
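A minimal sketch of that batched approach, assuming hypothetical table, filter, database, and backup path names:

-- Delete in batches of 100,000 rows, backing up the log between batches.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (100000)
    FROM dbo.BigTable
    WHERE SomeFilterColumn < '20090101';   -- placeholder filter condition

    SET @rows = @@ROWCOUNT;

    BACKUP LOG MyDatabase TO DISK = N'C:\Backups\MyDatabase_log.trn';
END;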
November 22, 2010 at 12:49 am
You can get around some of the logging issues by using SELECT/INTO to copy only the rows you want to keep and then renaming the table(s). Read up in Books Online on how to make SELECT/INTO minimally logged.
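A minimal sketch of that idea, assuming hypothetical table and column names; the database needs to be in the SIMPLE or BULK_LOGGED recovery model for the SELECT/INTO to be minimally logged:

-- Copy only the rows you want to keep into a new table.
SELECT *
INTO dbo.BigTable_Keep
FROM dbo.BigTable
WHERE SomeFilterColumn >= '20090101';   -- rows to keep (placeholder filter)

-- Swap the tables by renaming; recreate indexes/constraints on the new table.
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_Keep', 'BigTable';

-- Once everything is verified:
-- DROP TABLE dbo.BigTable_Old;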
--Jeff Moden
Change is inevitable... Change for the better is not.
November 22, 2010 at 12:56 am
Thanks Jeff,
I checked the BOL and I found :
The amount of logging for SELECT...INTO depends on the recovery model in effect for the database. Under the simple recovery model or bulk-logged recovery model, bulk operations are minimally logged. With minimal logging, using the SELECT… INTO statement can be more efficient than creating a table and then populating the table with an INSERT statement.
So I think populating a new table and then truncating the other might be more efficient when log size is considered.
November 22, 2010 at 1:26 am
Thanks to Jeff and Eralper for your valuable inputs.
November 22, 2010 at 6:10 am
Eralper (11/22/2010)
Thanks Jeff, I checked the BOL and I found:
The amount of logging for SELECT...INTO depends on the recovery model in effect for the database. Under the simple recovery model or bulk-logged recovery model, bulk operations are minimally logged. With minimal logging, using the SELECT… INTO statement can be more efficient than creating a table and then populating the table with an INSERT statement.
So I think populating a new table and then truncating the other might be more efficient when log size is considered.
I agree. Just to be clear for folks that may think of doing it, don't ever shift from FULL recovery to SIMPLE recovery for the sake of archiving data because it will break the log chain for backups. It's ok to shift from FULL recovery to the BULK-LOGGED recovery, though.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 22, 2010 at 6:19 am
Jeff Moden (11/22/2010)
I agree. Just to be clear for folks that may think of doing it, don't ever shift from FULL recovery to SIMPLE recovery for the sake of archiving data because it will break the log chain for backups. It's ok to shift from FULL recovery to the BULK-LOGGED recovery, though.
But I think it is recommended that you take a Log backup before switching from FULL to BULK-LOGGED.
--------------------------------------------------------------------------------------------------
I am just an another naive wannabe DBA trying to learn SQL Server
November 22, 2010 at 7:57 am
Sachin Nandanwar (11/22/2010)
Jeff Moden (11/22/2010)
I agree. Just to be clear for folks that may think of doing it, don't ever shift from FULL recovery to SIMPLE recovery for the sake of archiving data because it will break the log chain for backups. It's ok to shift from FULL recovery to the BULK-LOGGED recovery, though.
But I think it is recommended that you take a Log backup before switching from FULL to BULK-LOGGED.
Not that I know of. Except for the ability to do minimal logging, it doesn't make a difference to "normal" data or logs and will not affect the log chain.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 22, 2010 at 10:31 pm
Just a few things to consider:
When deleting data (probably for archiving) from very large tables, delete the data based on the PK of the table and not on the FK.
Rebuild the indexes subsequently.
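A minimal sketch of driving the delete off the primary key, assuming a hypothetical table dbo.BigTable with an integer primary key Id:

-- Delete by primary key values rather than by a foreign key column.
DELETE FROM dbo.BigTable
WHERE Id BETWEEN 1 AND 100000;   -- PK range matching the filter criteria (placeholder)

-- Rebuild the indexes on the table once the deletes are complete.
ALTER INDEX ALL ON dbo.BigTable REBUILD;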
November 22, 2010 at 10:45 pm
Sachin Nandanwar (11/22/2010)
But I think it is recommended that you take a Log backup before switching from FULL to BULK-LOGGED.
Yes, recommended but not essential. This maximizes the window where a point-in-time restore is possible. For the same reason, also back up the log after switching from BULK_LOGGED back to FULL.
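A minimal sketch of that sequence, assuming a hypothetical database name and backup paths:

-- Log backup before the switch maximizes the point-in-time restore window.
BACKUP LOG MyDatabase TO DISK = N'C:\Backups\MyDatabase_before.trn';

ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

-- ... run the minimally logged operations (SELECT/INTO, index rebuilds, etc.) ...

ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- Log backup after switching back, for the same reason.
BACKUP LOG MyDatabase TO DISK = N'C:\Backups\MyDatabase_after.trn';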
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
November 22, 2010 at 10:51 pm
Since this is in the SQL Server 2008 forum, you could also use bulk-logged INSERT...SELECT then ALTER TABLE...SWITCH.
The technique is outlined in the Data Loading Performance Guide.
It's the same basic idea as SELECT...INTO followed by a rename, but a bit more modern.
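A minimal sketch of that approach, assuming hypothetical table names; for the SWITCH to work, the two tables must have matching structures and live on the same filegroup:

-- Create an empty table with the same structure as dbo.BigTable
-- (columns, indexes, and constraints must match; names are placeholders).
CREATE TABLE dbo.BigTable_Keep (
    Id               int      NOT NULL PRIMARY KEY,
    SomeFilterColumn datetime NOT NULL
    -- ... remaining columns ...
);

-- INSERT...SELECT of only the rows to keep; with WITH (TABLOCK) and the
-- BULK_LOGGED or SIMPLE recovery model this can be minimally logged
-- (see the Data Loading Performance Guide for the exact conditions).
INSERT INTO dbo.BigTable_Keep WITH (TABLOCK)
SELECT Id, SomeFilterColumn
FROM dbo.BigTable
WHERE SomeFilterColumn >= '20090101';   -- rows to keep (placeholder filter)

-- Empty the original table, then switch the kept rows back in as a
-- metadata-only operation.
TRUNCATE TABLE dbo.BigTable;
ALTER TABLE dbo.BigTable_Keep SWITCH TO dbo.BigTable;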
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi