December 17, 2009 at 9:59 am
Hi,
I have a table of more than 6 million records, and on a daily
basis I need to delete nearly a million records and insert a million records. The table has a clustered index.
The process takes around 1.5 hours to complete.
Please guide me on how to reduce the processing time, or suggest an alternative.
Thanks in advance
Gugan
December 17, 2009 at 10:07 am
Hi
Please provide some more detailed information about how you handle your data deletion. Do you work with a cursor?
Greets
Flo
December 17, 2009 at 10:20 am
Here it is:
Table1 has 200 columns and 6 million records, with Column1 as the clustered index.
Table2 has 200 columns and 1 million records (with updated details and additional records).
E.g.:
DELETE FROM Table1 WHERE Column1 IN (SELECT Column1 FROM Table2)
INSERT INTO Table1
SELECT * FROM Table2
Will the insert be slow on an indexed table?
Note: I don't use a cursor for the above process.
December 17, 2009 at 10:28 am
Not that a million records a day is a lot, but here's a question for you:
Are the million records at a time date related? I.e., since you are doing this daily, does your 6-million-row table hold the last 7 days of data?
If so, you might consider creating 7 tables, one for each day, and replacing the current table with a VIEW of the same name; the view would be a UNION ALL of the seven tables.
There are a few strategies you could then use to improve performance, e.g. create a new table for today's inserts and change the view to use the new tables; it depends on what details you can tell us about how this is set up.
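As a rough sketch (the daily table names are placeholders, not your actual tables), the view would look something like:

```sql
-- One physical table per day of data; the view presents them as one table
CREATE VIEW Table1
AS
SELECT * FROM Table1_Day1
UNION ALL
SELECT * FROM Table1_Day2
-- ... one SELECT per daily table ...
UNION ALL
SELECT * FROM Table1_Day7;
```

The daily purge then becomes a TRUNCATE TABLE of the oldest daily table, which is minimally logged and near-instant, instead of a logged million-row DELETE.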
Lowell
December 17, 2009 at 10:52 am
Are you using SQL 2005 enterprise edition?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 17, 2009 at 11:08 am
yes champ...
December 17, 2009 at 11:18 am
Then consider table partitioning if this kind of delete is a regular operation. If you get the partition function and partitioning column appropriately configured, a massive delete can be done as a metadata operation, essentially dropping part of the table.
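A rough sketch of the idea (the object names, data type and date boundaries are placeholders, assuming the table can be partitioned on a date column):

```sql
-- Partition by date so each day's rows land in their own partition
CREATE PARTITION FUNCTION pfDaily (datetime)
AS RANGE RIGHT FOR VALUES ('20091215', '20091216', '20091217');

CREATE PARTITION SCHEME psDaily
AS PARTITION pfDaily ALL TO ([PRIMARY]);

-- Removing an old day is then a metadata operation:
-- switch the partition out into an empty staging table and truncate it
ALTER TABLE Table1 SWITCH PARTITION 1 TO Table1_Staging;
TRUNCATE TABLE Table1_Staging;
```

Note the staging table must have an identical structure and indexes and sit on the same filegroup as the partition being switched out.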
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 17, 2009 at 11:34 am
I recently encountered an issue where a one-record insert into a 2 million record table took 45 seconds due to clustered index fragmentation. Performing DBCC INDEXDEFRAG (which does not require downtime) on the table reduced the time to less than 1 second. This MSDN article discusses this command.
http://msdn.microsoft.com/en-us/library/aa258286(SQL.80).aspx
Also, DBCC SHOWCONTIG is a useful diagnostic here; it returns index page and fragmentation statistics.
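For example (the table name is a placeholder; index ID 1 is the clustered index):

```sql
-- Report page and fragmentation statistics for the table
DBCC SHOWCONTIG ('Table1');

-- Defragment the clustered index without taking the table offline
-- (0 = current database, index ID 1 = clustered index)
DBCC INDEXDEFRAG (0, 'Table1', 1);
```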
December 17, 2009 at 12:02 pm
Both DBCC INDEXDEFRAG and DBCC SHOWCONTIG are deprecated in SQL 2005, should not be used in new development, and will be removed in a future version of the product.
The replacements are
ALTER INDEX ... REORGANIZE
and
sys.dm_db_index_physical_stats
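For example (assuming a table dbo.Table1 with a clustered index named IX_Table1_Column1; both names are placeholders):

```sql
-- Check fragmentation via the DMV (replacement for DBCC SHOWCONTIG)
SELECT index_id, avg_fragmentation_in_percent, page_count
FROM sys.dm_db_index_physical_stats
     (DB_ID(), OBJECT_ID('dbo.Table1'), NULL, NULL, 'LIMITED');

-- Reorganise the index online (replacement for DBCC INDEXDEFRAG)
ALTER INDEX IX_Table1_Column1 ON dbo.Table1 REORGANIZE;
```

A common rule of thumb is REORGANIZE for moderate fragmentation and REBUILD when fragmentation is heavy.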
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 17, 2009 at 12:13 pm
Hi
DELETE FROM Table1 WHERE Column1 IN (SELECT Column1 FROM Table2)
Do you really not have any criteria passed from your outer Table1 into your sub-query?
Did you check your execution plan?
If you don't have any input criteria, this statement should be fine. In that case, do you have any BLOB columns in your table?
If you do have input criteria from your outer statement in your sub-query, this might (and usually will) cause an inline cursor. In that case, try a NOT EXISTS or a joined DELETE statement.
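For example, a joined delete could look like this (a sketch, assuming Column1 is the join key in both tables):

```sql
-- Joined delete: often gets a better plan than IN (subquery)
DELETE t1
FROM Table1 AS t1
INNER JOIN Table2 AS t2
    ON t2.Column1 = t1.Column1;

-- Batched variant (SQL 2005+) to keep the transaction log manageable
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) t1
    FROM Table1 AS t1
    INNER JOIN Table2 AS t2
        ON t2.Column1 = t1.Column1;

    IF @@ROWCOUNT = 0 BREAK;
END;
```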
Greets
Flo
December 17, 2009 at 12:14 pm
Thanks for clarifying that for me, GilaMonster; I was loosely aware these were deprecated (though filed in my mind as a 2008 change), but not on the ball enough to remember to mention it in my post.