December 28, 2007 at 10:14 pm
Hi,
I use SQL Server 2005 on Windows 2000 with 512 MB of memory.
I tried to delete ten million records from a table of twenty million records using SQL Server Management Studio. After more than two hours it was still executing, the Buffer Cache Hit Ratio was very high, and the whole SQL Server was very slow, so I cancelled the process. After that, the database state showed "In Recovery" and I could not do anything with the database. I restarted SQL Server, but the state still showed "In Recovery".
Can someone help me resolve this?
Any advice, as always, is very much appreciated.
----------
Best regards!
top
(from China)
December 29, 2007 at 9:07 am
First, the database will take a while to "recover"... it should take less time because you restarted, since SQL Server won't let anyone in until recovery is done... so nothing can block the rollback.
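While you wait, you can watch the database state with a quick catalog query (substitute your own database name for the placeholder):

```sql
-- Shows 'RECOVERING' while recovery is still running and 'ONLINE' once
-- it finishes. 'YourDatabase' is a placeholder name.
SELECT name, state_desc
  FROM sys.databases
 WHERE name = 'YourDatabase';
```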
Second, you need to write a While Loop to delete the rows in smaller chunks, with a 1 to 10 second WAITFOR DELAY in the loop. This will do two things... it'll allow other processes to run without much blocking, and it will allow you to abort the process without causing a week-long rollback.
For the delete in the loop, put a WITH (TABLOCKX) hint in the code to make sure the delete process can get in and out without being blocked... and, chances are, the table is going to lock for the delete anyway.
I wouldn't delete more than about 1 second's worth of rows at a time (about 25,000 rows)... if there are delete triggers on the table, you may have to use smaller chunks than that.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 29, 2007 at 9:10 am
By the way... I call these While Loops that do deletes, "Delete Crawlers" because they "crawl" through the rows to be deleted, mostly unnoticed. Here's the basis of how to make one... obviously, you'll need to change column names and table names to suit your requirements...
--===== Example "Delete Crawler" written by Jeff Moden
--===== Define the cutoff date with a time of "midnight" or, if you will,
-- define the cutoff date with no time so we only delete whole days.
DECLARE @CutoffDate DATETIME
SELECT @CutoffDate = DATEADD(dd,DATEDIFF(dd,0,GETDATE()),-94)
--===== Limit all further queries, including deletes, to 25,000 rows
-- (about 1 second worth of deletes, like I said before)
SET ROWCOUNT 25000
--===== See if any rows qualify for deletion. If even just one exists,
-- then there's work to do and @@ROWCOUNT will be > 0.
-- DO NOT PUT ANY CODE BETWEEN THIS SELECT AND THE WHILE LOOP OR
-- YOU'LL MESS UP THE ROWCOUNT FOR THE WHILE LOOP
SELECT TOP 1 1 FROM dbo.JBMTestDetail WHERE Time_Stamp < @CutoffDate
--===== If the rowcount from the above is greater than 0,
-- then delete 25,000 rows at a time until there's nothing
-- left to delete
WHILE @@ROWCOUNT > 0
BEGIN
        --===== Just a "marker" to separate the loop in the output
        PRINT REPLICATE('=',78)

        --===== This delay gives other processes breathing room
        WAITFOR DELAY '00:00:10'

        --===== Do the delete. Will be limited by the SET ROWCOUNT above.
        --      DO NOT PUT ANY CODE BETWEEN THIS DELETE AND THE "END" OR
        --      YOU'LL MESS UP THE ROWCOUNT FOR THE WHILE LOOP.
        DELETE dbo.JBMTestDetail WITH (TABLOCKX)
         WHERE Time_Stamp < @CutoffDate
END
--===== Restore the ability to process more than 25,000 rows
SET ROWCOUNT 0
--Jeff Moden
Change is inevitable... Change for the better is not.
April 4, 2011 at 12:35 am
You may be interested in an article of mine on this topic: Deleting large number of rows from a table & a heap in a VLDB
Thanks & Regards,
Nakul Vachhrajani.
http://nakulvachhrajani.com
Follow me on Twitter: @sqltwins
April 26, 2011 at 4:43 am
Thanks for sharing the script Jeff and a superb suggestion.
Manu
April 28, 2011 at 6:48 pm
Thanks, Manu. 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
April 29, 2011 at 3:48 pm
Jeff,
I looked at your Delete Crawler. I've implemented a job similar to it but a little different, and I'd like to know if I've missed something or have neglected to consider something that might be important.
I read an article in Books Online on lock escalation; if you search the Index section on "Coarse-grain locks", you will find it. It says the threshold for escalating to a table lock is the modification of 5,000 rows.
So, when I wrote my "delete crawler", I wrote it to delete fewer than 5,000 rows, wait a few tenths of a second, then repeat until all rows meeting the deletion criteria were deleted. I implemented it this way specifically to avoid table locks, since the first execution of this job (on a very neglected database on an OLTP server) took over six weeks to complete, executing 7x24.
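In rough outline (table, column, and cutoff are placeholders, not my real names), my version looks something like this:

```sql
--===== Same crawler idea, but each delete stays under the 5,000-row
--      lock-escalation threshold and pauses only briefly between chunks.
--      'dbo.MyBigTable' and 'Time_Stamp' are placeholder names.
DECLARE @CutoffDate DATETIME
SELECT @CutoffDate = DATEADD(dd,DATEDIFF(dd,0,GETDATE()),0)

SET ROWCOUNT 4000  -- below the 5,000-row escalation threshold

SELECT TOP 1 1 FROM dbo.MyBigTable WHERE Time_Stamp < @CutoffDate
WHILE @@ROWCOUNT > 0
BEGIN
        WAITFOR DELAY '00:00:00.300' -- a few tenths of a second
        DELETE dbo.MyBigTable        -- no TABLOCKX: row/page locks only
         WHERE Time_Stamp < @CutoffDate
END

SET ROWCOUNT 0
```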
I noticed that your Delete Crawler deletes 25,000 rows at a time, and explicitly forces a table lock.
Is there an advantage to your implementation? Is there a disadvantage to mine? Is there any comment you might make about an important difference between the two? For instance, would your implementation execute faster than mine?
Thanks,
LC
April 29, 2011 at 6:43 pm
Thanks for the question. I wouldn't say that either has an advantage over the other. As usual, "It Depends". The reason I went for the exclusive table lock is twofold: 1) to ensure I wasn't setting up to delete something that someone was just shy of using (an "overly cautious move," for sure), and 2) in theory (I've not double-checked), it takes less time and fewer resources to establish 1 table lock than 5,000 row locks. My thought was I could quickly get in and out between uses of the table.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 29, 2011 at 7:01 pm
Thanks for your response.
I hadn't considered the time penalty for executing almost 5000 row locks.
LC
May 3, 2011 at 12:08 pm
"SET ROWCOUNT" will be deprecated at some point for limiting DELETE/INSERT/UPDATE statements, so I now use "DELETE TOP (2500) TableName" instead.
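For what it's worth, the TOP-based version of the same crawler might look like this sketch (table name, column name, and the 2,500-row chunk size are placeholders; adjust to suit):

```sql
--===== Chunked delete without SET ROWCOUNT. 'dbo.MyBigTable' and
--      'Time_Stamp' are placeholder names.
DECLARE @CutoffDate DATETIME
SELECT @CutoffDate = DATEADD(dd,DATEDIFF(dd,0,GETDATE()),0)

WHILE 1 = 1
BEGIN
        DELETE TOP (2500) dbo.MyBigTable
         WHERE Time_Stamp < @CutoffDate

        IF @@ROWCOUNT = 0 BREAK  -- nothing left to delete

        WAITFOR DELAY '00:00:01' -- breathing room for other processes
END
```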