November 8, 2006 at 2:23 am
Dear all,
I ran a "insert into" command for a big table with more than 10,000,000 records. It took too long time and the size of the transaction log file was to exhausted all the hard disk of my system. So I cancel it but the system seemed busy in rolling back now.
How to deal with this? Thanks for your suggestion first.
Thanks,
coby
November 8, 2006 at 4:15 am
Go to the pub.
November 8, 2006 at 4:18 am
Seriously though, use the KILL command to check how far through the rollback you are (check BOL if you don't know how). Very rarely I have found that a hard reset works, but more often than not all you get from that is the rollback restarting as soon as SQL comes back online.
Unless someone else here has some secret advice (please share!), treat this as a learning experience... batches are your friend.
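For what it's worth, a minimal sketch of the batched approach, assuming hypothetical tables dbo.BigSource and dbo.BigTarget, a made-up key column and batch size (SET ROWCOUNT is the old SQL 2000-style way to cap a batch):

    -- illustrative only: table names, key column and batch size are assumptions
    SET ROWCOUNT 50000
    WHILE 1 = 1
    BEGIN
        INSERT INTO dbo.BigTarget (id, payload)
        SELECT s.id, s.payload
        FROM dbo.BigSource s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.BigTarget t WHERE t.id = s.id)

        IF @@ROWCOUNT = 0 BREAK   -- nothing left to copy
        -- back up the transaction log here between batches so it doesn't fill the disk
    END
    SET ROWCOUNT 0

That way each batch commits on its own, so a cancel only ever rolls back the batch in flight.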
November 13, 2006 at 10:04 am
There is no 'secret' command. Stopping a long-running rollback by restarting SQL Server (or rebooting the server itself) will only cause you to expend more time than you already have. The transactions have to be rolled back. If you do try the restart method, your SQL Server (and all its other databases) will be unavailable until the rollback completes.
Your rollback time will be at least equal to the length of time spent on the insert/update activity that occurred. I have seen the rollback time be double the initial time.
Regards
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
November 13, 2006 at 10:08 am
The restart sometimes works when the connection has been trashed somehow. Perhaps it is a workaround to something that has already been fixed.
In this circumstance a spid has been killed and the rollback is 100% complete, but the connection is not around to handle this. It then sits there, zombie-like, with its locks in place until the service is restarted.
If anyone knows how to kill those off properly I'd love to know how! That and how to 'inject' a commit to a spid...
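No secret here either, but a rough sketch of what I use to keep an eye on those zombies, using the SQL 2000-era system views (the spid below is just a placeholder):

    -- sessions stuck in KILLED/ROLLBACK and any transactions they still hold open
    SELECT spid, status, cmd, open_tran, waittime, lastwaittype, hostname, program_name
    FROM master..sysprocesses
    WHERE cmd LIKE 'KILLED%' OR status = 'rollback'

    -- locks still held by a suspect spid (substitute the real one)
    EXEC sp_lock 55

It only shows the damage, though; I still haven't found anything short of a restart that actually clears them.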
November 13, 2006 at 6:27 pm
wangkhar, rudy,
Thanks for the advice. Do you know how to estimate the time I need for the delete or rollback? Or, what is the maximum number of records we can delete and still have it complete within 30 minutes? I know it depends on the hardware and database design, but I would appreciate a rough estimate if you have one.
Thanks,
coby
November 14, 2006 at 4:34 am
The rule of thumb that I use is that it will take at least as long to kill as it's been running - probably longer. KILL <n> WITH STATUSONLY will tell you roughly how far through the rollback is.
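For example, with a placeholder spid:

    KILL 55 WITH STATUSONLY
    -- returns something like "SPID 55: transaction rollback in progress.
    -- Estimated rollback completion: 45%. Estimated time remaining: 900 seconds."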
November 14, 2006 at 4:39 am
As Rudy mentioned, a good starting point is that if a process was running for 10 minutes before it was killed, a basic assumption is that it will take about 10 minutes to roll back - but that is certainly not set in stone.
To get a rough idea of time to rollback after the spid is killed, run kill <spid> with statusonly
This will give you a percentage complete and number of seconds to rollback.
As for measuring delete times - well, there are a number of factors: hardware, contention, locks, application/database design, etc. Unfortunately, the only way to get rough timings for deletions is to benchmark your system.
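If it helps, a minimal sketch of how you might benchmark it, assuming a made-up table, filter column and batch size - run it against a restored copy rather than production:

    -- time one fixed-size batch and extrapolate rows-per-minute for your hardware
    DECLARE @start datetime
    SET @start = GETDATE()

    SET ROWCOUNT 100000
    DELETE FROM dbo.BigTarget WHERE load_date < '20061101'
    SET ROWCOUNT 0

    SELECT DATEDIFF(ss, @start, GETDATE()) AS seconds_for_100k_rows

From that you can work out roughly how many rows fit in your 30-minute window, keeping in mind the first batch is often slower because of a cold cache.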
November 14, 2006 at 8:04 am
I have a really, really long rollback in progress. I haven't checked it for a couple of weeks, so I signed on again and executed the KILL nn WITH STATUSONLY command, which results in:
SPID 387: transaction rollback in progress. Estimated rollback completion: 100%. Estimated time remaining: 0 seconds.
I have refreshed the current activity screen in EM. Actually, I just connected. It's been a couple of months since the kill command was first issued. For the current activity to still show KILLED/ROLLBACK is kind of odd.
Francis
November 14, 2006 at 8:35 am
This _sounds_ like the kind of transaction I was on about earlier, where the restart cleans it out... not that I recommend it, but if you can afford the downtime you could consider it.
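Before going that far, it might be worth confirming the spid really has nothing left open - something along these lines (the database name is a placeholder; 387 is the spid from your post):

    -- is there still an open transaction in the database?
    DBCC OPENTRAN ('YourDatabase')

    -- is the zombie spid still holding locks?
    EXEC sp_lock 387

If both come back empty, it really is just the dead connection hanging around, and the restart is about the only thing I know of that clears it.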
June 14, 2007 at 9:27 am
Guys,
I have a similar situation. The query ran for 11 hours, inserting into table A using a DTS package. I killed the query... it's been rolling back for 5 hours already and says it will take 7 more.
I created a copy of this table with a different name - table B. I am considering dropping table A and renaming B to A.
Has anyone ever tried this? Does anyone know whether dropping the table on which the rollback is currently happening will cause more problems? I can't take risks with this table - it is at least accessible right now, but it could become inaccessible if the drop doesn't behave the way I expect because of the rollback.
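Just to be explicit, the statements I have in mind would be something like this (assuming the dbo owner; not executed yet):

    -- considering only - table A is the one still rolling back
    DROP TABLE dbo.A
    EXEC sp_rename 'dbo.B', 'A'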
Thanks in advance for the suggestions
June 14, 2007 at 10:03 am
I have never tried this approach. However, I believe that it will not work, since the rollback process will have some type of lock on the table. I believe that your sp_rename or drop command will just sit there waiting as well - you can verify this with an sp_who. If by chance it does work, I would also guess it has the possibility of corrupting data at an RI level for your application, if not at a database level internally. Just spouting random thoughts ...
Regards
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
June 14, 2007 at 10:43 am
Rudy,
I actually just tried it [i.e. the table drop]. You are correct - the operation was just hanging as well. After a minute's wait I stopped this query, and it seems the table is unaffected. It is strange, though. I would have assumed dropping the table would take the highest precedence, since the rollback is only rolling back data in this table ... guess there are deeper things going on here ...
I even did another test (which is why I dared to try the drop). I created another table and an infinite WHILE loop inserting into it. After a minute or so, I dropped the table into which the constant inserts were going, and it dropped immediately. It seems this rollback case is different.
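Roughly what the test looked like (names and sizes are made up):

    -- session 1: hypothetical test table with a constant stream of single-row inserts
    CREATE TABLE dbo.DropTest (id int IDENTITY(1,1), filler char(100))

    WHILE 1 = 1
        INSERT INTO dbo.DropTest (filler) VALUES ('x')

    -- session 2, a minute or so later: this went through immediately
    DROP TABLE dbo.DropTest

I suppose the difference is that each of those little inserts commits on its own, so the DROP can sneak in between them, whereas the big rollback holds its locks for the whole duration.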
Thanks for the help!
June 14, 2007 at 1:03 pm
I did this once; it was not pretty. One thing I was able to do (though I'm not sure if it was during the rollback or before it) was change the growth increment for the transaction log, so less time was spent resizing the log file. I also issued truncate log commands, which seemed to help.
I'd also check to make sure the rollback isn't being blocked by anything else on the server. Try to minimize anything else using server resources, especially the disk being updated.
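In case it's useful, roughly what I mean (database and file names are placeholders; TRUNCATE_ONLY is the old SQL 2000-era syntax and breaks the log backup chain, so take a full backup afterwards):

    -- bump the log's growth increment so it isn't autogrowing in lots of tiny steps
    ALTER DATABASE MyDB
    MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 500MB)

    -- discard the inactive portion of the log
    BACKUP LOG MyDB WITH TRUNCATE_ONLY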
Good luck.
June 14, 2007 at 2:35 pm
Thanks for the info, uofmoo.
I forgot to mention that this database is not currently being used by anyone (for the next 24 hours or so). Does anyone know whether detaching and re-attaching the database could help?
Thanks a lot!