kill transaction v. restart server

  • Good morning,

    Over the weekend, we had an issue where a data cleanup job ran much, much longer than anticipated. There were roughly 26 million rows to be deleted, but there is some cascading RI too, so it's much more data than that. The database is in the simple recovery model and auto-growth is set to 10%, unrestricted.

    Now, rather than contact the production DBA (me), the SQL Server machine was simply rebooted. I never got to see whether the process was blocked or anything. The only message in the log was to consider using ALTER DATABASE, since the auto-grow was taking too long. When the server came back up and the database wasn't immediately accessible, they rebooted again, only to see the same issue. It was then that I was called. After explaining about transactions and recovery, I told them all we could do was wait for it to finish. Recovery completed in about 3 hours, which I didn't think was too bad when you consider the MDF is 125 GB, the LDF had grown to 300 GB, and we had deleted some percentage of well over 26 million rows of data.

    So, this got me thinking. Which is quicker - issuing a KILL on the session running the DML statement, or simply restarting SQL Server? I have a feeling this is a "well, it depends" answer, but I'm curious to know what others think and/or know.

    We're on SQL Server 2005 SP3.

    Thanks......

    Peter

  • It's the same amount of work to do, so it should take about the same time. Possibly less if KILL is used, since only that one process has to be rolled back, not everything else that might have been running as well. Killing the process should always be the first option; at least then there's a chance others can still work.

    A process like this should be batched up and made restartable.
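
    Something like this, as a rough sketch - the table, filter column, and batch size are placeholders, so adjust them to your schema:

      -- Delete in small batches; if the job is killed, only the current
      -- batch rolls back and the script can simply be rerun.
      DECLARE @rows int;
      SET @rows = 1;

      WHILE @rows > 0
      BEGIN
          DELETE TOP (100000)
          FROM dbo.PurgeTable             -- hypothetical table name
          WHERE CreatedDate < '20100101'; -- hypothetical purge filter

          SET @rows = @@ROWCOUNT;
      END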


  • Well, in both cases the transaction gets rolled back. KILL would be better than restarting the entire SQL Server, though; restarting seems kind of pointless here, since it just adds work for SQL Server that isn't needed. The rollback is going to take what the rollback takes either way.

    If it's any comfort, I totally feel your pain - the exact same thing happened at my work, only it took 17 hours to roll back the transaction in my environment.

  • I don't want to echo the responses you've already had from other posters.

    I just want to suggest that percentage growth is generally not a good idea. Imagine you set it to 10% and the current file size is 100 GB: the file has to grow by 10 GB, and all user activity has to wait while that space is allocated.
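
    A fixed increment avoids that ever-growing pause; for example (database and logical file names are placeholders):

      -- Grow by a fixed, testable amount instead of a percentage
      ALTER DATABASE MyDB
      MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);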

    -- Gianluca Sartori

  • Thanks for the quick responses, folks.....

    @getoffmyfoot - I thought our recovery time was going to be around 12 hours, since that's about how long this thing had been running for.

    @Gianluca - I know the percentage setting is not best practice. This database has very small transactions and the log is well sized for what we do 99% of the time. This purge definitely falls in the 1%. I think what I need to do going forward is grow the log ahead of time so we don't waste time/resources on the auto-grow, then shrink the log file when the purge is done. And I'm going to change that percentage setting, too.
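
    Something along these lines is what I have in mind - the names and sizes are just placeholders:

      -- Pre-size the log before the purge so auto-grow never fires
      ALTER DATABASE MyDB
      MODIFY FILE (NAME = MyDB_log, SIZE = 50GB);

      -- ...run the purge...

      -- Shrink the log back afterwards (target size is in MB)
      DBCC SHRINKFILE (MyDB_log, 2048);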

  • If you kill the process, it will roll back with the rest of the database available and usable. If you restart SQL then, unless you have Enterprise Edition, the rollback will occur with the database in the 'recovering' state and completely unavailable.

    You're looking at the same time frame either way; rollback usually takes a bit longer than the operation took up until that point.
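
    If you do kill it, you can at least watch the rollback's progress - the 52 below is a placeholder for the actual session id (get it from sp_who2 or sys.dm_exec_requests):

      KILL 52;                 -- roll back the runaway session
      KILL 52 WITH STATUSONLY; -- report estimated % complete and time remaining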

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • You really shouldn't use a percentage for auto-growth, on either the data or the log files. You don't really want auto-growth to ever fire, but if it does, you want the increment to be large enough to buy you time to address it later.

    Using a fixed size also lets you test the auto-growth and see how long it takes. Then you'll know whether the growth is actually going to cause problems for your application (e.g. timeouts).

    I would also recommend that you consider changing your purge operation to delete in smaller increments. Instead of growing the log file and then trying to delete 26 million rows in one statement, rewrite the process to delete 100,000 rows at a time (test, and increase or decrease the batch size until performance is acceptable).

    Between each loop you can then perform a transaction log backup. This frees up space in the transaction log to be reused and avoids growing the log.
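
    Roughly like this - table, database, and path names are placeholders. Note that a log backup only applies under the full recovery model; since your database uses the simple recovery model, a CHECKPOINT between batches lets the log space be reused instead:

      WHILE 1 = 1
      BEGIN
          DELETE TOP (100000)
          FROM dbo.PurgeTable             -- hypothetical table
          WHERE CreatedDate < '20100101'; -- hypothetical filter

          IF @@ROWCOUNT = 0 BREAK;

          BACKUP LOG MyDB TO DISK = 'X:\Backups\MyDB.trn'; -- full recovery
          -- CHECKPOINT;                                   -- simple recovery
      END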

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs
