Properly shutting down a SQL2005 db

  • Hello all,

    I have read a few articles on this question and it seems so obvious, but I guess I just need to read it for myself in response to my own question, so here goes...

    In the event I need to reboot my server (Windows Updates, etc.) I would normally net stop sqlserveragent and net stop mssqlserver, then reboot. Our server starts up and our primary db goes into recovery. It usually takes 2 to 3 hours for this to complete. Note that the db that takes all this time is around 190 GB in size.

    Is this normal? If not, what specific steps should I take to do it properly?

    Thanks so much for any feedback!

    Speedy
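
    A minimal sketch of a clean pre-reboot sequence, assuming a default instance in the full recovery model; YourDB and the backup path are placeholder names:

    -- flush the log and write dirty pages to disk before stopping the services
    use YourDB
    go
    backup log YourDB to disk = N'X:\Backups\YourDB_log.trn'
    go
    checkpoint
    go
    -- then, from a command prompt:
    -- net stop sqlserveragent
    -- net stop mssqlserver

    A clean net stop already checkpoints each database on shutdown, so the explicit checkpoint is just insurance.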

  • It's hard to say if that is normal without knowing what type of system your database supports and your environment. I can tell you what happens when I perform a system restart: I do not stop the Agent service or the SQL Server service first; I just restart when the server is inactive.

    I have a 370 GB database that, if restarted, will be back online in less than ten minutes. Most of that time is the shutdown and restart of the operating system. So, compared to my database, your recovery time would not be normal. Check the SQL Server log. Does it show a great deal of rollback/roll-forward events? I only reboot the server when the database is fairly inactive. Once I rebooted while performing a 10+ million row delete because SQL Server was unresponsive, and it took a few hours to recover, but that's it.

  • Make sure that there are no connections to the database before shutting down the SQL services; you can check via sp_who2 'active'.

    Make sure that there are no backups etc. running.
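
    A quick sketch of those checks, using procedures and views that ship with SQL Server 2005:

    exec sp_who2 'active'  -- list active sessions; look for lingering connections and running backups
    go
    -- or count connections per database
    select db_name(dbid) as database_name, count(*) as connections
    from master.sys.sysprocesses
    where dbid > 0
    group by dbid
    go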

  • About the recovery time: we typically reboot on the weekend to perform updates when there are no users in the system. However, it always ends up being 2-3 hours of recovery. I hear differing ideas about how (or even if) to address this, but I will try the sp_who2 'active' suggestion to see if there are 'lingering' connections or other activities that could cause this.

    We do not do this during a backup, as those run overnight.

    Thanks for the tips!

  • Too many VLFs?

    Google: 8 steps to better transaction log throughput
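
    For reference, one way to count VLFs on SQL Server 2005 is to capture DBCC LOGINFO into a temp table; it returns one row per VLF, and the column layout below is the 2005/2008 one, so treat this as a sketch:

    create table #loginfo (FileId int, FileSize bigint, StartOffset bigint,
        FSeqNo int, Status int, Parity tinyint, CreateLSN numeric(25,0))
    insert #loginfo exec ('dbcc loginfo')
    select count(*) as vlf_count from #loginfo  -- dozens is healthy; thousands is trouble
    drop table #loginfo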

  • About VLFs...frighteningly, I ran DBCC LOGINFO and it returned over 833,000 rows!

    Could this be contributing to recovery times, and how do I get it cleaned up?

  • What does this return?

    sp_configure 'show advanced options', 1
    go
    reconfigure
    go
    sp_configure 'recovery interval'
    go

    Edit - correction made
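
    For context: 'recovery interval' is a server-wide setting for how long, in minutes, crash recovery of each database should take; the default run_value of 0 targets roughly one minute. If the output above shows it has been raised, a minimal sketch to put it back to the default:

    sp_configure 'recovery interval', 0
    go
    reconfigure
    go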

  • noyb 30173 (10/15/2011)


    About VLFs...frighteningly, I ran DBCC LOGINFO and it returned over 833,000 rows!

    Could this be contributing to recovery times, and how do I get it cleaned up?

    We've definitely found the problem.

    You need to shrink the log file as small as possible and re-grow it back to its original size. Kimberly Tripp's post has the details.

    Short story: take a tran backup, shrinkfile, then regrow in 8000 MB chunks to the original size.

    If you can afford to put it in simple recovery, then: checkpoint, shrink, grow.

    How big is the log?
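
    A sketch of that sequence, with YourDB and YourDB_log as placeholder names and an illustrative target size:

    backup log YourDB to disk = N'X:\Backups\YourDB_log.trn'  -- empty the active log first
    go
    use YourDB
    go
    dbcc shrinkfile (YourDB_log, 1)  -- shrink the log as small as it will go
    go
    -- regrow in 8000 MB steps so the new VLFs come out a sensible size
    alter database YourDB modify file (name = YourDB_log, size = 8000MB)
    go
    alter database YourDB modify file (name = YourDB_log, size = 16000MB)
    go
    -- ...and so on until the intended size is reached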

  • The logfile 'was' over 220 GB in size. Thankfully, I've managed to shrink it back to 20 GB, and recovery now occurs in about 60 seconds...quite an improvement!

    Next I will look at re-growing it, although I suspect it was totally unnecessary to have it that big in the first place...the db this log serves is only 190 GB itself!

    Thanks again for all the tips...your advice has been invaluable!

  • noyb 30173 (10/16/2011)


    The logfile 'was' over 220 GB in size. Thankfully, I've managed to shrink it back to 20 GB, and recovery now occurs in about 60 seconds...quite an improvement!

    Next I will look at re-growing it, although I suspect it was totally unnecessary to have it that big in the first place...the db this log serves is only 190 GB itself!

    Thanks again for all the tips...your advice has been invaluable!

    I strongly disagree with regrowing the log file in this case. As you said, you just don't need a 220GB log file for a 190GB database. You shouldn't even need an "8000 MB" (8GB) log file for such a database unless you're doing an insane amount of logged batch work. If you haven't done so already, what you do need is to read up on how to do log file backups so that your logfile doesn't get that big to begin with. Of course, you should also read up on how to do restores, as well.

    If you still have 20GB hanging around in your log file, then you may not have cleared out the logfile using a logfile backup prior to your shutdown.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
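
    A minimal example of the routine log backup being described, assuming the full recovery model; in practice this would run from an Agent job on a short schedule, with a unique file name per backup:

    backup log YourDB
    to disk = N'X:\Backups\YourDB_log_1.trn'
    with checksum
    go

    Regular log backups mark the inactive portion of the log for reuse, which is what keeps the file from growing without bound.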

  • Jeff Moden (10/16/2011)



    I strongly disagree with regrowing the log file in this case. As you said, you just don't need a 220GB log file for a 190GB database. You shouldn't even need an "8000 MB" (8GB) log file for such a database unless you're doing an insane amount of logged batch work. If you haven't done so already, what you do need is to read up on how to do log file backups so that your logfile doesn't get that big to begin with. Of course, you should also read up on how to do restores, as well.

    If you still have 20GB hanging around in your log file, then you may not have cleared out the logfile using a logfile backup prior to your shutdown.

    I could see a log needing to grow larger for a database of this size if you are doing re-indexing or other system maintenance tasks...unless you were doing piecemeal work where you could manage the log backups in between tasks, but that would be cumbersome.

    I have always tried to test how large the log file would need to get during maintenance tasks (in a test system)...then set the log to that size in production straight up.
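
    One simple way to take that measurement on SQL Server 2005 is to sample log usage while the maintenance task runs:

    dbcc sqlperf (logspace)  -- reports log file size and percent used for every database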

  • One thing you must do is fix your autogrow increments so that if the log grows you're not back in this situation soon after.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
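
    A sketch of what fixing the increment might look like; the names and size are placeholders, and a fixed increment is generally preferable to percentage growth for a large log:

    alter database YourDB
    modify file (name = YourDB_log, filegrowth = 512MB)
    go
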
  • GilaMonster (10/19/2011)


    One thing you must do is fix your autogrow increments so that if the log grows you're not back in this situation soon after.

    There you go being pro-active again Gail!
