DBCC SHRINKFILE

  • Hi,

    I have no experience with shrinking log files. Based on my reading, I would like to run the following shrink command:

    DBCC SHRINKFILE (ST3DB_log, 1) WITH NO_INFOMSGS

    My questions are as follows:

    1. Is my shrink command correct?

    2. Once the command completes successfully, is it guaranteed that no data will be missing?

    I'm really looking for some help.

  • (1) It looks correct. But check it against the syntax in Books Online.

    (2) Shrinking a file will not remove any data from it.
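
    As a rough sketch of how you might run it with a before-and-after check (the database name ST3DB is assumed here from the log file name in your post; the target size 1 is in MB):

    DBCC SQLPERF(LOGSPACE);                            -- current log size and % used for every database
    USE ST3DB;                                         -- assumed database name
    DBCC SHRINKFILE (ST3DB_log, 1) WITH NO_INFOMSGS;   -- your command, as posted
    DBCC SQLPERF(LOGSPACE);                            -- confirm the new size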

    John

  • Why are you shrinking?

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (1/25/2011)


    Why are you shrinking?

    Hello Ma'am,

    My log file grows bigger day by day. The log file is currently bigger than the MDF file.

  • Let me guess, full recovery model, no log backups?

    If so, the problem isn't that you need to shrink, it's that you need to properly manage your transaction logs. Please read through this: http://www.sqlservercentral.com/articles/64582/
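
    If you want a quick check (just a sketch, with the database name ST3DB assumed from your first post), this shows the recovery model, and scheduled log backups are the usual fix:

    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = 'ST3DB';                              -- assumed database name

    -- If it is FULL and you need point-in-time recovery, take regular log backups, for example:
    BACKUP LOG ST3DB TO DISK = 'D:\Backups\ST3DB_log.trn';   -- path is illustrative only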

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Create a job that shrinks your log files daily, when the database is not in use.

    NEVER NEVER EVER GIVE UP;-)

  • scorpianprince (1/25/2011)


    Create a job that shrinks your log files daily, when the database is not in use.

    No, absolutely not! Terrible practice. No file should be shrunk on a regular basis.

    Step 1: Find out why the log is large. See my previous post

    Step 2: Fix whatever caused the log to grow so large

    Step 3: Run a once-off shrink operation to return the log to a reasonable size (NOT 0)
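
    For steps 1 and 3, a minimal sketch (database and file names assumed from the earlier posts; 2048 MB is only an example target, not a recommendation):

    -- Step 1: what is preventing log reuse?
    SELECT name, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'ST3DB';

    -- Step 3: once-off shrink to a reasonable size (here 2048 MB)
    USE ST3DB;
    DBCC SHRINKFILE (ST3DB_log, 2048);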

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • If huge transactions are happening and large insertions and deletions are a daily process, then he doesn't have any other method.

    NEVER NEVER EVER GIVE UP;-)

  • GilaMonster (1/25/2011)


    scorpianprince (1/25/2011)


    Create a job that shrinks your log files daily, when the database is not in use.

    No, absolutely not! Terrible practice. No file should be shrunk on a regular basis.

    Oddly enough, the Syngress 70-432 book has a paragraph on running DBCC SHRINKFILE regularly as a maintenance task, which explains why educating the masses about such things can be an uphill struggle.

    BrainDonor

  • scorpianprince (1/25/2011)


    If huge transactions are happening and large insertions and deletions are a daily process, then he doesn't have any other method.

    Sure he does. Manage the log properly: size it appropriately for the workload and the frequency of log backups, and then leave it alone.

    Growing the log takes resources: it stops all transactions in the DB if the log is full at the point it grows, and the new space must be zero-initialised, which takes IO resources. It is not something you want happening on a regular basis in a busy database.

    Besides, I doubt this is due to 'large insertions and deletions'. Far more likely it's a mixture of full recovery model and no log backups.
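
    For example, a once-off resize with a fixed growth increment might look like this (a sketch only; the names come from the earlier posts and the sizes are placeholders, not recommendations):

    ALTER DATABASE ST3DB
    MODIFY FILE (NAME = ST3DB_log, SIZE = 4096MB, FILEGROWTH = 512MB);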

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • scorpianprince (1/25/2011)


    If huge transactions are happening and large insertions and deletions are a daily process, then he doesn't have any other method.

    If huge transactions are occurring daily that grow your log out to a certain size, then that's how big the log should be. The alternate method is to address the size of the transactions so the log doesn't get so big. Either way, you don't just automatically shrink over & over. That's a very bad practice that leads to serious issues, let alone just being a waste of processing time.
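
    As one illustration of addressing transaction size, a huge delete can be broken into smaller batches so the log can be reused between log backups (the table, column and batch size here are purely hypothetical):

    DECLARE @rows int;
    SET @rows = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (10000) FROM dbo.StagingTable            -- hypothetical table
        WHERE LoadDate < DATEADD(day, -30, GETDATE());      -- hypothetical retention rule
        SET @rows = @@ROWCOUNT;                             -- stop when nothing is left to delete
    END;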

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey (1/25/2011)


    scorpianprince (1/25/2011)


    If huge transactions are happening and large insertions and deletions are a daily process, then he doesn't have any other method.

    If huge transactions are occurring daily that grow your log out to a certain size, then that's how big the log should be. The alternate method is to address the size of the transactions so the log doesn't get so big. Either way, you don't just automatically shrink over & over. That's a very bad practice that leads to serious issues, let alone just being a waste of processing time.

    Can you give me the steps to address the size, as mentioned?

  • Little Nick (1/25/2011)


    Can you give me the steps to address the size, as mentioned?

    Set up transaction log backups to run every 20 minutes (more or less, based on need).
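
    The backup itself is a single statement that you would schedule, for example with a SQL Server Agent job (the database name and path here are just an illustration):

    BACKUP LOG ST3DB
    TO DISK = 'D:\Backups\ST3DB_log.trn'
    WITH NOINIT;   -- append to the existing backup file; adjust naming and retention to suit your strategy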

    Read & follow Gail's advice.

  • Little Nick (1/25/2011)


    Can you give me the steps to address the size, as mentioned?

    Did you read the article I referenced? (I'm assuming not because it answers exactly that question)

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • ok. will do
