dbcc shrinkfile and deadlocks

  • I have two practically identical ~65 GB DBs that I am attempting to shrink during production time. On one of them I'm becoming a deadlock victim; the other sails on for an hour and completes w/o a problem. I'm running this statement:

    dbcc shrinkfile ('MCS_Data',60000)

    The only real difference I can see between the two is that the error-prone one has 20 GB of free space vs. 4 GB on the non-error-prone one. When I look at the SPIDs involved in the deadlock, they are normal user processes that I'm sure are also present on the non-prone one. Any suggestions?

  • Hi, I think you should look for an open transaction that was never committed:

    (search for 'DBCC OPENTRAN' in Books Online)

    DBCC OPENTRAN
        ( { 'database_name' | database_id } )
        [ WITH TABLERESULTS
            [ , NO_INFOMSGS ]
        ]

    and try to roll it back if one exists (a minimal sketch of running it against your database is below).
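
    For example, a minimal sketch, run in the context of the database being shrunk (the database name isn't given in the thread, so this just uses the current database):

    -- Report the oldest active transaction in the current database
    DBCC OPENTRAN WITH TABLERESULTS, NO_INFOMSGS;
    -- If it reports a long-running SPID, investigate it before deciding
    -- whether a rollback/KILL is appropriate:
    -- KILL 53;   -- placeholder SPID; substitute the one reported above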

  • First of all, there is rarely a good reason to shrink the DB.  It really makes things ugly on the inside. 

    Next point: one of them only has 4 GB free in a 65 GB DB?  Again..... WHY WOULD YOU BOTHER!  I could understand it if you had a 100 GB DB with 30-40 GB free...

    And LASTLY!!  Why in the world would you do this during production hours?  Do you just like doing things that could make your life miserable? 

    Your statement is OK.  But in the best case you are only recovering 5 GB out of 65 GB.  Why do you have to run this regularly?  Are you filling, then deleting often?  If so, there is a problem with that. 

    You should almost always have at least 10% - 20% free space in the DB for efficient operation.

    I am going to bet that you have autogrow turned on for the file, and it is set to the default of 10% (a query to verify this is sketched at the end of this post).

    Your problem is this: you are shrinking the file while it is in use.  Someone needs space, so the file grows... by its configured increment of 10%, and 10% of 65 GB is 6.5 GB...

    Solution:  STOP SHRINKING THE DB DURING PRODUCTION!   Hope this helps.  Sorry if I was a little harsh. 
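
    To check whether that is the case, here is a minimal sketch (SQL Server 2005 and later) that looks up the current size and growth setting of the data file; the logical name 'MCS_Data' comes from the original post:

    SELECT DB_NAME(database_id) AS database_name,
           name                 AS logical_file_name,
           size * 8 / 1024      AS current_size_mb,   -- size is stored in 8 KB pages
           growth,                                     -- pages, or a percentage
           is_percent_growth                           -- 1 = growth is a percentage
    FROM sys.master_files
    WHERE name = 'MCS_Data';

    If is_percent_growth comes back as 1 with growth = 10, the file is indeed set to grow by 10% each time, which is exactly the scenario described above.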

  • Bob is correct. The word from MS engineers is don't shrink the DB. Shrink the log if needed (see the sketch below), but not the DB. It can mess up indexing efficiency (in terms of space), fragment things, etc.

    And I agree, it's not worth recovering so little space. If you need it that badly, you're better off looking for more/new drives.
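
    If the goal is just to release log space, a minimal sketch of shrinking only the log file looks like this; 'MCS_Log' is a hypothetical logical file name and the 10000 MB target is only an illustration:

    -- Shrink only the transaction log file; no data pages get moved,
    -- so it is far less intrusive than shrinking the data file.
    DBCC SHRINKFILE ('MCS_Log', 10000);   -- hypothetical name, target size in MB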

  • ok, let me clear things up... I'm trying to shrink the one that has 20 GB free in preparation for a db move this coming weekend (I'm satisfied with the one that has 4 GB free). The reason we have so much free space is that I found a log table that wasn't being maintained, so I have instituted maintenance on it, and I believe that is where the 20 GB of free space is coming from. We are trying to limit downtime, which is why I'm attempting this during production (and in reality we don't really have non-production time, just scheduled maintenance time).

    First off, the command you are using will only recover 5 GB if your current file size is 65 GB.  That aside, you shouldn't be getting deadlocks.  Have you enabled deadlock tracing?  (A sketch of turning it on is at the end of this post.) 

    http://msdn2.microsoft.com/en-us/library/ms178104.aspx

    If you have the trace we might be able to get a better idea what the issue is.
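
    As a minimal sketch (SQL Server 2005), you could turn on trace flag 1222 server-wide so the deadlock graph is written to the error log, rerun the shrink, and then read the log:

    DBCC TRACEON (1222, -1);     -- -1 = enable globally; 1204 gives the older-style output
    -- ...let the shrink run and the deadlock happen, then:
    EXEC sp_readerrorlog;        -- the deadlock graph shows up in the error log
    DBCC TRACEOFF (1222, -1);    -- turn it back off when you're done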
