Shrink a datafile

  • I have a datafile with more than 80GB allocated in total; EM shows 52GB used and 37GB free. After shrinking this datafile from EM with the option 'truncate free space from the end of the file', 10GB was released, leaving 27GB free in this datafile.

    I then ran the shrink again in EM with the 'shrink file to' option set to the minimum value, which is the size of the used space. I thought this option would reorganize any fragmented space and shrink/release the remaining 27GB of free space.

    After the process started, I found it stayed in 'sleeping' status the whole time. How can I tell whether this process is actually running or doing nothing?

    Thanks

  • And to continue with another question:

    If using the 'shrink file to' option with a specified size in EM, do I need to put the database into restricted-access (single-user) mode so that users cannot change data?

    If the sleeping status is caused by a lock, is it possible that user processes changing data are blocking this shrink process?

    Thanks for all the inputs!
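
    One way to answer the 'sleeping vs. blocked' question yourself: in SQL 2000, the blocked column of master..sysprocesses shows which SPID, if any, is blocking a session. A sketch (the DBCC filter is an assumption about how the shrink session shows up in the cmd column):

```sql
-- Look for the shrink session and anything blocking it.
-- blocked = SPID of the blocking session; 0 means not blocked.
SELECT spid, blocked, status, cmd, waittype, physical_io
FROM master..sysprocesses
WHERE cmd LIKE 'DBCC%'
   OR blocked <> 0
```

    If physical_io keeps climbing, the shrink is moving pages; if blocked stays non-zero, it is waiting on a lock. sp_who2 shows the same information in its BlkBy column.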

  • The reason the first shrink left 27GB behind is that those 27GB are not at the end of the file.

    Since then I have found that shrinking files of similar sizes takes a long time on live databases. However, I have never seen it deadlock; I think it just gets blocked easily.

    How long has it been running? I have left a shrink from 40GB down to 30GB running overnight before.

  • Thank you for your input!

    Yes, I ran this shrink against a live database, so it was in sleeping status most of the time (blocked by other processes?). I terminated the process after it had run for 5 hours with no results.

    Is there a best way to shrink free space that is not at the end of the file?

    Or do we need to put the database into single-user mode to restrict users' access and speed up this process?

    Thanks for the help.

  • Part of what you need to think about is why you want to shrink the file.

    Why did the data grow to 80GB?

    Will that happen again?

    It is generally best to keep some free space in the data files to allow for changes without having to auto-grow.

    Growing and shrinking will cause physical fragmentation on the disk and so is best avoided.

    Personally, I have found that shrinking by small amounts can be effective without having to take the file offline.

    I.e. if you have a 70GB file at the moment, try to shrink it to 67GB. This will require much less reorganization than shrinking to 55GB or 60GB.

    Then do 65GB, 63GB, and so on.

    This technique has worked for me in the past, but I am only guessing at what is going on under the covers.
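
    The small-steps approach can also be scripted rather than run by hand. A sketch, assuming the data file is file id 2 (DBCC SHRINKFILE takes target_size in MB); the 70GB starting size, 55GB floor, and 1GB step are purely illustrative:

```sql
-- Shrink file id 2 in 1 GB steps instead of one big jump.
DECLARE @target INT
SET @target = 70656              -- first target: 69 GB, in MB
WHILE @target >= 56320           -- stop at 55 GB
BEGIN
    DBCC SHRINKFILE (2, @target)
    SET @target = @target - 1024 -- next step: 1 GB smaller
END
```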

    Single-user mode makes sense, but I have never tried it, as the need for space rarely justifies an outage in my environment.

    Another thing to consider is whether the file contains indexes.

    Indexes require free space in order to be rebuilt/defragmented. When the operation is over, the space will appear as free in the datafile; however, it will be needed again later in the maintenance cycle, so you might as well keep it allocated to the datafile.

  • Thanks for the suggestions!!

    The shrink I am doing is to clean up an application issue that caused a database with a normal size of 10GB to grow unexpectedly to 70GB. The app admin is purging the duplicate data, and the freed space needs to be released for other use.

    The datafile became heavily fragmented from purging a large amount of data, which is why most of the free space is not located at the end of the datafile. To release this free space, a shrink/reorg needs to be done...?

    In EM, when choosing the 'shrink file to' option, it seems you can only shrink to the used size; you cannot type in another number. Is this restriction only in EM?

    Thanks

  • Use Query Analyzer to run the SQL against the desired database.

    From Books Online:

    DBCC SHRINKFILE
    ( { file_name | file_id }
        { [ , target_size ]
            | [ , { EMPTYFILE | NOTRUNCATE | TRUNCATEONLY } ]
        }
    )

    I always run:

    SELECT * FROM SYSFILES

    to get the file id and then run something like

    DBCC SHRINKFILE (2, 60000)
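
    One detail worth knowing when reading sysfiles: the size column is in 8KB pages, while DBCC SHRINKFILE's target_size is in MB. A small sketch of the conversion:

```sql
-- sysfiles reports size in 8 KB pages; 128 pages = 1 MB
SELECT fileid,
       name,
       size / 128 AS size_mb
FROM sysfiles
```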

  • I agree entirely with Richard.

    Firstly, only shrink datafiles if you absolutely need to (and I accept that it sounds as if you do need to).

    There are two levels of fragmentation to be aware of:

    1) Table fragmentation, which will result from your archiving/delete process and which must be fixed by rebuilding clustered indexes or similar, and

    2) Windows file fragmentation, which is fragmentation of the datafile on the disk itself. This is caused by repeated file growth and shrinkage. Typically, best practice is to create a database with as much free space as you can afford and never shrink it, and to set the autogrow parameters to add large amounts of disk space each time.
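
    For the first kind, the SQL 2000-era commands are DBCC SHOWCONTIG to measure fragmentation and DBCC DBREINDEX (or a clustered index rebuild) to fix it. A sketch; the table name MyTable is a placeholder:

```sql
-- Measure fragmentation: look at Scan Density and Extent Switches
DBCC SHOWCONTIG ('MyTable')

-- Rebuild all indexes on the table:
-- '' = all indexes, 0 = keep the existing fill factor
DBCC DBREINDEX ('MyTable', '', 0)
```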

    If you do have to shrink datafiles, be aware that the process will usually be slow (much slower than for transaction logs). Like Richard, I prefer the DBCC SHRINKFILE method to going through Enterprise Manager.
