Unable to shrink large database (data file)

  • I know, I know, shrinking the data file is bad, but I have a case where I just have to. We have an application that was using about 300 Gigs of space, but they've basically cleared most of the archived data and it's now down to 30 Gigs used, though the data file is still at 300 Gigs (270 Gigs free). The users said they won't be storing nearly as much data as they did before, so I don't anticipate it growing to more than 50-70 Gigs. Given that, I'd like to shrink the database down to 80 Gigs just to be safe and recover 220 Gigs.

    Before starting I did some housekeeping and reorganized/rebuilt all the indexes. Then I ran this in hopes that it'd recover some space, which it didn't:

    DBCC SHRINKFILE (ACCTDB, 80000, TRUNCATEONLY);
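
    (Side note: TRUNCATEONLY only releases unallocated space at the end of the file and ignores the target size, which probably explains why it didn't help here. A minimal sketch to sanity-check allocated vs. used space per file, run in the database itself:)

    -- Sketch: allocated vs. actually used space per file; size is in 8 KB pages
    SELECT name                                               AS logical_file_name,
           size / 128                                         AS allocated_mb,
           CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS used_mb
    FROM sys.database_files;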

    So I tried shrinking the database incrementally, just a few gigs at a time, but no matter how small a chunk I tried to shrink, I never made any headway. The database is currently at 305255 Megs, so I ran this just to get started, and it ran for over 40 minutes with no change:

    DBCC SHRINKFILE (ACCTDB, 305000);

    I ended up just killing it. If it takes that long just to trim 255 megs, what's it going to take to trim 220 gigs?

    So, any suggestions? Another option is building a new database and copying everything over, but that's a last resort since this is a vendor-created database and I'd hate to miss something. Oh, and this is on SQL 2008 SP1 (10.0.2531.0) if that helps.

    Thanks.

  • What index fill factor did you use?
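
    A quick way to check what each index is currently set to, a sketch against sys.indexes (fill_factor = 0 means the server default, i.e. pages filled completely):

    -- Sketch: list the fill factor configured on each user-table index
    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name                   AS index_name,
           i.fill_factor            -- 0 = default (pages filled completely)
    FROM sys.indexes AS i
    WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
    ORDER BY table_name, index_name;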

  • What do you see when you check the following DMVs for the session ID of the shrinkfile operation?

    sys.dm_exec_requests

    sys.dm_os_waiting_tasks

    What result do you get when you run the following script in a separate query window while the shrink is running?

    SELECT r.session_id,
           r.command,
           CONVERT(NUMERIC(6,2), r.percent_complete)                                     AS [Percent Complete],
           CONVERT(VARCHAR(20), DATEADD(ms, r.estimated_completion_time, GETDATE()), 20) AS [ETA Completion Time],
           CONVERT(NUMERIC(6,2), r.total_elapsed_time / 1000.0 / 60.0)                   AS [Elapsed Min],
           CONVERT(NUMERIC(6,2), r.estimated_completion_time / 1000.0 / 60.0)            AS [ETA Min],
           CONVERT(NUMERIC(6,2), r.estimated_completion_time / 1000.0 / 60.0 / 60.0)     AS [ETA Hours],
           CONVERT(VARCHAR(100),
                   (SELECT SUBSTRING(text, r.statement_start_offset / 2,
                                     CASE WHEN r.statement_end_offset = -1 THEN 1000
                                          ELSE (r.statement_end_offset - r.statement_start_offset) / 2
                                     END)
                    FROM sys.dm_exec_sql_text(r.sql_handle)))
    FROM sys.dm_exec_requests r
    WHERE r.command IN ('DbccFilesCompact')

  • The reorg/rebuild was run with whatever fill factor each index already had, which varies per index. This is a vendor-created database, so I'd rather not modify them.

    Also, I can only run this after hours, but I reviewed the waits in our monitoring software from when I ran the shrink last night: it was on PAGEIOLATCH_SH at one point and SOS_SCHEDULER_YIELD at another, both for the DbccFilesCompact command. I didn't think to investigate the waits while it was running, so I'll monitor it more closely the next time I run this, which probably won't be until this weekend or next week.
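
    In the meantime, something along these lines should show the shrink session's current wait and whether anything is blocking it, a rough sketch combining the two DMVs mentioned above:

    -- Rough sketch: current wait and any blocker for the running shrink session
    SELECT r.session_id,
           r.command,
           r.wait_type,
           r.wait_time,           -- ms spent in the current wait
           w.wait_type            AS waiting_task_wait_type,
           w.blocking_session_id
    FROM sys.dm_exec_requests r
    LEFT JOIN sys.dm_os_waiting_tasks w
           ON w.session_id = r.session_id
    WHERE r.command = 'DbccFilesCompact';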

  • 1. When you've just started the shrink, check whether it is being blocked.

    2. Check for heavy processes running on the server. If you have multiple instances, check all of them.

    3. Check for data corruption.

    4. Try shrinking by just 1 MB for the first cycle.

    5. Do you have LOB or row-overflow data? (See the sketch after this list for a quick way to check items 1 and 5.)
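
    A minimal sketch for points 1 and 5 (run the second query in the database being shrunk):

    -- 1. Is the shrink blocked, and by whom?
    SELECT session_id, command, blocking_session_id, wait_type, wait_time
    FROM sys.dm_exec_requests
    WHERE command = 'DbccFilesCompact';

    -- 5. How much LOB / row-overflow data is there? (total_pages are 8 KB pages)
    SELECT au.type_desc,
           SUM(au.total_pages) / 128 AS total_mb
    FROM sys.allocation_units AS au
    WHERE au.type_desc IN ('LOB_DATA', 'ROW_OVERFLOW_DATA')
    GROUP BY au.type_desc;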

  • Set the DB to single-user mode, then shrink the data file in chunks:

    -- Shrink the data file in 5000 MB chunks

    --/* -- Set DB in single-user mode
    ALTER DATABASE ABC_EDW_LANDING
    SET SINGLE_USER
    WITH ROLLBACK IMMEDIATE;
    GO
    --*/

    USE [ABC_EDW_LANDING]
    GO

    DECLARE @nextSize BIGINT, @targSize BIGINT, @strSQL VARCHAR(8000), @msg VARCHAR(100), @currCount INT, @maxExecutions INT

    SET @nextSize = 626134      -- in MB, the first target size (5000 less than the current data file size)
    SET @targSize = 350000      -- in MB, the final target size
    SET @maxExecutions = 1000   -- safety cap on loop iterations
    SET @currCount = 1

    WHILE (@nextSize >= @targSize AND @currCount <= @maxExecutions)
    BEGIN
        -- Print progress immediately so you can watch it run
        SET @msg = CAST(@nextSize AS VARCHAR(50))
        RAISERROR(@msg, 10, 1) WITH NOWAIT

        SET @strSQL = 'DBCC SHRINKFILE (ABC_EDW_LANDING, ' + CAST(@nextSize AS VARCHAR(50)) + ')'
        -- Can also add TRUNCATEONLY, as in DBCC SHRINKFILE (Data_1, 10000, TRUNCATEONLY)
        EXEC (@strSQL)

        SET @nextSize = @nextSize - 5000 -- shrink the file in 5000 MB chunks
        SET @currCount = @currCount + 1
    END

    --/* -- Set DB back to multi-user mode
    ALTER DATABASE ABC_EDW_LANDING
    SET MULTI_USER;
    GO
    --*/

  • I think this will depend mostly on your I/O subsystem's speed.

    But if there's only about 30 GB of actual data, I think you'd be best off just doing a full shrink:

    DBCC SHRINKFILE ( ACCTDB, 1 );

    (At least I've had more luck with that, and assuming that instant file initialization (IFI) is on.)

    Then, afterward, add free space back to the file, up to the total size you want. It may run a while, but it won't prevent other things from happening.
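
    If you go that route, growing the file back out afterwards is a one-liner, a sketch assuming the database is also named ACCTDB and 80000 MB is the size you want:

    -- Sketch: after the full shrink, grow the data file back to ~80 GB in one step
    -- (assumes the database and the logical data file are both named ACCTDB)
    ALTER DATABASE ACCTDB
    MODIFY FILE (NAME = ACCTDB, SIZE = 80000MB);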

    If this db has too many VLFs, I'd clean those up first, just to make sure the db works as efficiently as possible throughout the whole process.
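
    On SQL 2008 the quick way to check the VLF count is still DBCC LOGINFO, one row per VLF:

    -- Sketch: run in the database; each row returned is one VLF.
    -- A few dozen to a few hundred is fine; thousands usually means the log grew in tiny increments.
    DBCC LOGINFO;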

  • Prior to SQL 2012, the shrinkfile operation is single-threaded and bound to a single CPU, so you can't do much to speed it up other than making sure it runs on super-fast spindles, or disabling all non-clustered indexes before you run it (yes, you'd have to re-enable them afterwards).
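
    If you go the disable/rebuild route, it looks roughly like this per non-clustered index (IX_SomeIndex and dbo.SomeTable are placeholder names):

    -- Sketch: disable a non-clustered index before the shrink...
    ALTER INDEX IX_SomeIndex ON dbo.SomeTable DISABLE;

    -- ...and rebuild it afterwards; a disabled index is unusable until rebuilt
    ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;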
