Database shrink takes ages

  • Hi,

    I've got a 6 gig database, of which 3 gig is actually data (the rest empty space). I run a weekly shrink on the database, which runs for around 3 hours, but just recently, the job has been failing - for some reason the shrink now seems to be taking well over 8 hours all of a sudden.

    I checked to make sure that no extra processing was running at the same time, have swapped execution times, and ran a dbcc checkdb as well, all to no avail.

Checking the processes running during the shrink, I notice it's locked a new table I created - essentially it's an archive of another table, with 28 million records in it.

Any suggestions as to what else to look for here? What processes does the shrink go through? If a table has no index, will this hinder a db shrink?
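On the last question, one way to spot tables with no clustered index (heaps) on SQL 7/2000 is to query the system tables - a heap shows up in sysindexes with indid = 0. A minimal sketch:

```sql
-- List user tables that have no clustered index (heaps).
-- SQL 7 / 2000 system tables; on later versions query sys.indexes instead.
SELECT o.name AS HeapTable
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id AND i.indid = 0
WHERE o.type = 'U'
ORDER BY o.name
```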

  • I've been trying to figure out why some shrinkfile commands seem to take inordinately long, and I think it's related to indexes, but perhaps someone can enlighten me.

    I'm guessing (entirely guessing from watching) that when the shrink involves moving a data table that has indexes, it must be going and updating the pointer in all the indexes as it moves items. More indexes, more pointers.

This seems to be borne out by it shrinking some of our filegroups that contain only indexes quickly, but being VERY slow on some data tables. We have some around 30 GB or so, and I just have to allocate a full 10 hours or so to get them shrunk after redoing something (e.g. dropping and re-adding a clustered index, which doubles the used space).

  • Unless you use the TRUNCATEONLY option...

Definition:

    quote:


    Causes any unused space in the files to be released to the operating system and shrinks the file to the last allocated extent, reducing the file size without moving any data. No attempt is made to relocate rows to unallocated pages. target_size is ignored when TRUNCATEONLY is used.


    ... then yes the data is moved and thus pointers have to be changed to match the new positioning.
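As a concrete sketch of the TRUNCATEONLY variant, which skips the data-movement step (the database and file names here are placeholders):

```sql
-- Release unused space at the end of the data file without moving any pages.
-- MyDb and MyDb_Data are hypothetical names; substitute your own
-- (run sp_helpfile in the database to see its logical file names).
USE MyDb
DBCC SHRINKFILE (MyDb_Data, TRUNCATEONLY)
```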

    However I have a few 4 GB DBs and all take less than an hour.

It could be a locking issue, or it could be the way the maintenance is done compared to mine.

I do the following with mine just before the shrink.

CREATE PROCEDURE sp_cleanup_fridays
    @DbName VARCHAR(50)
AS
SET NOCOUNT ON

DECLARE @SQLStatement VARCHAR(600)

--Rebuild all indexes in the database (see sp_RebuildIndexes below)
SET @SQLStatement = 'sp_RebuildIndexes [' + @DbName + ']'
EXEC(@SQLStatement)

--Update stats information
EXEC('[' + @DbName + ']..sp_updatestats')

--Shrink database to last extent in file
DBCC SHRINKDATABASE (@DbName)

--Update database usage to keep running smooth
DBCC UPDATEUSAGE (@DbName)
GO

CREATE PROCEDURE sp_RebuildIndexes
    @DBName varchar(500)
AS
DECLARE @SQLState varchar(5000)

SET @SQLState = '
DECLARE TableCursor CURSOR FOR
    SELECT ''' + @DBName + '.'' + TABLE_SCHEMA + ''.'' + table_name
    FROM [' + @DBName + '].information_schema.tables
    WHERE table_type = ''base table''
DECLARE @TableName varchar(255)
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT ''Reindexing '' + @TableName AS DOING
    DBCC DBREINDEX(@TableName, '' '', 90)
    FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor'

EXEC (@SQLState)
GO

    No guarantee but like I said this is how I do it and all 36 GB takes about an hour and a half to do all of the above. Of course I am running two DBs at a time when I do it also.

After checking the time of creation, I realise that this problem occurred before I made any changes to the database (which makes it more of a mystery).

I also run my index rebuilds before doing the shrink. I can try the TRUNCATEONLY option, but I don't know if it will fully solve my problem - i.e. it may truncate the size of the db, but the data-paging issue won't be corrected.

  • quote:


    but the data paging issue won't be corrected.


    What do you mean here?

Also, so I can have a better idea of how your machine compares to mine, can you post the overall hardware info (e.g. 2 x 1 GHz CPUs, 36 GB RAID 5 array), what else you do during the maintenance, what version of SQL Server with patch level, and whether you are using maintenance plans?

Sorry, I mean it will only truncate the file - it won't relocate data to empty pages.

The system I'm using is Windows 2000 with SQL 7, and I do use maintenance plans. The machine is a twin 700 MHz box with about 700 MB of memory. Disk setup is RAID 5 with around 36 GB, with mirrored disks for the operating system.

The shrink was set to a slightly earlier time following optimisations; this was when it first failed. It runs on the weekend, so there's no activity occurring on the box, confirmed with Performance Monitor logs.

So SQL 7 with maintenance plans. Do you have the "Repair Minor Problems" option set? If so, remove it, as that has been a notoriously buggy piece and could be the root issue.

  • Antares

sorry for not getting back sooner - been away.

The minor problems option is under the integrity checks, and I did have it set, although it runs under a separate schedule, so I doubt this was the problem.

I've got the db shrink running at the moment, and it's fairly cracking along now as opposed to before. Total time = just over 30 minutes, whereas before it was taking better than 14 hours (and not finishing).

What's changed? I did two things, so I'm unsure which is the answer (although I have a fairly good idea).

The first is the archive table holding millions of records - I added a clustered index to it (where there was no index at all before).

The second thing - I backed up the database first before running the shrink. In this database, I have a bcp process loading a flat file, and to improve performance it's not logging transactions.

    So, where is the answer? My feeling is the indexing, although backing up the db beforehand certainly could have an impact.
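For anyone following along, adding a clustered index to a heap like that archive table is a one-liner (the table and column names here are hypothetical):

```sql
-- Give the heap a clustered index so its pages have a physical order;
-- ArchiveTable and RecordId are placeholder names.
CREATE CLUSTERED INDEX IX_ArchiveTable_RecordId
    ON ArchiveTable (RecordId)
```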

  • I got this information from somewhere, but have never tested it. Hope this will help you.

    ==============================================================

    How can I shrink the transaction log in SQL Server 7 and 2000?

    ==============================================================

    http://www.sqlservercentral.com/faq/viewfaqanswer.asp?categoryid=2&faqid=154

    http://www.sqlservercentral.com/scripts/scriptdetails.asp?scriptid=26&CategoryNm=Maintenance%20and%20Management%20&CategoryID=1

    http://www.sqlservercentral.com/scripts/contributions/666.asp

Soln1 - Have you tried running DBCC SHRINKFILE first with the NOTRUNCATE option, to move all the data to the front of the file, followed by running it again with TRUNCATEONLY to cause the unused space to be released?

Soln2 - I have encountered this problem before. Back up your db first, then turn on the following database options: auto close, auto shrink, truncate log on checkpoint.

Soln3 - According to Microsoft, the way to shrink a log file is to use DBCC SHRINKFILE. This process is painfully slow and inefficient. Even after running the procedure numerous times, you may not get much free space.

Here's another way to shrink your logs that is guaranteed to work every time. First, back up the database and then detach (sp_detach_db) it (you will have to bring the database down to do this). Next, delete the log file and then re-attach (sp_attach_db) the database, not providing the old log file location from within the sp_attach_db command. This will create a new log file with the old log file name, in the old location, at the default size, i.e. 512 KB.

To make sure there is no problem during the operation, the old log file can be renamed and kept until the database is reattached successfully. This provides a backup plan if for some reason SQL Server fails to attach the database without the old log file.

This trick won't work if the database has more than one log file, but if you need to, you can alter the database so that it only has a single log file, which will allow you to perform the above steps. After carrying out the detach and attach activity, the database can again be altered to add more log files. [7.0, 2000]
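Soln1 as a concrete sketch (the names MyDb and MyDb_Log are hypothetical - check your logical file names with sp_helpfile):

```sql
-- Step 1: move used pages towards the front of the file.
-- Step 2: release the now-unused tail of the file back to the OS.
-- MyDb and MyDb_Log are placeholder names.
USE MyDb
DBCC SHRINKFILE (MyDb_Log, NOTRUNCATE)
DBCC SHRINKFILE (MyDb_Log, TRUNCATEONLY)
```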

    ==============================================================

    paul



  • Hi all,

I've got a SQL 2K server, and for some reason the trans log file grew to 92 GB. I did DBCC SHRINKDATABASE (dbname, TRUNCATEONLY). As expected, the trans log file only got a little smaller. Then I tried to use detach and attach; before that, I set the database option to simple, hoping that after a full backup it would auto shrink. To my surprise, after about 2 hours - I hadn't even done the backup yet - when I checked the database again, the transaction log file had been reduced to 1 GB. I really don't understand why this happened. Could you guys explain? Thanks!

    Robert

  • Hi Robert,

Obviously your database is logging all transactions running through it - check the options for your database, and you'll probably find the Recovery Model setting is "Full", which means it's logging all transactions, including bulk loading.

You can set the Auto Shrink option - however, that means you're putting control in SQL's hands, and it will shrink the database (data and transaction log) in a period of quiet activity.

I prefer to switch that option off and set up a transaction log backup job which runs periodically throughout the day. The benefits of this are two-fold: a) it keeps your transaction log continually truncated, and b) you can recover up to the last transaction log backup (combined with the last full backup). However, your situation might be different and may not require such measures.

Check the SQL Books Online manual for more detail about the db options in SQL 2K.
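A minimal version of such a log backup job, assuming a hypothetical database name and backup path, would be:

```sql
-- Scheduled periodically (e.g. hourly); each log backup truncates the
-- inactive portion of the log. MyDb and the disk path are placeholders.
BACKUP LOG MyDb
    TO DISK = 'D:\Backups\MyDb_log.bak'
```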

  • Hi Brendon,

    Thanks for your reply.

I set the option to 'full' and have backed up the trans log file every hour for years. This is our main server; a huge amount of activity goes into the database every day. Just in the last 3 days, after optimizing, the trans log file has grown very large (I optimize the db every weekend). Before, when the log file got too big, I would move something to the network, then back up the trans log file and shrink the database.

But this time the log file is too big and I can't do it as before (on our network there isn't that much free disk space). So I tried to change the option to simple, then back up the full database, hoping the log file would shrink automatically. My question is: even before I did the full backup, the log file automatically shrunk to very small. Why did this happen?

    Thanks!

    Robert

In SQL 7.0 you used to have a "Truncate log on checkpoint" option against the database - this is now the "Recovery" option under SQL 2000.

By switching to Simple, you're effectively enabling that "truncate on checkpoint" behaviour, so it will cull your transaction log right down immediately. Very useful when you need disk space in a hurry.

However, "Simple" mode is meant to be used when you don't want to do transaction log backups, period - you can only do full backups.

    You're better off running your dbshrink after your optimisations on the weekend. The optimisations will increase the size of your database data file (with free space), since all index pointers will be re-arranged. Running the dbshrink afterwards will rearrange the data pages, thereby reducing the size of the data file, and therefore the database.
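The switch itself, in SQL 2000 syntax (MyDb is a placeholder name), is just:

```sql
-- Drop to simple recovery so the log truncates on checkpoint,
-- then return to full; take a fresh full backup afterwards to
-- restart the log backup chain. MyDb is a hypothetical name.
ALTER DATABASE MyDb SET RECOVERY SIMPLE
-- ... shrink the log file here ...
ALTER DATABASE MyDb SET RECOVERY FULL
```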

After the transaction log file was reduced, I changed the option back to 'full' immediately and did a transaction log backup.

    Thank you Brendon for your explanation.

    Robert
