DBCC SHOWCONTIG Result (Logical Fragmentation)

  • I have a database of around 2.5 GB. It's an OLTP database, so a lot of modifications are made on a daily basis to one of the main tables.

    As a result, the transaction log grows every few days, and queries start running slowly after 4-5 days. So I have been reindexing the database every 4-5 days to speed things up.

    Here are the steps I take.

      I do all of this during off-peak hours.

    1) Shrink the log file

     BACKUP LOG [Databasename] WITH TRUNCATE_ONLY  

     DBCC SHRINKFILE(Databasename_Log,1,NOTRUNCATE) 

     DBCC SHRINKFILE(Databasename_Log,1,TRUNCATEONLY) 

    2) Run the DBCC SHOWCONTIG command on the heavily modified table to check its fragmentation level.

     DBCC SHOWCONTIG(tablename)

    3) If the results show that the table is fragmented and the indexes need to be rebuilt, I run the DBCC DBREINDEX command.

     DBCC DBREINDEX(tablename,'',0)

    Rebuilding all the indexes on this table takes around 15-20 minutes.

    4) Once that has finished, I run DBCC SHOWCONTIG again to see whether the table has been defragmented correctly.

    The things I check to make sure the table is defragmented, as I read in BOL, are (a small sketch of pulling these numbers as a rowset follows this list):

     a) Extent Switches should be as close as possible to Extents Scanned

     b) Logical Scan Fragmentation and Extent Scan Fragmentation should both be close to 0

     c) Avg. Bytes Free per Page should be low, and Avg. Page Density (full) should be high
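    The same numbers can also be returned as a rowset, one row per index, with the TABLERESULTS option (I believe this is available in SQL Server 2000; 'tablename' is the same placeholder as in step 2):

     -- One row per index; the columns of interest for a), b) and c) above are
     -- Extents, ExtentSwitches, LogicalFragmentation, ExtentFragmentation,
     -- AverageFreeBytes and AveragePageDensity.
     DBCC SHOWCONTIG(tablename) WITH TABLERESULTS, ALL_INDEXES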

    5) Once the indexes are rebuilt, the data file grows by about 2 GB and the log file grows by 800 MB to 1 GB.

    So the first thing I do is shrink the transaction log back down with the same commands shown in step 1.

    6) After this I check the database size and unallocated space by running the following command:

    sp_spaceused @updateusage = 'True'

    After this command executes, I see a large amount of unallocated space.

    (I don't have a clear idea of how this unallocated space works. Is it generally high? If someone could also explain this, I would appreciate it.)
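    The way I have been trying to make sense of it (not sure if this is right) is to compare each file's size with the pages actually in use; the difference appears to be what sp_spaceused reports as unallocated:

     -- Rough sketch: file size vs. space actually used, per database file.
     -- size and FILEPROPERTY(..., 'SpaceUsed') are in 8 KB pages, so divide by 128 for MB.
     SELECT name                                               AS LogicalFileName,
            size / 128.0                                       AS FileSizeMB,
            FILEPROPERTY(name, 'SpaceUsed') / 128.0            AS SpaceUsedMB,
            (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0   AS UnusedMB
     FROM   sysfiles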

    7) Since I am not very sure about the unallocated space, or whether the size should be left at what it shows, I just go ahead and also shrink the data file with the following commands:

     DBCC SHRINKFILE(Databasename_Data,1,NOTRUNCATE) 

     (it takes around 15-20 minutes to execute this command)

     DBCC SHRINKFILE(Databasename_Data,1,TRUNCATEONLY)

     

    8) After this, when I check the space used again with sp_spaceused @updateusage = 'True', the unallocated space column is reduced drastically, bringing the total size of the database back to what it was originally.

    9) After this I update the statistics manually (sp_updatestats).

    10) After the statistics have been updated, I recompile all the necessary stored procedures and triggers (roughly as sketched below).
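    For reference, steps 9 and 10 boil down to something like this ('tablename' again stands for the heavily modified table; as I understand it, sp_recompile just marks the procedures and triggers that reference the table so they recompile on their next execution):

     -- Step 9: refresh statistics for the user tables in the current database
     EXEC sp_updatestats

     -- Step 10: mark everything that references the table for recompilation on next run
     -- ('tablename' is a placeholder for the heavily modified table)
     EXEC sp_recompile 'tablename'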

    All of this is done within an hour at most.

    Now I have a couple of questions about this.

    1) Am I doing this the right way? Is there another way to rebuild the indexes, improve query speed, and keep the database size limited?

    2) One thing I notice is that after I complete all these steps, I run DBCC SHOWCONTIG on the table again to make sure the index rebuild hasn't been undone by the subsequent steps, and the one thing that changes is the Logical Scan Fragmentation, which goes back up to around 95%. Everything else stays the same.

    Does this mean that after I shrank the data and log files and updated the statistics, my data became fragmented again?

    If so, I would have to run the DBCC DBREINDEX command again, but that would again grow the log and data files, which I would then have to shrink. So it's just a loop.

    What is it that I am missing here? Can anyone let me know?

    3) I have put these steps together from reading in various places and they seem to work, but I am not sure this is the best way. Is there a place where I can read about the necessary steps for database maintenance and day-to-day activities?

    Any help would be appreciated.

    Thanks

  • That's a lot to cover in one answer.

    I'll start with the first point and then maybe move on to the rest when I get time later.

    From the sounds of things, you're not performing regular transaction log backups, which means that in the event of a crash you can only restore to the last full backup.

    I would never want to shrink the log file; it causes OS-level file fragmentation when it grows again, and it indicates that you're letting it get out of control. If it were me, I'd create it at an appropriate size (no idea what, because I don't know your system) and then schedule regular transaction log backups (via a maintenance plan if need be). A rough sketch of pre-sizing the log follows.
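    Just to illustrate the sizing idea (a minimal sketch: the logical file name Databasename_Log and the 1 GB / 100 MB figures are assumptions, not recommendations for your system):

     -- Pre-size the log once and give it a fixed growth increment,
     -- instead of letting it grow and then shrinking it back repeatedly.
     -- File name and sizes are illustrative only.
     ALTER DATABASE [Databasename]
     MODIFY FILE (NAME = Databasename_Log, SIZE = 1024MB)

     ALTER DATABASE [Databasename]
     MODIFY FILE (NAME = Databasename_Log, FILEGROWTH = 100MB)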

    Hope this helps. I'll try to come back later and have a look at the other steps.

    Mike

  • I agree with Mike; the first step is to get your transaction log under control.

    Here are the basic steps (at least in my opinion):

    1. Create a job that backs up your log file (you will need to determine how often). We store ours on a dedicated backup server, then delete the log backups once we have a full DB backup at night.

    DECLARE @vcLogPath varchar(300)

          , @vcLogFile varchar(1000)

          , @vcDate varchar(8)

          , @vcTime varchar(8)

          , @vcDBName sysname

          , @vcServer sysname

          , @vcDB sysname

    /* In case server or DB name has a \ in it, replace with - */

    SELECT @vcServer = Replace( @@ServerName, '\', '-' )

         , @vcDB = Replace( DB_NAME(), '\', '-' )

    SELECT @vcLogPath = '\\BACKUP_SERVER\' + @vcServer + '\' + @vcDB + '\TransLog\'

         , @vcDate = CONVERT(VARCHAR(8), GetDate(), 112)

         , @vcTime = CONVERT(VARCHAR(8), GetDate(), 114)

         , @vcDBName = DB_NAME()

    SELECT @vcTime = SUBSTRING(@vcTime,1,2) + SUBSTRING(@vcTime,4,2) + SUBSTRING(@vcTime,7,2)

    /* Now build the name up */

    SELECT @vcLogFile = @vcLogPath + @vcDB + '_TransLog_' + @vcDate + '_' + @vcTime + '.bak'

    BACKUP LOG @vcDBName

       TO DISK = @vcLogFile

    1alt. If you have a good backup, you can switch your DB into SIMPLE recovery mode (remember to switch it back to FULL when you're done, and take a full backup afterwards to restart the log backup chain).

    DECLARE @vcCMD varchar(300)

    SELECT @vcCMD = 'ALTER DATABASE [' + DB_NAME() + '] SET RECOVERY SIMPLE'

    EXEC( @vcCMD )

    2. I normally loop through the indexes that need the most help, running DBCC DBREINDEX for the ones that have the lowest Scan Density and the highest extent difference (extent difference being [ExtentSwitches - Extents]; ideally ExtentSwitches should be equal to, or as close as possible to, the number of Extents). I only get one shot a month at fixing indexes. A rough sketch of this loop follows the note below.

    NOTE: You need to pay attention to the AvgPageDensity too.
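    Here's a rough sketch of the kind of loop I mean, not a production script: it collects DBCC SHOWCONTIG WITH TABLERESULTS output for every user table, then rebuilds all indexes on tables whose worst Scan Density falls below a threshold. The 30% threshold is just an assumption, and the #frag column list follows the BOL description of TABLERESULTS for SQL Server 2000, so verify it against your build first.

    -- Sketch only: collect fragmentation info for all user tables, then rebuild the worst ones.
    CREATE TABLE #frag
    (
        ObjectName sysname, ObjectId int, IndexName sysname NULL, IndexId int,
        [Level] int, Pages int, Rows bigint, MinimumRecordSize int,
        MaximumRecordSize int, AverageRecordSize int, ForwardedRecords int,
        Extents int, ExtentSwitches int, AverageFreeBytes float,
        AveragePageDensity float, ScanDensity float, BestCount int,
        ActualCount int, LogicalFragmentation float, ExtentFragmentation float
    )

    DECLARE @vcTable sysname

    DECLARE curTables CURSOR FAST_FORWARD FOR
        SELECT name FROM sysobjects WHERE xtype = 'U'

    OPEN curTables
    FETCH NEXT FROM curTables INTO @vcTable
    WHILE @@FETCH_STATUS = 0
    BEGIN
        INSERT INTO #frag
        EXEC ('DBCC SHOWCONTIG ([' + @vcTable + ']) WITH TABLERESULTS, ALL_INDEXES')
        FETCH NEXT FROM curTables INTO @vcTable
    END
    CLOSE curTables
    DEALLOCATE curTables

    -- Rebuild all indexes on tables whose worst index falls below the (assumed) 30% Scan Density
    DECLARE curWorst CURSOR FAST_FORWARD FOR
        SELECT DISTINCT ObjectName
        FROM   #frag
        WHERE  IndexId BETWEEN 1 AND 250   -- skip heaps (0) and text/image allocations (255)
          AND  ScanDensity < 30.0

    OPEN curWorst
    FETCH NEXT FROM curWorst INTO @vcTable
    WHILE @@FETCH_STATUS = 0
    BEGIN
        DBCC DBREINDEX (@vcTable, '', 0)   -- rebuild every index on the table
        FETCH NEXT FROM curWorst INTO @vcTable
    END
    CLOSE curWorst
    DEALLOCATE curWorst

    DROP TABLE #frag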

    3. Once the DBCC DBREINDEX step is complete, we run DBCC INDEXDEFRAG on all of the other indexes (a one-line example follows). If you've not set your DB to SIMPLE mode, you can always call the job that backs up your logs in between: EXEC msdb.dbo.sp_start_job @job_name = 'Backup log'
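    For reference, the defrag call looks like this (database, table and index names are placeholders); unlike DBCC DBREINDEX, DBCC INDEXDEFRAG is an online operation that works in a series of short transactions, so the log can still be backed up while it runs:

     -- Placeholder names; defragments a single index without holding long-term locks.
     DBCC INDEXDEFRAG (Databasename, tablename, indexname)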

    Hope this helps a little…

  • Thanks for the replies.

    Currently I am running the Standard Edition of MS SQL Server 2000 on Windows 2000 Server.

    The major problem I am facing is that the disk is divided into 2 partitions: drive C (for OS system files) and drive D (for MS SQL Server).

    The C drive is around 3 GB and the D drive is around 12 GB. (I know that's not much, but that's how it was even before I came in, so there is nothing we can do about it as of now.)

    So as of now I have created jobs that take full backups of the databases on a daily basis during off-peak hours, and a batch file, run from the scheduler, that moves those backups to a backup server.

    I am not able to back up directly to the backup server since I get some kind of permission problem (even though I have granted all the rights to the account on the backup server).

    So under these conditions, if I don't truncate the log file, it fills up pretty quickly.

    I know it's not good practice to keep truncating the log file every now and then, since it adds extra I/O load, but right now I don't see any other option.

    I'll have to talk with my network admin to find out why I am not able to take a direct network backup even after granting all the privileges.

    If there is some other way I can keep my data and log files under control and still reindex successfully every few days, let me know.

    Thanks

     

     

  • As was stated earlier, you have to set up a regular transaction log backup plan. BOL says:

    "When SQL Server finishes backing up the transaction log, it automatically truncates the inactive portion of the transaction log. This inactive portion contains completed transactions and so is no longer used during the recovery process. Conversely, the active portion of the transaction log contains transactions that are still running and have not yet completed. SQL Server reuses this truncated, inactive space in the transaction log instead of allowing the transaction log to continue to grow and use more space.

    Although the transaction log may be truncated manually, it is strongly recommended that you do not do this, as it breaks the log backup chain. Until a full database backup is created, the database is not protected from media failure. Use manual log truncation only in very special circumstances, and create a full database backup as soon as practical."

    So, if you set up a job to back up the log every hour or so (more or less often, depending on the volume of transactions over the course of a day), you should not have a problem with the log file growing out of control. If you stop shrinking it every day, it will find the right size and remain fairly constant. A minimal example of such a job step is below.
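    Just as a minimal illustration (the database name and path are placeholders, and in practice you would append a date/time to the file name as in the script posted above), the hourly job step can be as simple as:

     -- Placeholder database name and path; schedule this as a job step every hour or so.
     BACKUP LOG [Databasename]
        TO DISK = 'D:\Backups\Databasename_TransLog.bak'
        WITH NOINIT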

    Secondly, don't shrink the data file.

    When you do this after you have re-indexed the database, you are undoing a lot of your re-indexing. When you shrink the data file, pages are moved around to reclaim space, and the indexes become fragmented again. It's a bad practice, especially on a daily basis!

     

    Regards, Jim C
