Managing a Very Large Database: Questions

  •  

    Hi All,


    I currently have a problem with managing space on one of my servers. The server holds a very large database, 800 GB in size to be precise. We drop tables from time to time and then shrink the database afterwards to free up space.

     

    When I tried to shrink the database, it was taking over 8 hours and CPU usage was hitting 100%, so I had to stop it.


    The actual space used on the server, as reported by sp_spaceused, is as follows:

     

    Database size: 805679.38 MB

    Unallocated space: 143749.79 MB

     

    Reserved: 472913752 KB

    Data: 388724378 KB

    Index size: 186791342 KB

    Unused: 896523 KB

     

    Number of Filegroups = 1
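
    For reference, those figures came from something along these lines (the database name here is just a placeholder):

        USE MyBigDB;   -- placeholder for the 800 GB database
        GO
        -- refresh the usage counters first so the figures are accurate, then report space
        EXEC sp_spaceused @updateusage = N'TRUE';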

     

    I have recommended some solutions here; they are as follows (a rough sketch of the filegroup part follows the list):

     

    Create multiple filegroups for different kinds of data

    Move the indexes to a separate filegroup

    Reduce the size of the data file first, so that we can start implementing the filegroups

    Defragment the indexes
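
    Roughly what I have in mind for the filegroup part is sketched here, assuming SQL Server 2005 syntax; the filegroup, file, path, table and index names are all made up:

        -- add a new filegroup and give it a file on a separate drive
        ALTER DATABASE MyBigDB ADD FILEGROUP IndexFG;
        ALTER DATABASE MyBigDB
            ADD FILE (NAME = N'MyBigDB_IndexFG1',
                      FILENAME = N'E:\SQLData\MyBigDB_IndexFG1.ndf',
                      SIZE = 50GB, FILEGROWTH = 5GB)
            TO FILEGROUP IndexFG;

        -- move an existing nonclustered index onto the new filegroup by rebuilding it there
        CREATE NONCLUSTERED INDEX IX_BigTable_SomeCol
            ON dbo.BigTable (SomeCol)
            WITH (DROP_EXISTING = ON)
            ON IndexFG;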


    In the meantime, I need to reduce the size of this 800 GB database before I can start to implement the filegroups.

     

    How can I go about doing this, and what recommendations would you all make?

     

    While attempting to shrink, what is the best command to use in my situation?


    Kindest Regards,

    John Burchel (Trainee Developer)

  • In your case

    Database size: 805679.38 MB

    Unallocated space: 143749.79 MB

    so you have about 18% unallocated free space.

    Try DBCC SHRINKDATABASE.

    When using this command, the important point to take care of is the target percent you set in DBCC, rather than jumping straight from 18% free down to 5%.

    Do it in steps when the DB is this large: first from 18 to 15, then 15 to 12, then 12 to 10. That way each step completes in less time and you stay in control of the process.

    Another point to note is that DBCC SHRINKDATABASE operations can be stopped at any point in the process and any completed work is retained. A stepped example is sketched below.
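
    A stepped shrink would look something like this (MyBigDB is just a placeholder for your database name; run each step separately and check the result in between):

        -- the second argument is the percent of free space to leave in the files after the shrink
        DBCC SHRINKDATABASE (MyBigDB, 15);   -- first pass: leave 15% free
        DBCC SHRINKDATABASE (MyBigDB, 12);   -- second pass: leave 12% free
        DBCC SHRINKDATABASE (MyBigDB, 10);   -- final pass: leave 10% free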

    Hope this helps,


    Kindest Regards,

    Sameer Raval [Sql Server DBA]
    Geico Insurance
    RavalSameer@hotmail.com

    Thanks for the reply. I didn't quite understand you when you said to do it in steps; do you mean

    dbcc shrinkdatabase(parameter,10)

    then next is

    dbcc shrinkdatabase(parameter,12)?


    Kindest Regards,

    John Burchel (Trainee Developer)

    By the way, if you are expecting it to grow I would not shrink it at all!! You will incur physical fragmentation later when it fills up and has to be expanded again.

    Cheers,

    * Noel

    I would agree with Noel. Some time ago we archived about 200 GB from our 900 GB database and decided to defrag the database. The defrag ran for over 20 hours with no way to know the end time, so we cancelled it. The only alternative to that is shrinkfile (I assume your db consists of multiple filegroups), which took about 5 hours on average for each of the filegroups. After a few months we were back at our 900 GB, not to mention the physical fragmentation Noel mentioned. So the only advantage in our case was that, for a while, we were not worrying about space issues when restoring that database to the other servers....
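
    For what it is worth, our per-file shrinks were along these lines (the logical file name and the target size in MB are made up):

        USE OurBigDB;
        -- shrink one data file at a time down to a target size given in MB
        DBCC SHRINKFILE (N'OurBigDB_Data2', 150000);   -- target roughly 150 GB for this file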

    Correction: "defrag the database" above should read "dbcc shrinkdatabase".

  • When you say you need to shrink the database before you can create filegroups, you imply that you don't have enough disk space and adding disk space is out of the question.  Hardware costs money, but it is very difficult and time-consuming to rearrange almost a terabyte of data in place.  Would it be possible to even lease some attached storage for a month or two to give you some room to work?

    If instead you must solve the problem with a lot of time and aggravation rather than hardware, you might consider dumping the largest tables to flat files with BCP OUT, truncating them, running DBCC SHRINKDATABASE on what remains, creating the new filegroups and moving tables & indexes into them, and then reloading the tables with BCP IN (a rough outline is sketched below). This obviously requires dropping and recreating foreign keys, plus a validation procedure (rowcounts? checksums of every row?) to prove that all the data was restored, in addition to the verified backup and long maintenance window you're going to need no matter what approach you use.
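
    A very rough outline of that approach, with the table name, paths, and server name all hypothetical:

        REM export the largest table to a flat file in native format (run from a command prompt)
        bcp MyBigDB.dbo.BigTable out D:\Staging\BigTable.dat -n -S MyServer -T

        -- then, from a query window:
        TRUNCATE TABLE dbo.BigTable;                    -- after dropping the FKs that reference it
        DBCC SHRINKDATABASE (MyBigDB, 10);              -- shrink what is left
        ALTER DATABASE MyBigDB ADD FILEGROUP DataFG2;   -- create the new filegroup(s), add files, move objects

        REM reload the table, again from a command prompt
        bcp MyBigDB.dbo.BigTable in D:\Staging\BigTable.dat -n -S MyServer -T

        -- finally recreate the foreign keys and validate row counts / checksums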
