Data and log file segments, extents, and fragmentation: question

  • I have a database which is about 500GB in size and has performance issues. The database was started at 10GB back in the early days, by another DBA. Over 10 years the database has grown to 500GB, with auto-growth set to 200MB.

    You can imagine how many file segments there will be for the data and log files (the log is about 104GB with 200MB auto-growth). The data is also separated into 3 physical files.

    1) How do I find out how many file segments there are for data and log? And whether they are serialized (contiguous) or not?

    2) Should I combine all the data files into one .mdf with a large initial size and restore the database? Would this remove the file segments created by auto-growth?

    Please help, as I need some suggestions on improving this database's performance.

    Thank you.

  • You need to follow up and perform a transaction log backup. I am going to guess that your database is set to the full recovery model (with a xxxGB transaction log for a xxxGB database).

    I am also going to guess that you do not have regular transaction log backups scheduled, which are required when the database is in the full recovery model. This is why the transaction log is so large.
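    If log backups really are missing, taking one is the first step, since it lets SQL Server reuse the inactive portion of the log. A minimal sketch; the database name `YourDB` and the backup path are illustrative placeholders, not values from this thread:

    ```sql
    -- Back up the transaction log so the inactive portion can be reused.
    -- Database name and file path are hypothetical examples.
    BACKUP LOG YourDB
    TO DISK = N'X:\Backups\YourDB_log.trn'
    WITH INIT, CHECKSUM;
    ```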

  • We do log backups on a regular basis, but with full recovery this log has grown out of proportion with the nightly data loads. In this database we load a lot of data every day and purge a lot of it, as per the nature of the application. Log backups are done at regular intervals.

  • "Log backups are done on regular interval.", if you manage a lot of online transactions, you may bkp your transaction log, every 10 minutes for example. And then your log file will be more little.

    After you back up the database and log, you can switch your database to the simple ("normal") recovery model (in Options) if you do not need point-in-time recovery, and back it up whenever you need to; then the log will stay very small.
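    Once the log has been backed up, the oversized log file can be shrunk back to a sensible size and given a larger growth increment. A sketch; the logical file name (check `sys.database_files`) and the sizes are assumptions:

    ```sql
    -- After a log backup, shrink the log file and re-set a sane size/growth.
    -- 'YourDB_log' is the logical file name; sizes here are examples only.
    USE YourDB;
    DBCC SHRINKFILE (N'YourDB_log', 4096);   -- target size in MB

    ALTER DATABASE YourDB
    MODIFY FILE (NAME = N'YourDB_log', SIZE = 4096MB, FILEGROWTH = 1024MB);
    ```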

  • Hi Amu,

    Amu (4/9/2009)


    Over the 10 years of time, the database grown to 500GB and set auto growth by 200MB.

    Is this the right setting? What is the rate of your data growth? It is always a better idea to set auto-growth to a large value so that the data file does not grow numerous times, which leads to fragmentation.
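    As a sketch, the growth increment can be raised so the files grow far less often; the logical file name and size below are hypothetical, not taken from the thread:

    ```sql
    -- Grow the data file in larger, less frequent increments.
    -- 'YourDB_data1' is a placeholder logical file name.
    ALTER DATABASE YourDB
    MODIFY FILE (NAME = N'YourDB_data1', FILEGROWTH = 2048MB);
    ```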

    1) How do I find out how many file segments there are for data and log? And whether they are serialized (contiguous) or not?

    I did not really understand what you meant by file segments. Did you mean fragmentation?
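    For what it's worth, the closest things to "file segments" you can actually count are the virtual log files (VLFs) inside the transaction log and index fragmentation in the data files. A sketch of how to check both; the database name is a placeholder:

    ```sql
    -- One row per virtual log file (VLF); thousands of rows indicate
    -- a log that grew in many small auto-growth increments.
    DBCC LOGINFO ('YourDB');

    -- Logical fragmentation of every index in the given database.
    SELECT OBJECT_NAME(object_id) AS table_name,
           index_id,
           avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats (DB_ID(N'YourDB'), NULL, NULL, NULL, 'LIMITED')
    ORDER BY avg_fragmentation_in_percent DESC;
    ```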

    2) Should I combine all the data files into one .mdf with a large initial size and restore the database? Would this remove the file segments created by auto-growth?

    Also, are these files located on the same physical disk? Are you experiencing any I/O issues?

    In this database we load a lot of data every day and purge a lot of it, as per the nature of the application

    How come your log is that huge? If you are doing any bulk load operations, have you considered using the bulk-logged recovery model? Also, do you commit your transactions frequently? What is your backup strategy?
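    If the nightly loads qualify as minimally logged operations (BULK INSERT, SELECT INTO, index rebuilds), one option is switching to bulk-logged just for the load window. A sketch, assuming you can accept losing point-in-time recovery during the load; database name and backup path are placeholders:

    ```sql
    -- Before the nightly bulk load: minimally log qualifying operations.
    ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED;

    -- ... run the bulk load here ...

    -- After the load: return to full recovery and take a log backup
    -- to restart the point-in-time recovery chain.
    ALTER DATABASE YourDB SET RECOVERY FULL;
    BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_log.trn' WITH CHECKSUM;
    ```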
