Optimize speed of maintenance tasks

  • Our MDW database (Microsoft Management Data Warehouse) has grown to 300 GB and will probably not grow much bigger. The database is stored on a SQL 2008 R2 server with SAN storage.

    I'd like to shorten the time it takes for a full backup, restore, index maintenance and DBCC CHECKDB.

    The database uses only one filegroup. Will splitting this filegroup into multiple files on different LUNs speed up the backup, restore, reindex and CHECKDB tasks?

    (For the backup I can also create multiple backup devices on different disks.)

    Regards, Marco

  • Start by rebuilding only what you need to rebuild, not everything.

    Use backup compression if you are on Enterprise or Standard edition.
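
    For what that can look like in practice, here is a rough sketch combining compression with the multiple backup devices Marco mentioned (database name and paths are placeholders; COMPRESSION requires SQL 2008 Enterprise or SQL 2008 R2 Standard and up):

    ```sql
    -- Sketch only: database name and file paths are placeholders.
    -- Striping the backup across devices on different disks, combined
    -- with compression, can shorten the backup time considerably.
    BACKUP DATABASE [MDW]
    TO  DISK = N'E:\Backup\MDW_1.bak',
        DISK = N'F:\Backup\MDW_2.bak',
        DISK = N'G:\Backup\MDW_3.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;
    ```

    Note that a striped backup must be restored from all of its devices together, so keep the pieces with each other.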

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gail,

    thanks for your reply.

    We already do a mixture of reorganizing and rebuilding indexes based on their fragmentation level.
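
    For readers following along, the usual starting point for that kind of selective maintenance looks roughly like this (the thresholds are the commonly quoted rules of thumb, not hard rules):

    ```sql
    -- Sketch: list indexes with their fragmentation so you can
    -- reorganize (commonly 5-30%) or rebuild (commonly > 30%)
    -- only the ones that need it.
    SELECT  OBJECT_NAME(ips.object_id)        AS table_name,
            i.name                            AS index_name,
            ips.avg_fragmentation_in_percent,
            ips.page_count
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN    sys.indexes AS i
            ON  i.object_id = ips.object_id
            AND i.index_id  = ips.index_id
    WHERE   ips.avg_fragmentation_in_percent > 5
            AND ips.page_count > 1000;   -- small indexes are rarely worth touching
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```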

    I see that the Standard edition of SQL 2008 R2 also supports backup compression; I didn't know that. For this server it is certainly an option.

    Currently the backup + reindex + DBCC CHECKDB takes about 7 hours to complete. This is still within our maintenance window, so we don't really have a problem right now.

    I'd like to be prepared for databases that are larger than this one (and running on a version that doesn't support backup compression).

    I posted this question because I wanted to know what the experiences of other DBAs are.

    Thanks,

    Marco

  • Since you don't have the Enterprise Edition, you can't do table partitioning. However, you can still build partitioned views, which let you separate the older, mostly static data from the newer data. The older data usually won't need any index maintenance and only occasional checks compared to the new data.

    If you have such a "partitioning scheme" between older and newer data, you may want to create an "Archive" database where you store the older data, moving new data into it only, say, once a week or once a month. Because nothing in the archive changes between those moves, you may be able to skip backups of the archive except right after moving data into it. It'll also shorten your index maintenance, because the newer data is likely the only thing that fragments the indexes.
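
    A minimal sketch of such a partitioned view (all table and column names here are invented for illustration; the archive table could live in the separate "Archive" database as described, referenced with a three-part name):

    ```sql
    -- Sketch: split data by date.  The CHECK constraints let the
    -- optimizer eliminate the table a query doesn't need.
    CREATE TABLE dbo.SalesArchive
    (
        SaleID   int      NOT NULL,
        SaleDate datetime NOT NULL CHECK (SaleDate <  '20120101'),
        Amount   money    NOT NULL,
        CONSTRAINT PK_SalesArchive PRIMARY KEY (SaleID, SaleDate)
    );

    CREATE TABLE dbo.SalesCurrent
    (
        SaleID   int      NOT NULL,
        SaleDate datetime NOT NULL CHECK (SaleDate >= '20120101'),
        Amount   money    NOT NULL,
        CONSTRAINT PK_SalesCurrent PRIMARY KEY (SaleID, SaleDate)
    );
    GO
    CREATE VIEW dbo.Sales
    AS
    SELECT SaleID, SaleDate, Amount FROM dbo.SalesArchive
    UNION ALL
    SELECT SaleID, SaleDate, Amount FROM dbo.SalesCurrent;
    ```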

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff,

    the solution you suggest is a possibility for applications I (or my company) have created. This application (MDW) was created by Microsoft, so it refers to the table names they coded into their application and not to a partitioned view I could create.

    All I want to know is: does splitting my database file across multiple disks give extra performance, or doesn't it help much?

    Thanks for your reply,

    Marco

  • mchofman (2/14/2012)


    All I want to know is: does splitting my database file across multiple disks give extra performance, or doesn't it help much?

    It might, or it might not. It depends on whether your bottleneck is IO or something else, whether you have independent IO channels to the disks, and which portion of the IO subsystem is the bottleneck (assuming you have confirmed that the bottleneck is IO at all).
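
    If it does turn out to be worth testing, adding files to the existing filegroup looks roughly like this (database, file names, paths and sizes are placeholders):

    ```sql
    -- Sketch: add files on separate LUNs to the existing filegroup.
    -- SQL Server fills the files proportionally; existing data is not
    -- redistributed until pages are rewritten (e.g. by index rebuilds).
    ALTER DATABASE [MDW]
    ADD FILE ( NAME = N'MDW_Data2', FILENAME = N'F:\Data\MDW_Data2.ndf',
               SIZE = 100GB, FILEGROWTH = 1GB )
    TO FILEGROUP [PRIMARY];

    ALTER DATABASE [MDW]
    ADD FILE ( NAME = N'MDW_Data3', FILENAME = N'G:\Data\MDW_Data3.ndf',
               SIZE = 100GB, FILEGROWTH = 1GB )
    TO FILEGROUP [PRIMARY];
    ```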

    Personally, I'd start with a combination of backup compression (a 3rd-party tool if your version of SQL does not support it natively), rebuilding just what's needed (and maybe lowering fill factors so that indexes don't fragment as quickly), and perhaps restoring the database elsewhere and running CHECKDB on the restored copy.
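
    The last suggestion, offloading CHECKDB to a restored copy, might look like this (server, file and path names are placeholders; it also doubles as a regular restore test):

    ```sql
    -- Sketch: restore the latest full backup on a secondary server
    -- and run the integrity check there instead of on production.
    RESTORE DATABASE [MDW_Check]
    FROM DISK = N'\\backupshare\MDW_full.bak'
    WITH MOVE N'MDW_Data' TO N'D:\Data\MDW_Check.mdf',
         MOVE N'MDW_Log'  TO N'L:\Log\MDW_Check.ldf',
         REPLACE;

    DBCC CHECKDB (N'MDW_Check') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    ```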

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • mchofman (2/14/2012)


    Jeff,

    the solution you suggest is a possibility for applications I (or my company) have created. This application (MDW) was created by Microsoft, so it refers to the table names they coded into their application and not to a partitioned view I could create.

    The underlying tables would be named something else, but the partitioned view would be named the same as the original table. The app wouldn't have a clue as to what you've done.
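
    A sketch of that rename-and-replace step (the object names here are invented, not actual MDW table names; note that the partitioned-view rules on CHECK constraints and keys must be met if the app also inserts through the view):

    ```sql
    -- Sketch: the app keeps querying dbo.SomeTable, but it is now a
    -- view over the renamed current table and an archive table.
    EXEC sp_rename 'dbo.SomeTable', 'SomeTable_Current';
    GO
    CREATE VIEW dbo.SomeTable
    AS
    SELECT * FROM dbo.SomeTable_Archive
    UNION ALL
    SELECT * FROM dbo.SomeTable_Current;
    ```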

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • However, you do need to check the licensing. With some apps (SharePoint, for example), you cannot tamper with the schema in any way without completely losing support.

    I am curious: why the worry about Management Data Warehouse, which is a database of performance and related data for various servers? It's not what I'd call a production app; its users are likely to be IT staff, not business users, and it's not critical to the business.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gail,

    I don't worry that much about the MDW. It is just one of our "larger" databases, and because it only supports the IT staff it is a nice target for performance tuning, so that we gain more experience with how to handle large databases.

    Currently our large databases are all used for monitoring (SCOM, ForeFront, MDW), but the business users are also starting to use databases that will become large (FileNet applications for document management, which will grow to 4 TB, and EMC SourceOne email archiving, which will grow to 35 TB).

    - Backup compression is something I'm certainly going to try.

    - Partitioning is something I can mention when I'm talking with the FileNet or EMC SourceOne people.

    - Splitting the database and backups across multiple disks is something I'm also going to try in a test environment.

    Regards,

    Marco
