Reindexing VLDB is Causing Massive Replication Changes to DR Site - Question

  • What software and/or processes do people follow when reindexing a 250GB+ index? When our weekend reindex job runs against a 750GB+ database it generates 500GB+ of changed data that must be replicated to our DR site. The replication is performed using a product called Double-Take. Every weekend the mirror fails because it cannot keep up with the amount of changes in a short period of time.

    What products do you use to replicate VLDB database changes to DR sites?

    What is the replication impact of reindexing a 250GB+ index?

    Thanks, Dave

  • You could try changing the recovery model (if your operational policy allows it).

    If you do change the model, be sure to back up the database (after you set it back to FULL) once the maintenance tasks have completed. Otherwise you could lose your recovery chain.
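
    For illustration, a minimal sketch of that pattern, e.g. switching to BULK_LOGGED for the window (YourDB and the backup path are placeholders; test it outside production first):

    -- Switch to minimally logged index operations for the maintenance window.
    -- YourDB is a placeholder database name.
    ALTER DATABASE YourDB SET RECOVERY BULK_LOGGED;

    -- ... run the index maintenance here ...

    ALTER DATABASE YourDB SET RECOVERY FULL;

    -- Take a log backup right away so point-in-time recovery resumes from here.
    -- The backup path is a placeholder.
    BACKUP LOG YourDB TO DISK = N'X:\Backups\YourDB_log.trn';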

  • Unfortunately that's too risky for this system.

    Thanks

  • sql_lock (12/7/2009)


    You could try changing the recovery model (if your operational policy allows it).

    If you do change the model, be sure to back up the database (after you set it back to FULL) once the maintenance tasks have completed. Otherwise you could lose your recovery chain.

    In a mirrored database, the primary database must use the FULL recovery model.

    http://technet.microsoft.com/en-us/library/cc917680.aspx

    Doing this would cause the mirror to fail, which just reproduces the problem described: a forced mirror failure whenever you try to defragment the indexes.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Are you running manual scripts to rebuild indexes, or the standard MS maintenance plan?

    If you're using a script, you could try setting the "SORT_IN_TEMPDB" option. (Note that tempdb always uses the SIMPLE recovery model, so the sort activity is minimally logged there.)

    To get the rebuild-time benefit from this, make sure tempdb sits on different disks from the data files, so the sort I/O doesn't hit the data disks. Also make sure tempdb is sized correctly.

    A good MS article gives some further recommendations:

    http://msdn.microsoft.com/en-us/library/ms188281.aspx
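
    As a rough sketch, a rebuild that pushes the sort work into tempdb looks like this (the table and index names are placeholders):

    -- Rebuild with the intermediate sort results written to tempdb instead of
    -- the user database. dbo.BigTable and IX_BigTable_Clustered are placeholders.
    ALTER INDEX IX_BigTable_Clustered ON dbo.BigTable
    REBUILD WITH (SORT_IN_TEMPDB = ON);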

  • DBADave (12/7/2009)


    What software and/or processes do people follow when reindexing a 250GB+ index? When our weekend reindex job runs against a 750GB+ database it generates 500GB+ of changed data that must be replicated to our DR site. The replication is performed using a product called Double-Take. Every weekend the mirror fails because it cannot keep up with the amount of changes in a short period of time.

    What products do you use to replicate VLDB database changes to DR sites?

    What is the replication impact of reindexing a 250GB+ index?

    Thanks, Dave

    750GB Database

    250GB Index (1 index that is 250GB of the 750GB DB?)

    A common practice is to use a script that determines the fragmentation percentage for each index: if an index is fragmented more than x%, reorganize it; if more than y%, rebuild it; otherwise do nothing. It is also common to perform this maintenance on a subset of tables (table by table, a group of tables, or all tables at once). Have you tried breaking this down so the big table is processed individually and the others run on a separate schedule? (A minimal sketch of such a script follows below.)

    Also, as suggested, you might try the SORT_IN_TEMPDB option.

    Also, with the size of that index/table, is there any chance of normalizing the table to reduce the impact of index defrags?
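
    For illustration, a minimal sketch of that kind of script (SQL Server 2005+; the 10/30 percent and 100-page thresholds are the common starting points, not a recommendation for your system):

    -- Decide REORGANIZE vs REBUILD per index from sys.dm_db_index_physical_stats.
    DECLARE @sql nvarchar(max);

    DECLARE index_cur CURSOR LOCAL FAST_FORWARD FOR
        SELECT N'ALTER INDEX ' + QUOTENAME(i.name)
             + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id))
             + N'.' + QUOTENAME(OBJECT_NAME(ips.object_id))
             + CASE WHEN ips.avg_fragmentation_in_percent > 30.0
                    THEN N' REBUILD WITH (SORT_IN_TEMPDB = ON);'
                    ELSE N' REORGANIZE;'
               END
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent >= 10.0
          AND ips.page_count > 100
          AND i.name IS NOT NULL;   -- skip heaps

    OPEN index_cur;
    FETCH NEXT FROM index_cur INTO @sql;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM index_cur INTO @sql;
    END
    CLOSE index_cur;
    DEALLOCATE index_cur;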

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Yes. One 250GB clustered index. It's a financial system used in the trading industry, so we have no control over the design.

    SORT_IN_TEMPDB is used, but that only works with REBUILD. Our script does the standard set of checks.

    IF @Avg_Fragmentation_In_Percent BETWEEN 10.0 AND 30.0 AND @Page_Count > 100
        REORGANIZE
    ELSE IF @Avg_Fragmentation_In_Percent > 30.0 AND @Page_Count > 100
        REBUILD

    We considered always using REBUILD on the large indexes, so as to take advantage of SORT_IN_TEMPDB. We're going to give that another try in our test environment, because neither of us can recall why we pulled the plug on that option when we attempted it about 4-5 months ago. We know at one time there wasn't enough tempdb space, but we've increased the size to 400GB, so hopefully that will help.

    A Microsoft engineer suggested we consider setting the database to BULK_LOGGED prior to reindexing, saying that several very large and well-known companies use this approach. That option was too risky for us to consider.

    Thanks, Dave

  • DBADave (12/7/2009)


    Yes. One 250GB clustered index. It's a financial system used in the trading industry, so we have no control over the design.

    SORT_IN_TEMPDB is used, but that only works with REBUILD. Our script does the standard set of checks.

    IF @Avg_Fragmentation_In_Percent BETWEEN 10.0 AND 30.0 AND @Page_Count > 100
        REORGANIZE
    ELSE IF @Avg_Fragmentation_In_Percent > 30.0 AND @Page_Count > 100
        REBUILD

    ...

    Thanks, Dave

    Out of curiosity, are you seeing index defragmentation/performance gains at the 100-page threshold that you use?

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
    Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Regarding page counts, we haven't checked to see what benefits, if any, we get from defragmenting an index with a low page count such as 100. That would be a bit tricky to track with this system given the number of objects it contains. We used to use 500 or even 1000 as the low-end page count, but I read an article by Kalen Delaney earlier this year about how heavy fragmentation in a table with a small number of pages can lead to performance issues, depending on the situation. I believe it had to do with joins against much larger tables, where the highly fragmented small table could slow down the JOIN. Don't quote me on that. I'll see if I can locate the article. If we do check, a query like the one below should tell us which of the small indexes are actually fragmented.
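
    A quick sketch of that check (the 500-page and 30% thresholds are just placeholders):

    -- List small indexes with their fragmentation, to judge whether
    -- defragmenting them is worth the replication churn.
    SELECT OBJECT_SCHEMA_NAME(ips.object_id) AS schema_name,
           OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.page_count,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.page_count <= 500
      AND ips.avg_fragmentation_in_percent > 30.0
    ORDER BY ips.avg_fragmentation_in_percent DESC;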

    Dave
