What affects the time it takes to create an index?

  • I have an ETL process whose processing time varies wildly with no apparent cause. I am looking at a graph of the time it takes to create the indexes on the ETL'd table, to see if they are the cause of the performance differences, and I see the duration of each individual index spiking at what seems like random intervals. They don't spike on the same days, or even on the same days of the week. Can anyone tell me the kinds of things that affect how long an index build takes to complete? For example, tempdb size? I want to see if those factors are affecting these indexes at different times. If there are other pieces of information you need to help frame an answer, please let me know. BTW, the indexes are created in the same order each time. The table has 118 million rows.

  • dbodell (9/20/2016)


    I have an ETL process whose processing time varies wildly with no apparent cause. I am looking at a graph of the time it takes to create the indexes on the ETL'd table, to see if they are the cause of the performance differences, and I see the duration of each individual index spiking at what seems like random intervals. They don't spike on the same days, or even on the same days of the week. Can anyone tell me the kinds of things that affect how long an index build takes to complete? For example, tempdb size? I want to see if those factors are affecting these indexes at different times. If there are other pieces of information you need to help frame an answer, please let me know. BTW, the indexes are created in the same order each time. The table has 118 million rows.

    Are you dropping and recreating the index every time the job runs?

    Are you truncating and repopulating the data every time the job runs, or simply appending new data to the existing rows?

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

  • I am dropping and recreating the indexes each time. I am using MERGE to update and insert records. Hope that helps.

  • dbodell (9/20/2016)


    I am dropping and recreating the indexes each time. I am using MERGE to update and insert records. Hope that helps.

    If you are dropping the index before doing the MERGE, that's a huge potential overhead. Have you tried doing the MERGE without dropping and creating the index?

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin

  • You know, that is an interesting comment, because I recently changed the code to use MERGE instead of separate INSERT and UPDATE processes. Removing and rebuilding the indexes was pretty much required for performance, with 118 million records in the table and four non-clustered indexes over and above the primary key. Is there a difference in how MERGE works that would make it worse to drop and rebuild the indexes? If so, that could be the key to the overall improvement here. But if not, do you have any tips on the kinds of resources that would affect index building (e.g. tempdb being too small, not enough CPU, etc.)? I know it is a very broad question, so I'm not looking for very specific situations.

  • dbodell (9/20/2016)


    You know, that is an interesting comment, because I recently changed the code to use MERGE instead of separate INSERT and UPDATE processes. Removing and rebuilding the indexes was pretty much required for performance, with 118 million records in the table and four non-clustered indexes over and above the primary key. Is there a difference in how MERGE works that would make it worse to drop and rebuild the indexes? If so, that could be the key to the overall improvement here. But if not, do you have any tips on the kinds of resources that would affect index building (e.g. tempdb being too small, not enough CPU, etc.)? I know it is a very broad question, so I'm not looking for very specific situations.

    Your MERGE statement is likely to perform more slowly than separate, well-crafted INSERT and UPDATE statements.

    MERGE also has some bugs (link). I'd suggest that you consider reverting to INSERT/UPDATE.

    Your INSERT and UPDATE queries should be tuned separately (a rough sketch of that pattern is below). In order to help with that, we'd need to see the queries, the actual execution plans and the DDL for the underlying tables.

    The absence of evidence is not evidence of absence
    - Martin Rees
    The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
    - Phil Parkin
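
    A minimal sketch of the separate UPDATE-then-INSERT pattern described above, using hypothetical staging and target tables keyed on BusinessKey (the real queries would use your own tables, keys and columns):

        -- Hypothetical object names; adjust to your own schema.
        BEGIN TRANSACTION;

        -- Update rows that already exist in the target.
        UPDATE tgt
        SET    tgt.SomeValue = src.SomeValue,
               tgt.LoadDate  = src.LoadDate
        FROM   dbo.TargetTable  AS tgt
        JOIN   dbo.StagingTable AS src
               ON src.BusinessKey = tgt.BusinessKey;

        -- Insert rows that do not yet exist in the target.
        INSERT INTO dbo.TargetTable (BusinessKey, SomeValue, LoadDate)
        SELECT src.BusinessKey, src.SomeValue, src.LoadDate
        FROM   dbo.StagingTable AS src
        WHERE  NOT EXISTS (SELECT 1
                           FROM   dbo.TargetTable AS tgt
                           WHERE  tgt.BusinessKey = src.BusinessKey);

        COMMIT;

    Each statement can then be tuned, and its actual plan inspected, on its own.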

  • dbodell (9/20/2016)


    You know, that is an interesting comment, because I recently changed the code to use MERGE instead of separate INSERT and UPDATE processes. Removing and rebuilding the indexes was pretty much required for performance, with 118 million records in the table and four non-clustered indexes over and above the primary key. Is there a difference in how MERGE works that would make it worse to drop and rebuild the indexes? If so, that could be the key to the overall improvement here. But if not, do you have any tips on the kinds of resources that would affect index building (e.g. tempdb being too small, not enough CPU, etc.)? I know it is a very broad question, so I'm not looking for very specific situations.

    Note that doing a separate INSERT and UPDATE is often faster than a MERGE. See this article:

    Performance of the SQL MERGE vs. INSERT/UPDATE.

    You can review the actual execution plan and the I/O statistics of an index build or update just like any other query. You can also examine things like memory pressure, CPU pressure and parallelism using SET STATISTICS IO and SET STATISTICS TIME, and by capturing the execution plans with Extended Events (a sketch is below). Sometimes you get a tempdb spill, which means the sort has to be written out to disk because there isn't enough memory granted to do it in memory. This is the kind of information that will lead you in the right direction.

    "I cant stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001
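
    As a minimal sketch of the above, assuming a hypothetical staging table dbo.ETL_Stage and index name IX_ETL_Stage_SomeColumn; the statistics output and the actual plan show the I/O, elapsed time and any sort-spill warnings for the build:

        -- Hypothetical object names and illustrative options only.
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- Capture the actual execution plan for the build (e.g. "Include Actual
        -- Execution Plan" in SSMS); tempdb spills appear as warnings on the Sort operator.
        CREATE NONCLUSTERED INDEX IX_ETL_Stage_SomeColumn
            ON dbo.ETL_Stage (SomeColumn)
            WITH (SORT_IN_TEMPDB = ON, MAXDOP = 4);

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;

        -- Optional: an Extended Events session that records tempdb sort spills.
        CREATE EVENT SESSION TrackSortSpills ON SERVER
        ADD EVENT sqlserver.sort_warning
        ADD TARGET package0.event_file (SET filename = N'TrackSortSpills');

        ALTER EVENT SESSION TrackSortSpills ON SERVER STATE = START;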

  • Whenever a process (whether it be a bulk insert, a select, a create index, or whatever) occasionally takes significantly longer to complete, it's usually due to blocking. When CREATE INDEX runs with ONLINE = OFF (the default), it takes an exclusive Schema modification (Sch-M) lock on the table and holds it for the duration of the operation. The default ONLINE = OFF mode is faster, assuming it can gain an exclusive hold on the table. However, if it can't acquire that exclusive lock, perhaps because another process is holding a shared or update lock, then it sits in a blocked state until the lock is granted. Alternatively, if you CREATE INDEX with the ONLINE = ON option, it holds only an Intent Shared (IS) lock on the table.

    You can confirm the issue by running SP_WHO2 (or querying the DMVs, as sketched below) while the create index operation is running, to determine whether it is stuck in a blocked state, or you can start a trace to monitor blocking events.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Thanks for the info. Yeah, before I did the MERGE replacement, I did a lot of research on the differences and saw the disadvantages and bugs. I did a test on a development server and the ETL was so significantly faster that I thought I would try it out for a few days. In general, performance in production has been better using MERGE, contrary to what I expected. I'm actually pretty good at tuning the individual processes and have, in general, gotten to execution plans that look great. But the variability is what is killing me. One day it will finish by 7am and the next day take two hours longer. I've got traces running, I put a timestamp table out to record the start and end times of each part of the process, and I have graphed it by date and step to see if there are any correlations between long run times and days of the week, etc. Of course I need to know what other processes are running, so I have our DBA writing snapshots into a table, SQLMONITORDBA, that shows locks, wait stats, what is running, duration, etc. I don't see anything jumping out at me that would delay that specific process. None of the locks are on tables the process uses, nor do they even occur at the time it is running. I do have lots of CXPACKET waits, but I think that is due to multiple threads being processed, which is a good thing, right? I don't know which type of CXPACKET wait they are (a wait-snapshot query along those lines is sketched below).

    I very much appreciate your time and discussion on this. I will almost certainly go back to the UPDATE/INSERT method and hope things improve.

    But that wouldn't affect the index builds...
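
    A sketch of the kind of snapshot that monitoring table could capture (illustrative only): which sessions are waiting, on what, for how long, and whether anything is blocking them; CXPACKET here usually just means the query went parallel:

        SELECT wt.session_id,
               wt.wait_type,              -- CXPACKET usually just indicates parallelism
               wt.wait_duration_ms,
               wt.blocking_session_id,
               r.command
        FROM   sys.dm_os_waiting_tasks AS wt
        JOIN   sys.dm_exec_requests    AS r
               ON r.session_id = wt.session_id
        WHERE  wt.session_id > 50;        -- skip most system sessions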
