Possible issues with refreshing a table 4 times per minute

  • Hi *,

    I'm hoping this is the right place to post this. I'm performance tuning a stored procedure that pulls data from a lot of tables, runs it through some business processing and "data crunching", and outputs a result set that is used as a base for other stored procedures. I've managed to take it down from 9 minutes to 3.5 seconds (sounds good), but the business team wants a maximum of 1 second.

    To get there, I'm thinking of persisting the result set of this procedure in a table and refreshing that table every 15 seconds. Currently the procedure outputs around 1,000 records with 12 columns. The data is unlikely to change completely between two executions, but individual records may have column values change from one run to the next.

    Since my client is still on a SQL Server 2008 R2 box, I can't use any in-memory capabilities to speed things up.

    I'm trying to think of what might go wrong, if anything, with refreshing the table like that. I'll always truncate the table and then re-insert, roughly as in the sketch below. Any ideas, concerns, or alternatives are very much welcome!
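
    For illustration, this is roughly the refresh I have in mind. All object names are placeholders, and the staging-table swap is just a sketch: loading into a staging table first keeps the transaction that touches the live table very short. A plain truncate + insert inside the transaction would also work, just with a longer window where readers block:

      BEGIN TRY
          BEGIN TRANSACTION;

          -- Load fresh results into the (empty) staging table;
          -- its columns must match the proc's 12 output columns.
          INSERT INTO dbo.ProcResults_Staging
          EXEC dbo.usp_CrunchData;

          -- Empty the live table, then switch the staging data in.
          -- A SWITCH between identically structured tables is metadata-only.
          TRUNCATE TABLE dbo.ProcResults;
          ALTER TABLE dbo.ProcResults_Staging SWITCH TO dbo.ProcResults;

          COMMIT;
      END TRY
      BEGIN CATCH
          IF @@TRANCOUNT > 0 ROLLBACK;

          DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
          RAISERROR(@msg, 16, 1);
      END CATCH;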

    For performance tuning I've used temp tables where meaningful, indexed the base tables accordingly, and forced parallelism where it could speed things up.
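
    As an example of the temp table pattern (invented names, just to show the shape): materialise an expensive intermediate result once, index it, and let the later steps reuse it.

      -- Compute the expensive aggregate once...
      SELECT   o.CustomerID,
               SUM(o.Amount) AS TotalAmount
      INTO     #CustomerTotals
      FROM     dbo.Orders AS o
      WHERE    o.OrderDate >= DATEADD(DAY, -30, GETDATE())
      GROUP BY o.CustomerID;

      -- ...index it so later joins can seek instead of scan...
      CREATE UNIQUE CLUSTERED INDEX CX_CustomerTotals
          ON #CustomerTotals (CustomerID);

      -- ...and have subsequent queries join to #CustomerTotals
      -- instead of recomputing the aggregate.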

    In the end, the breakdown of those 3.5 seconds is four queries averaging 0.5 - 0.6 seconds each and 51 more queries taking up the remaining second. So I'm not sure I can squeeze much more out of query tuning, which is why I'm thinking of this constant refresh instead.

    I've also thought about indexed views (with a unique clustered index), but since there's a lot of "data crunching" going on in the procedure, I don't see any way to make use of them.
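
    For context, this is the minimal shape of an indexed view (made-up names). The SCHEMABINDING requirement and the restrictions on what the view may contain (no outer joins, subqueries, UNION, etc.) are what rule it out for multi-step crunching like mine:

      CREATE VIEW dbo.vw_SalesByProduct
      WITH SCHEMABINDING
      AS
      SELECT   s.ProductID,
               SUM(s.Quantity) AS TotalQty,
               COUNT_BIG(*)    AS RowCnt   -- mandatory in an aggregated indexed view
      FROM     dbo.Sales AS s
      GROUP BY s.ProductID;
      GO

      -- The unique clustered index is what actually materialises the view.
      CREATE UNIQUE CLUSTERED INDEX CX_vw_SalesByProduct
          ON dbo.vw_SalesByProduct (ProductID);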

    So, this is where I'm at right now. In conclusion, I'm hoping to find help with:

    1) concerns regarding refreshing a persisted table so often (the recovery model is FULL; see the log-usage sketch after this list)

    or

    2) other possible improvements for query tuning to get me under the 1-second threshold (without going down the hardware change / upgrade route)
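
    Regarding 1), this is how I was planning to eyeball the log impact of one refresh cycle; the refresh procedure name is a placeholder:

      DBCC SQLPERF(LOGSPACE);            -- note "Log Space Used (%)" for this database

      EXEC dbo.usp_RefreshProcResults;   -- one truncate-and-reload cycle

      DBCC SQLPERF(LOGSPACE);            -- compare against the first reading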

    Thank you very much in advance! (for reading this and replying)

    ~ Just some guy trying to tune queries ~

  • "In the end the breakdown of those 3.5 seconds is 4 queries taking an average of 0.5 - 0.6 seconds each"

    Can you post the execution plans of these four queries? It's quite likely that someone around here could shave another couple of seconds off.
    Actual plans, as opposed to estimated ones, attached as .sqlplan files.
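
    If the "Include Actual Execution Plan" button in SSMS is awkward for a multi-statement procedure, something like this works too (procedure name is a placeholder). Each statement returns its post-execution plan as XML, which can be saved as a .sqlplan file:

      SET STATISTICS XML ON;

      EXEC dbo.YourProcedure;   -- the procedure being tuned

      SET STATISTICS XML OFF;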

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY (Parts I and II) - Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop - Jeff Moden

  • I've attached the actual execution plan containing the top six most time-consuming queries, plus an Excel file with more information about these queries, pulled from a couple of DMVs. One note on the Excel file: the query highlighted in yellow runs in parallel, which is why its CPU time is much greater than its elapsed time.

    The queries I initially tuned are still the top ones in the Excel file, but they now run in about 0.6 seconds each instead of the couple of minutes they took initially.
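
    For reference, the numbers in the Excel file came from roughly this shape of DMV query (simplified):

      SELECT TOP (10)
             qs.execution_count,
             qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_elapsed_ms,
             qs.total_worker_time  / qs.execution_count / 1000.0 AS avg_cpu_ms,
             SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
      FROM   sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_elapsed_time DESC;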

    Edit:

    I noticed, after posting the reply and taking another look at the plans, that there was a RID Lookup in my fourth query, executing for ~53k rows. I added that column to an appropriate index, but it only shaved 0.1 seconds off the query's execution time.
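
    For the record, the fix was a covering index along these lines (hypothetical names; the real table and columns differ):

      CREATE NONCLUSTERED INDEX IX_BaseTable_SeekCol
          ON dbo.BaseTable (SeekCol)
          INCLUDE (LookedUpCol);   -- carries the column the RID Lookup was fetching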

    ~ Just some guy trying to tune queries ~
