Stored procedure performance issues

  • I've been tasked with improving the performance of a colleague's stored procedure, and could really do with some advice to make sure I don't lead them down the wrong path.

    1. The stored procedure creates a temp table with a clustered index, populated from the output of a table-valued function (the input for this function is one of the stored procedure's parameters).
    2. Once that is created, the SP then selects data into another temp table, using 4 joins plus a sub-query against the temp table from step 1. This second temp table also gets a clustered index.
    3. A table variable is then declared and filled using a select with 12 joins (I know..... one of the joins is to the temp table created in step 2). The where clause also uses a sub-query (the same temp table as before) to check whether one of the fields is IN it, and there is an ORDER BY after the where clause. The results are then output to a report in the app; none of this data gets stored in a permanent table.
    4. All temp tables are dropped.

    I have a few ideas in mind but I don't want to waste people's time if I end up choosing the wrong one. Any help on this is greatly appreciated.
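    For reference, here's a very rough sketch of the overall shape (all object and column names below are made-up placeholders, not the real ones):

    -- Step 1: temp table with a clustered index, loaded from the table-valued function
    CREATE TABLE #Filter (KeyId int NOT NULL);
    CREATE CLUSTERED INDEX CIX_Filter ON #Filter (KeyId);
    INSERT INTO #Filter (KeyId)
    SELECT KeyId FROM dbo.fnGetKeys(@Param);

    -- Step 2: second temp table (4 joins in the real proc), also given a clustered index
    SELECT a.Id, a.Col1, b.Col2
    INTO #Stage
    FROM dbo.TableA AS a
    JOIN dbo.TableB AS b ON b.Id = a.Id
    WHERE a.KeyId IN (SELECT KeyId FROM #Filter);
    CREATE CLUSTERED INDEX CIX_Stage ON #Stage (Id);

    -- Step 3: table variable filled by a 12-join select, then returned to the report
    DECLARE @Results TABLE (Id int, Col1 int, Col2 int);
    INSERT INTO @Results (Id, Col1, Col2)
    SELECT s.Id, s.Col1, c.Col2
    FROM #Stage AS s
    JOIN dbo.TableC AS c ON c.Id = s.Id
    WHERE s.Id IN (SELECT KeyId FROM #Filter)
    ORDER BY s.Col1;
    SELECT Id, Col1, Col2 FROM @Results;

    -- Step 4: clean-up
    DROP TABLE #Stage;
    DROP TABLE #Filter;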

  • Without the code, structures, or execution plans, I can only make vague suggestions.

    Sounds like we're moving a lot of data around that could simply be filtered within a query. That's not to say that breaking stuff down into smaller data sets through the use of temp tables isn't a valuable tool in the toolbox, but that, from the description, it may be overused. Especially concerning is the final query. If I understood what you said, it's loading stuff into a table variable and then returning data from that? Why not just run the query that loads the table variable and be done?
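    For example (simplified, with made-up names), instead of this:

    DECLARE @Results TABLE (Id int, Amount money);
    INSERT INTO @Results (Id, Amount)
    SELECT o.Id, o.Amount
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId;
    SELECT Id, Amount FROM @Results;

    ...just return the query itself and skip the extra write and read:

    SELECT o.Id, o.Amount
    FROM dbo.Orders AS o
    WHERE o.CustomerId = @CustomerId;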

    In order to get more specific, I'd need more detail.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I will add to what Grant has stated - this output is being sent to a report.  Using ORDER BY in the query is only useful if the report is using that order.  If the report changes the order or has its own defined order then sorting the data in SQL Server is just wasted time.

    Now - if that ORDER BY is where I think it is...this most likely is not doing what is expected.  If that ORDER BY is on the select statement that is inserting the data into the table variable, then there is no guarantee the data will come back in that order when you select from the table variable.
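    A quick illustration (made-up names):

    DECLARE @Results TABLE (Id int, Amount money);

    -- The ORDER BY here only controls the order the rows are inserted;
    -- it does NOT guarantee the order they come back out in.
    INSERT INTO @Results (Id, Amount)
    SELECT Id, Amount FROM dbo.Orders ORDER BY Amount DESC;

    SELECT Id, Amount FROM @Results;                        -- order not guaranteed
    SELECT Id, Amount FROM @Results ORDER BY Amount DESC;   -- order guaranteed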


    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • I agree with Grant and Jeffrey. In addition, I would check the indexes used in the execution plan. Compare the 'estimated' and 'actual' row counts, which can tell you whether the query is using the correct index or not.
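    A simple way to get at those numbers (the proc name and parameter below are placeholders) is to capture the actual plan while running it:

    -- Returns the actual execution plan XML alongside the results;
    -- compare the estimated vs actual row counts on each operator.
    SET STATISTICS XML ON;
    EXEC dbo.YourStoredProcedure @Param = 123;
    SET STATISTICS XML OFF;

    Or just turn on "Include Actual Execution Plan" in SSMS before executing.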

  • Not enough details to get very specific, obviously.

    However, these rules will always apply:

    1. Create the clus index on all temp tables before loading them.  There are techniques for doing this even if a table is created using a SELECT ... INTO (and it is, in fact, even more critical then, since that method holds locks on some internal structures and can cause delays).  If you need help with how best to create a clus index before loading, just let us know.  Use a multi-key clus key whenever it's applicable; don't worry about how long the clus key is if it matches your query requirements.  There's a quick sketch covering both points after point 2 below.

    You can probably use this in your proc, but it is more of an "it depends" thing:

    2. Clus the tables on the joining column(s), especially if the join columns are the same on multiple tables.  For example, if you have #temp1, #temp2 and #temp3 and they all use the same int column to join, clus all the tables on that int first.  Say one of those tables also joins to another table on that int plus another column; clus that table on (int, other_column).
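    A quick sketch of both points (made-up names): pre-create each temp table and its clus index before loading, and put the shared joining column first in the clus key:

    -- #temp1 and #temp2 both join on CustomerId, so both are clustered on it;
    -- #temp2 also joins on OrderId, so its clus key is (CustomerId, OrderId).
    CREATE TABLE #temp1 (CustomerId int NOT NULL, Region varchar(30) NOT NULL);
    CREATE CLUSTERED INDEX CIX_temp1 ON #temp1 (CustomerId);

    CREATE TABLE #temp2 (CustomerId int NOT NULL, OrderId int NOT NULL, Amount money NOT NULL);
    CREATE CLUSTERED INDEX CIX_temp2 ON #temp2 (CustomerId, OrderId);

    -- Load AFTER the clus indexes exist, rather than using SELECT ... INTO
    INSERT INTO #temp1 (CustomerId, Region)
    SELECT CustomerId, Region FROM dbo.Customers;

    INSERT INTO #temp2 (CustomerId, OrderId, Amount)
    SELECT CustomerId, OrderId, Amount FROM dbo.Orders;

    SELECT t1.CustomerId, t1.Region, t2.OrderId, t2.Amount
    FROM #temp1 AS t1
    JOIN #temp2 AS t2 ON t2.CustomerId = t1.CustomerId;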

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".

  • DBA on Route wrote:

    A table variable is then declared and filled

    We replaced Temp Tables with Table Variables when they first came out because we assumed they must be wonderful 🙂 ... then it came to light that large amounts of data didn't suit them and in some situations they were way more inefficient.

    Given that your job starts off with #TEMP tables, and then pops stuff into @TableVar ... it might be that the @TableVar bit is a bad choice (easy enough to change the TableVar to a Temp Table and see if that makes any difference) ... if the actual number of rows in the TableVar / Report output is small then it probably won't make a difference.
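    The swap itself is trivial (made-up names):

    -- Before: table variable
    DECLARE @Results TABLE (Id int NOT NULL, Amount money NOT NULL);
    INSERT INTO @Results (Id, Amount)
    SELECT Id, Amount FROM dbo.Orders;
    SELECT Id, Amount FROM @Results;

    -- After: temp table - gets real statistics and can be indexed,
    -- which tends to matter once the row counts get large
    CREATE TABLE #Results (Id int NOT NULL, Amount money NOT NULL);
    INSERT INTO #Results (Id, Amount)
    SELECT Id, Amount FROM dbo.Orders;
    SELECT Id, Amount FROM #Results;
    DROP TABLE #Results;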

  • Further suggestion, hopefully the learned people here won't be horrified?

    SET STATISTICS IO ON; SET STATISTICS TIME ON

    ... statements to be tested here ...

    GO
    SET STATISTICS IO OFF; SET STATISTICS TIME OFF
    GO

    and then look for anything with high numbers of SCANS and LOGICAL READS (ignore all the Physical values) and try things with one eye on whether those numbers come down.

