Query Performance, possible bad statistics/query plan?

  • More info:

    I rebuilt statistics on the three tables referenced in the original post.

    I recreated a duplicate of the proc, except without assigning the parameters to local variables. Both versions take 7+ minutes to run.

    In both cases the estimated row count is still about 1/7 of the actual number of rows. I've recompiled the proc, I've run the proc WITH RECOMPILE, and I've run sp_recompile against the tables whose statistics were rebuilt... all with the same result.

    What the above tells me is that I'm starting to guess, which isn't helping me be logical and definitive in my approach to solving this.

    Crud.
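
    A minimal sketch of the kind of maintenance described above, assuming the three tables are among the large ones listed later in the thread; the proc name and parameter value here are hypothetical:

        -- Rebuild statistics on the (assumed) three tables with a full scan
        UPDATE STATISTICS dbo.tbldChrg WITH FULLSCAN;
        UPDATE STATISTICS dbo.tbldPaym WITH FULLSCAN;
        UPDATE STATISTICS dbo.tbldAdj  WITH FULLSCAN;

        -- Mark any cached plans that reference a table for recompilation
        EXEC sp_recompile N'dbo.tbldChrg';

        -- Force a one-off recompile of the proc itself (hypothetical proc name and value)
        EXEC dbo.usp_ContractReport @IDContract = 12345 WITH RECOMPILE;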

  • Try using the OPTIMIZE FOR query hint, just to see if that will create a good plan.

    Is there something 'unusual' about the IDContract value?



    Clear Sky SQL
    My Blog
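
    A hedged illustration of the OPTIMIZE FOR suggestion above; only @IDContract and the tbldChrg table name come from the thread, while the proc name, columns, and the literal value are placeholders:

        CREATE PROCEDURE dbo.usp_ContractReport_Test      -- hypothetical name
            @IDContract INT
        AS
        BEGIN
            SELECT c.IDChrg, c.ChrgAmt                    -- hypothetical columns
            FROM   dbo.tbldChrg AS c
            WHERE  c.IDContract = @IDContract
            OPTION (OPTIMIZE FOR (@IDContract = 12345));  -- compile the plan for a representative value
        END;

    SQL Server 2008 also accepts OPTION (OPTIMIZE FOR UNKNOWN), which optimizes for average density rather than a sniffed parameter value.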

  • I tried it; nothing worth noting changed. There's nothing abnormal about the IDContract value.

    When I run the proc with IO statistics on, I get the following lines, which seem troubling (the LOB read columns are truncated since they're all zeros anyway; the Worktable line is the one that alarms me):

    Table 'Worktable'. Scan count 7116008, logical reads 53304197, physical reads 0, read-ahead reads 0

    Table 'tbldEncntr'. Scan count 2, logical reads 29960, physical reads 53, read-ahead reads 14975

    Table 'tbldAdj'. Scan count 1, logical reads 72430, physical reads 166, read-ahead reads 72407

    Table 'tbllAdjCd'. Scan count 1, logical reads 8, physical reads 2, read-ahead reads 5

    Table 'tbldChrg'. Scan count 4, logical reads 123750, physical reads 189, read-ahead reads 61863

    Table 'tbllSite'. Scan count 2, logical reads 8, physical reads 0, read-ahead reads 0

    Table 'tbllSiteGrp'. Scan count 3, logical reads 6, physical reads 0, read-ahead reads 0

    Table 'tbldPaym'. Scan count 1, logical reads 78286, physical reads 197, read-ahead reads 78262

    Table 'tblsSitePaycl_Perf'. Scan count 1, logical reads 138, physical reads 5, read-ahead reads 134

    That Worktable scan count is off-the-charts high. I assume worktables are where data is accumulated and sorted?
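
    For context, output like the lines above is typically captured with session-level statistics settings; the proc name and parameter value below are hypothetical:

        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        EXEC dbo.usp_ContractReport @IDContract = 12345;

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;

    'Worktable' in that output refers to internal tempdb structures the engine builds for spools, sorts, and hash operations, not to any user table; a scan count in the millions usually means a spool is being replayed once per outer row.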

  • Ack!! I just looked at the code you posted. Cursors! Have you looked at rewriting the cursor-based code as set-based?

    And, no, I haven't started looking at the code in depth; that is just what jumped out at me at first glance.

    Edit: On second glance, I do have to say this: at least it is well formatted and easy to read. But still, there is extensive use of cursors, and that may well be what is killing your performance. There is a lot of RBAR (a Modenism for Row By Agonizing Row) processing going on.
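
    A generic sketch of the row-by-row versus set-based pattern being described; the tables, columns, and filter value are placeholders rather than the code from this thread:

        -- Row-by-row (RBAR) version:
        DECLARE @ID INT, @Amt MONEY;

        DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
            SELECT IDChrg, ChrgAmt FROM dbo.tbldChrg WHERE IDContract = 12345;
        OPEN cur;
        FETCH NEXT FROM cur INTO @ID, @Amt;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            UPDATE dbo.tbldPaym SET PaymAmt = @Amt WHERE IDChrg = @ID;  -- one statement per row
            FETCH NEXT FROM cur INTO @ID, @Amt;
        END;
        CLOSE cur;
        DEALLOCATE cur;

        -- Equivalent set-based statement (one pass instead of one round trip per row):
        UPDATE p
        SET    p.PaymAmt = c.ChrgAmt
        FROM   dbo.tbldPaym AS p
        JOIN   dbo.tbldChrg AS c ON c.IDChrg = p.IDChrg
        WHERE  c.IDContract = 12345;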

  • Lynn,

    I don't know if I'd go so far as to say 'extensive', but yes, there are two cursors used in the code. I know they need to be removed; however, please also keep in mind that:

    1) This is old code (read: I didn't author it, and it's not a query that has never run correctly). In this case I've picked one proc out of many whose performance has fallen through the floor since an upgrade to SQL 2008. This particular proc would run in 2 minutes pre-upgrade and now takes 7-8 minutes post-upgrade.

    2) The cursors in this case aren't likely the cause of the problems I'm experiencing, for two reasons: first, this same code ran more efficiently before the 2008 upgrade; second, in 99% of the cases where this proc is run, the loop set used by the cursor contains only one value.

    Now that being said, I'm going to create a test version of the proc without cursors and see how that works.

    Thanks,

    Bob McC

    We had a performance issue with our PeopleSoft Finance application when we upgraded from SQL Server 2000 to SQL Server 2005. Long story short, there was a table used in one query pulling back open POs that had no indexes defined. The application had no problems under SQL Server 2000, but we started experiencing application timeouts under SQL Server 2005. Once we identified the table and indexed it properly, the problem was gone. It took weeks before we discovered this.

    Have you checked that all the indexes are appropriate? Things can change between versions, and what worked under SQL Server 2000 may not work the same under SQL Server 2008.
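
    One way to sanity-check indexing after an upgrade is the missing-index DMVs available in SQL Server 2005 and later; this is a generic query, not tuned to the proc in question:

        -- Tables/columns the optimizer wanted an index on, weighted by how often and how much it would help
        SELECT   d.statement                AS table_name,
                 d.equality_columns,
                 d.inequality_columns,
                 d.included_columns,
                 s.user_seeks,
                 s.avg_user_impact
        FROM     sys.dm_db_missing_index_details     AS d
        JOIN     sys.dm_db_missing_index_groups      AS g ON g.index_handle = d.index_handle
        JOIN     sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
        WHERE    d.database_id = DB_ID()
        ORDER BY s.avg_user_impact * s.user_seeks DESC;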

  • I'm also seeing the use of table variables. Do you know, on average, how many rows of data may be inserted into each of these tables? As the number of rows increases, the chance of getting a poor-performing execution plan increases, because the Query Optimizer assumes that table variables contain only one row even if they hold several thousand.
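
    A small illustration of that one-row assumption, using a hypothetical table variable loaded from one of the tables mentioned above:

        DECLARE @Chrg TABLE (IDChrg INT PRIMARY KEY, ChrgAmt MONEY);

        INSERT INTO @Chrg (IDChrg, ChrgAmt)
        SELECT IDChrg, ChrgAmt              -- hypothetical columns
        FROM   dbo.tbldChrg;                -- could be tens of thousands of rows

        -- Joins against @Chrg are costed as if it held 1 row. On SQL 2008, adding
        -- OPTION (RECOMPILE) to the statement lets the optimizer use the table
        -- variable's actual row count at compile time.
        SELECT c.IDChrg, c.ChrgAmt
        FROM   @Chrg AS c
        OPTION (RECOMPILE);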

  • I'm wondering if properly indexed temp tables in place of the table variables might be a solution without converting the cursors. I'd move the creation of the temp tables and indexes to the top of the stored procedure. Then you would just need to change the INSERT and SELECT statements to use the temp tables instead of the table variables.

    Just a suggestion.
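
    A sketch of that substitution, with all object and column names below being placeholders:

        CREATE TABLE #Chrg
        (
            IDChrg  INT   NOT NULL PRIMARY KEY CLUSTERED,
            ChrgAmt MONEY NOT NULL
        );
        CREATE NONCLUSTERED INDEX IX_Chrg_Amt ON #Chrg (ChrgAmt);

        INSERT INTO #Chrg (IDChrg, ChrgAmt)
        SELECT IDChrg, ChrgAmt
        FROM   dbo.tbldChrg
        WHERE  IDContract = 12345;          -- hypothetical filter

        -- Unlike a table variable, #Chrg carries statistics, so later joins and
        -- aggregations are costed against real row counts.

        DROP TABLE #Chrg;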
