Tips for reducing reads and CPU in stored procedures

  • I've gotten a lot of the performance issues resolved, but there are still a few that I'm having problems finding. Based on the avg disk queue length pegging when certain stored procs are run, I think it's a disk bottleneck.

    A lot of the stored procedures have huge read values (in the millions) and sometimes high CPU (10k+ ms), but a relatively low duration (100 - 15,000 ms) depending on the amount of data being returned.

    I've added some indexes, and when the system isn't swamped the stored procedures run fine and fast regardless of the size of the record set returned (some are 120 - 140k rows; don't ask, it's for auditing purposes).

    What in the SP would I look for to reduce the number of reads and CPU? The durations are fine, and the record sets in some cases are small, yet the CPU and reads are oddly high (considering the indexes added).

    I'm thinking maybe the biggest culprit is the way I join my tables, using the old comma syntax:

    select column1, column2
    from Table1 tbl1, Table2 tbl2
    where tbl1.id = tbl2.id

    instead of the inner/outer joins.

    Does that have a huge impact, or is either way fine?

    Any input would be appreciated.

    Frank

  • The two are equivalent, but you should convert to the inner join format. The old-style joins are deprecated in SQL 2005 and will not work in future editions.
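    For illustration, the comma join from the original post rewritten as an explicit INNER JOIN (Table1/Table2/column1/column2 are the poster's placeholder names):

    ```sql
    -- Same logical query as the old-style comma join, expressed as an
    -- explicit INNER JOIN. For equivalent inner joins the optimizer
    -- generally produces the same plan; the win is clarity, and correctness
    -- once outer joins are involved.
    SELECT tbl1.column1,
           tbl2.column2
    FROM   Table1 tbl1
           INNER JOIN Table2 tbl2
               ON tbl1.id = tbl2.id;
    ```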

    Regarding the high reads, high CPU...

    How many rows are in the tables?

    Are there appropriate indexes for the joins and the filters?

    If you look at the execution plans, do you see any scans (table, clustered index, non-clustered index)?

    The disk queue length can be difficult to interpret, especially if you're using a SAN. This is partly due to the way SQL Server issues read requests, often issuing a number at once, and partly due to read-ahead reads.

    What do your sec/read and sec/write counters look like? The % disk idle time? Transfers/sec?
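    One quick way to see where the reads come from, per table, for a single proc call (the proc name and parameter below are hypothetical, for illustration only):

    ```sql
    -- SET STATISTICS IO reports logical/physical/read-ahead reads per table
    -- touched by each statement; SET STATISTICS TIME reports CPU and elapsed
    -- time per statement. Run the proc once with both on, then check the
    -- Messages tab to see which join or filter is generating the reads.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    EXEC dbo.usp_AuditReport @StartDate = '20071201';  -- hypothetical proc

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```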

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (12/13/2007)


    The two are equivalent, but you should convert to the inner join format. The old-style joins are deprecated in SQL 2005 and will not work in future editions.

    Regarding the high reads, high CPU...

    How many rows are in the tables?

    Are there appropriate indexes for the joins and the filters?

    If you look at the execution plans, do you see any scans (table, clustered index, non-clustered index)?

    The disk queue length can be difficult to interpret, especially if you're using a SAN. This is partly due to the way SQL Server issues read requests, often issuing a number at once, and partly due to read-ahead reads.

    What do your sec/read and sec/write counters look like? The % disk idle time? Transfers/sec?

    There are two major tables that have about 10 - 12 million rows each, and some of the record sets for the reports can run about 100k+ rows for the more detailed ones.

    I've added a few indexes, but I haven't wanted to add too many, based on an article I read saying that too many can impact the performance of inserts/updates, and there are a ton of those a day. I've gotten the reports to run reasonably well for the number of rows returned, but even the ones that return fast still have an oddly large number of reads, so that's what's making me wonder.

    The tables are all joined on the primary key of the table being joined (e.g., MediaTypeID, which is an identity column).

    I've looked at the execution plans, but I have to confess that I don't really understand them and haven't really found (or had time to really search for, too many fires) much information. I have seen the plans for the stored procs, with things like "cost: 80%" and scans of non-clustered indexes, but I wasn't sure how to interpret them.

    As for the other PerfMon counters, I only have avg disk queue length at the moment, but I'll add those as well and see what they show.

    Frank

  • You need to look for things like scans, both table and index scans. These are in the main bad news. Hopefully you are also not using CURSORS!!

    Best would be to post some SPs with their matching execution plans and the relevant table DDL, and then maybe we can suggest some simple solutions, as well as point out some lessons that will educate you further in the black art of "fine-tuning SQL" :D
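    In the meantime, since the thread mentions SQL 2005, a query against sys.dm_exec_query_stats (available on 2005 and later) can rank cached statements by logical reads, so you know which procs to look at first. A sketch:

    ```sql
    -- Top 10 cached statements by total logical reads. The SUBSTRING math
    -- extracts just the offending statement from the batch text (offsets
    -- are in bytes of nvarchar, hence the divide-by-2; -1 means "to the
    -- end of the batch"). total_worker_time is CPU in microseconds.
    SELECT TOP (10)
           qs.total_logical_reads,
           qs.execution_count,
           qs.total_worker_time AS total_cpu_microsec,
           SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
                     (CASE qs.statement_end_offset
                           WHEN -1 THEN DATALENGTH(st.text)
                           ELSE qs.statement_end_offset
                      END - qs.statement_start_offset) / 2 + 1) AS statement_text
    FROM   sys.dm_exec_query_stats qs
           CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
    ORDER BY qs.total_logical_reads DESC;
    ```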

  • Any code that needs the word DISTINCT in it should be highly suspect as the source of a lot of reads... hidden RBAR...
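    A sketch of the pattern (hypothetical Customers/Orders tables): DISTINCT often papers over a join that multiplies rows and then collapses them again, and all those intermediate rows cost reads. Instead of:

    ```sql
    -- Join multiplies each customer by its order count, then DISTINCT
    -- collapses the duplicates back down -- wasted reads and CPU.
    SELECT DISTINCT c.CustomerID, c.CustomerName
    FROM   Customers c
           INNER JOIN Orders o
               ON o.CustomerID = c.CustomerID;

    -- ...an EXISTS can stop probing Orders at the first match per customer:
    SELECT c.CustomerID, c.CustomerName
    FROM   Customers c
    WHERE  EXISTS (SELECT 1
                   FROM   Orders o
                   WHERE  o.CustomerID = c.CustomerID);
    ```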

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
