7934149330 reads Vs 20 Writes - 20 executions?

  • I see from the dashboard reports for the top 10 queries based on Avg CPU that my first query is doing a delete from a temp table by selecting data across a couple of different tables with a few joins. I see # Avg Logical Reads is 7934149330 and # Avg Logical Writes is 20 with 20 executions. Does that sound strange?

  • It only sounds like you've got some tuning to do on that query!!!

    With only 1 write per run, I wouldn't worry about it.

  • Actually, it is 20 writes and 7934149300 reads for one execution. In what scenario would it be normal to have 7934149300 reads and 20 writes?

  • There's nothing normal in SQL Server.

    It would be abnormal if the average were 1M reads and then, all of a sudden, it jumped 1000-fold to 1 billion reads.

    You certainly have a ton of reads and need to investigate the cause further...

  • Seems like it's always been the same.. I will investigate further..

  • sqldba_icon (1/18/2011)


    Seems like it's always been the same.. I will investigate further..

    With that many reads, it is possible that the query has an "accidental" Cross-Join in it in the form of a many-to-many join, simply due to either a lack of criteria or a lack of properly joined columns in the query. Look at the arrows in the execution plan. Do any of them have more rows listed than what is in the related table?

    Also, there's the possible problem of "Halloweening". Take a gander at the second link in my signature line below on how to post enough information for us to help on this problem and we'll take a look.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • sqldba_icon (1/18/2011)


    Actually, it is 20 writes and 7934149300 reads for one execution. In what scenario would it be normal to have 7934149300 reads and 20 writes?

    It all depends on the query and the dataset.

    My opinion, without knowing those things: if this is an update query, you should add more filters and a non-clustered index on the column being updated and the columns in the WHERE clause.

    If there are any table joins in the query, make sure the columns being joined have an index in both tables and, if possible, set up a primary-to-foreign key relationship on the join.

    It is knowing how to make these exact determinations that makes a DBA worthwhile.
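    A rough sketch of the indexing advice above, using made-up table and column names (the original poster's schema is unknown, so treat these purely as placeholders):

    ```sql
    -- Non-clustered index covering the WHERE-clause column,
    -- including the column being updated:
    CREATE NONCLUSTERED INDEX IX_Orders_Status
        ON dbo.Orders (Status)
        INCLUDE (LastModified);

    -- Index the join column on both sides of the join:
    CREATE NONCLUSTERED INDEX IX_OrderDetail_OrderID
        ON dbo.OrderDetail (OrderID);

    -- Where the data allows, formalize the primary-to-foreign key relationship:
    ALTER TABLE dbo.OrderDetail
        ADD CONSTRAINT FK_OrderDetail_Orders
        FOREIGN KEY (OrderID) REFERENCES dbo.Orders (OrderID);
    ```

    Whether any of these actually helps depends on the real query and data distribution, which is the point of the post: the plan has to be examined first.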
