Difference in Reads between Profiler and STATISTICS IO

  • Okay folks, I have an interesting issue that I hope someone can shed some light on. I have a fairly complex query that has several outer joins and some derived tables. To try to simplify the query I have broken the parts out into separate inserts into table variables (currently dealing with very few rows, 1 in most cases). When I run the table-variable version with STATISTICS IO ON and add up the Logical Reads I get 42, but Profiler reports 403 Reads for the same batch (rough sketch of the measurement at the end of this post).

    If I do it as one big long query I get 36 Logical Reads from STATISTICS IO and, if the query is in cache, Profiler returns 36 as well.

    I'm assuming it has something to do with Table Variables, but I can't find anything anywhere on it.

    I'm attaching the queries (will be encapsulated in an SP when done) and the table creates with data included. I'd just post the queries, but they are very long and my preference is not to have to scroll a couple of hundred lines.

    Any ideas are welcome.
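
    For anyone who doesn't want to open the attachment, here is roughly the shape of what I'm doing. The table and column names below are made-up placeholders, not the real ones, but the pattern is the same: insert the pieces into table variables, then join to them.

        -- Table-variable version (all object names are placeholders)
        SET STATISTICS IO ON;

        DECLARE @Step1 TABLE (OrderID int PRIMARY KEY, CustomerID int, OrderTotal money);

        INSERT INTO @Step1 (OrderID, CustomerID, OrderTotal)
        SELECT  o.OrderID, o.CustomerID, SUM(od.LineTotal)
        FROM    dbo.Orders AS o
        LEFT OUTER JOIN dbo.OrderDetails AS od
                ON od.OrderID = o.OrderID
        GROUP BY o.OrderID, o.CustomerID;

        SELECT  s.OrderID, s.OrderTotal, c.CustomerName
        FROM    @Step1 AS s
        LEFT OUTER JOIN dbo.Customers AS c
                ON c.CustomerID = s.CustomerID;

        SET STATISTICS IO OFF;

        -- I add up the "logical reads" values from the Messages tab for the whole batch (42)
        -- and compare that to the Reads column for the same batch in Profiler (403).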

  • I'm pretty sure this one goes way back and may not be related to the table variables. I've run into it before. Do you think this is the issue?

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • It definitely seems like this could be it. That raises the question: which one do you use for performance tuning? I mean, the difference between the two queries in Profiler is huge (400+), but in STATISTICS IO it is only 6. I can live with 6 for the easier readability, but 400 is crazy! FYI, the stats in sys.dm_exec_query_stats are different too! Although they do match the Profiler numbers for the StmtCompleted events (the query I'm using to check is at the end of this post). Of course, when you add up the numbers for the StmtCompleted events, they still don't match the BatchCompleted numbers.

    You know, things like this make me wish I hadn't learned about performance tuning. Five years ago I would have written the code, been happy when it returned the right results, and moved on. Now I think I have an idea about how to do it better and can't figure out which performance numbers to use!
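
    For what it's worth, this is the query I'm using to pull the numbers out of sys.dm_exec_query_stats, in case I'm reading it wrong. The LIKE filter is just a placeholder for however you find the statement:

        SELECT  SUBSTRING(st.text,
                          (qs.statement_start_offset / 2) + 1,
                          ((CASE qs.statement_end_offset
                                WHEN -1 THEN DATALENGTH(st.text)
                                ELSE qs.statement_end_offset
                            END - qs.statement_start_offset) / 2) + 1) AS statement_text,
                qs.execution_count,
                qs.last_logical_reads,
                qs.total_logical_reads
        FROM    sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE   st.text LIKE '%Step1%'   -- placeholder filter
        ORDER BY qs.last_execution_time DESC;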

  • Do what I do: use the same numbers all the time. When I'm troubleshooting from one point of view, say SSMS, I do all the measuring there. If you bounce between the two, you just drive yourself crazy with different times, different CPU, different I/O... It's not fun.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • There is a Connect issue on this. Please vote.

    https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=387322


    N 56°04'39.16"
    E 12°55'05.25"

  • Thanks Peso, but that Connect issue is a different problem. Mine is that Profiler and STATISTICS IO report a different number of Reads when looking at the same query. It looks like the link Grant provided in his first post is the issue.

    Grant,

    Yeah, I'd use the same one, but if query 1 really has 400+ fewer reads than query 2, as Profiler says, then I'd want to use query 1. If STATISTICS IO is more accurate, though, I'd want to use query 2 for readability, because by that measure they are close enough.
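
    In case it matters, when I want to look at the Profiler numbers after the fact I save the trace to a file and query it like this (the file path is made up):

        -- EventClass 12 = SQL:BatchCompleted, 41 = SQL:StmtCompleted
        SELECT  TextData,
                EventClass,
                Reads,
                CPU,
                Duration,
                StartTime
        FROM    sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
        WHERE   EventClass IN (12, 41)
        ORDER BY StartTime;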

  • When it's really down to the wire, I usually rely on the numbers from Profiler. I honestly don't know if that's right or not, but it's a leap of faith in either direction as far as I know.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I trust Profiler more too, even though it's an external tool. Or rather, that's exactly why I trust it.

    I have come across situations where Profiler reported about 100,000 reads while STATISTICS IO reported 200 reads for the actual tables, and 0 reads and 0 scans for the internal worktable. It doesn't make sense. Why create a worktable and then not use it?
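
    To be clear about what I mean, the STATISTICS IO output looked something like this (table name changed), while Profiler showed roughly 100,000 Reads for the same batch:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'SomeBigTable'. Scan count 1, logical reads 200, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.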


    N 56°04'39.16"
    E 12°55'05.25"
