Profiler and "reads" counter

  • I am doing some performance tuning on a SQL 2K installation. I have run a "worst performing queries" profiler trace on my database and found some queries that are taking in excess of 30 seconds to complete. The trace also shows the number of reads to be over 1 million.

    But when I run the same query against the same database in Query Analyser, the "server trace" shows only a couple of hundred reads and the query completes in less than 1 second.

    Any ideas why there is such a big discrepancy in the "reads" trace data?
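
    For reference, a quick cross-check in Query Analyzer, independent of its "server trace" window, is STATISTICS IO, which reports the logical reads per table for whatever you run (the SELECT below is only a stand-in for the real query):

        SET STATISTICS IO ON
        SET STATISTICS TIME ON

        -- stand-in for the slow query
        SELECT COUNT(*) FROM dbo.sysobjects

        SET STATISTICS TIME OFF
        SET STATISTICS IO OFF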

  • I don't have a definitive answer, but we regularly see similar behaviour.

    I think it is caused by SQL Server caching the plan for a query dependent on the 'SET' options in force. Query Analyzer will probably have different 'SET' options from your normal application, and therefore SQL Server is creating and caching different plans for what looks like the same query.
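
    You can check both halves of that yourself. DBCC USEROPTIONS shows the options in force on the current connection, and in SQL 2000 the setopts column of master..syscacheobjects should show the option bitmap each cached plan was compiled under (the LIKE filter below is only an illustration; change it to match some text from your query):

        -- SET options in force for this connection
        DBCC USEROPTIONS

        -- cached plans plus the SET option bitmap they were compiled under
        SELECT cacheobjtype, objtype, usecounts, setopts, sql
        FROM master..syscacheobjects
        WHERE sql LIKE '%your query text%'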

    In our particular case I suspect that when SQL Server first cached the query plan, the main application ran the query with a particular set of parameters that caused SQL Server to generate a non-optimal plan. That cached plan is then reused for subsequent executions with different parameters, even though they would benefit from a plan that uses the index.

    If I cut and paste the long-running query into QA, it generates a new plan that correctly uses the index, and the query runs in a fraction of the time.
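
    If the query lives in a stored procedure, another option (not what we do, just a narrower workaround) is to stop it relying on the cached plan at all. The procedure and table names below are made up for the example:

        -- hypothetical procedure; WITH RECOMPILE compiles a fresh plan on every call
        CREATE PROCEDURE dbo.usp_GetOrders
            @CustomerID int
        WITH RECOMPILE
        AS
            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID
        GO

        -- or leave the procedure as it is and just mark it (or a table it uses)
        -- so that its plan is thrown away and rebuilt on the next execution
        EXEC sp_recompile 'dbo.usp_GetOrders'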

    You can run DBCC FREEPROCCACHE to force SQL Server to drop all query plans and generate new ones. But this is a bit drastic as it drops all plans, even those that work fine.
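
    For reference, in SQL 2000 it is a single statement with no arguments, and you can count the cached plans before and after to see the effect:

        -- how many plans are currently cached on the instance
        SELECT COUNT(*) AS cached_plans FROM master..syscacheobjects

        -- drop every cached plan; each query recompiles on its next execution
        DBCC FREEPROCCACHE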

    A more specific solution (and the one we use) is to look at the tables those queries use and run UPDATE STATISTICS on them. This causes SQL Server to throw away any cached plans that reference those tables and create new ones the next time the queries run.
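
    As a rough sketch (the table names here are placeholders for whichever tables your slow queries actually hit):

        -- refresh the statistics on the tables the slow queries use;
        -- plans referencing them are invalidated and recompiled on next use
        UPDATE STATISTICS dbo.Orders WITH FULLSCAN
        UPDATE STATISTICS dbo.OrderDetails WITH FULLSCAN

        -- or, less selectively, refresh statistics for every table in the current database
        EXEC sp_updatestats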
