March 4, 2014 at 12:49 pm
I've noticed that in SSMS, running a complex query or a query against large tables often returns data in much less time the second time you run it. I'm guessing this is because the second run uses data pages that were loaded into memory during the first run rather than reading them from disk, which is slower.
For the purposes of optimizing queries and indexes, is there a way to force the second run of a query to behave like the first run, so that I know changes in speed are due to adjustments I've made and not because data pages were preloaded into memory by the earlier run? Or should I just use the "Cost" figure in the execution plans and pay less attention to the actual times?
March 4, 2014 at 1:17 pm
Not recommended for use in production, but for dev/test you can run the commands below to clear the cache and simulate having to read everything from disk.
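A minimal sketch, assuming the standard commands for this (CHECKPOINT writes dirty pages to disk so that DBCC DROPCLEANBUFFERS can then evict everything from the buffer pool):

-- Write all dirty pages to disk so every page in the buffer pool is "clean"
CHECKPOINT;
-- Evict all clean pages from the buffer pool (dev/test only)
DBCC DROPCLEANBUFFERS;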
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
March 4, 2014 at 1:31 pm
Cost can help you determine whether the query has been tuned.
In addition, you could use SET STATISTICS TIME and SET STATISTICS IO.
Examining the execution plan can really help you determine whether the query is executing better than it did previously.
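A minimal sketch of wrapping a test query with these settings (the table name is a placeholder):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- the query being tuned
SELECT COUNT(*) FROM dbo.SomeLargeTable;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- The Messages tab then shows logical/physical reads per table
-- and CPU/elapsed time for each statement.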
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 4, 2014 at 1:33 pm
You might also want to account for the execution plan being retrieved from cache too. You can run DBCC FREEPROCCACHE to clear the plan cache.
It is good to compare apples to apples, but I tend to test with both a cold cache and a warm cache (since that is how it will be in production, hopefully). I like to compare the number of reads as well as the time each run takes, so I use SET STATISTICS IO ON and SET STATISTICS TIME ON.
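A minimal sketch of this cold-cache/warm-cache comparison (dev/test only; the procedure name is a placeholder):

-- Cold cache: flush dirty pages, then drop data pages and cached plans
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.MyTestProc;  -- first run: cold cache (physical reads, fresh compile)
EXEC dbo.MyTestProc;  -- second run: warm cache (logical reads, cached plan)

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;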
March 4, 2014 at 1:35 pm
Keith Tate (3/4/2014)
You might also want to account for the execution plan being retrieved from cache too. You can run DBCC FREEPROCCACHE to clear the plan cache. It is good to compare apples to apples, but I tend to test with both a cold cache and a warm cache (since that is how it will be in production, hopefully). I like to compare the number of reads as well as the time each run takes, so I use SET STATISTICS IO ON and SET STATISTICS TIME ON.
This is another one that I would be careful about running in prod. I will use this occasionally in a dev environment.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
March 5, 2014 at 3:19 am
My preference is to run the queries/procs I'm testing multiple times and ignore the durations/IOs of the first run, rather than clearing the plan and data caches after every single execution.
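A minimal sketch of that approach in SSMS (GO n repeats the preceding batch n times; the procedure name is a placeholder):

SET STATISTICS TIME ON;
GO
EXEC dbo.MyTestProc;
GO 5
-- Discard the first set of timings (cold cache, fresh compile)
-- and compare the remaining runs against each other.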
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 12, 2014 at 8:02 am
GilaMonster (3/5/2014)
My preference is to run the queries/procs I'm testing multiple times and ignore the durations/IOs of the first run, rather than clearing the plan and data caches after every single execution.
+1 🙂
March 12, 2014 at 9:01 pm
SQLRNNR (3/4/2014)
Cost can help you determine whether the query has been tuned. In addition, you could use SET STATISTICS TIME and SET STATISTICS IO.
Examining the execution plan can really help you determine whether the query is executing better than it did previously.
Oh, be careful now. "Cost" is just an estimate and is frequently way off compared to what actually happens. Using "Statistics TIME, IO" will also be way off if there are any non-iTVF functions in the mix.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 13, 2014 at 12:39 am
Jeff Moden (3/12/2014)
Using "Statistics TIME, IO" will also be way off if there are any non iTVF functions in the mix.
IO will be. Time, if you look at the final one for the entire batch, is correct.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 13, 2014 at 8:25 am
To help with this, you can also use:
DBCC FREEPROCCACHE(plan_handle) - this only removes the plan for the specific thing you're running and does not flush the entire cache.
Or, if it's a procedure being executed, you could add "WITH RECOMPILE" so an entirely new execution plan is generated each time. I would only use this for testing, and would remove it before promoting the procedure to production.
Or simply execute sp_recompile 'schema.ObjectName', which will accomplish the same thing.
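A minimal sketch of finding and evicting a single plan (the search string is a placeholder):

-- Find the plan_handle for the statement you're tuning
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%MyTestProc%';

-- Copy the plan_handle from the output above and paste it in here
DBCC FREEPROCCACHE (0x0500...);  -- hypothetical handle, substitute your own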
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience.
March 13, 2014 at 8:31 am
MyDoggieJessie (3/13/2014)
Or, if it's a procedure being executed, you could add "WITH RECOMPILE" so an entirely new execution plan is generated each time. I would only use this for testing, and would remove it before promoting the procedure to production. Or simply execute sp_recompile 'schema.ObjectName', which will accomplish the same thing.
These don't quite do the same thing.
EXECUTE MyProc WITH RECOMPILE generates a new execution plan for that execution; the plan is discarded afterwards. If there was already a plan in cache, it's left there.
exec sp_recompile 'MyProc' removes the plan for MyProc from cache, so the next execution has to generate a new plan, cache it, and then execute.
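A minimal sketch contrasting the two (the procedure name is a placeholder):

-- Compiles a throwaway plan for this execution only;
-- any plan already in cache is left untouched
EXEC dbo.MyProc WITH RECOMPILE;

-- Evicts the cached plan; the next execution compiles
-- a fresh plan, caches it, and then runs
EXEC sp_recompile 'dbo.MyProc';
EXEC dbo.MyProc;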
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 13, 2014 at 8:37 am
Thanks for clarifying that Gail, I see the difference!
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience.
March 13, 2014 at 8:53 am
GilaMonster (3/13/2014)
Jeff Moden (3/12/2014)
Using "Statistics TIME, IO" will also be way off if there are any non iTVF functions in the mix.IO will be. Time, if you look at the final one for the entire batch, is correct.
I guess it depends on your definition of "correct". Enabling statistics significantly skews how long non-iTVF functions take to run. You can do some simple tests with the code in the following article. When a measuring tool changes the outcome, that's "incorrect" to me.
http://www.sqlservercentral.com/articles/T-SQL/91724/
--Jeff Moden
Change is inevitable... Change for the better is not.
March 14, 2014 at 2:34 pm
Jeff Moden (3/13/2014)
GilaMonster (3/13/2014)
Jeff Moden (3/12/2014)
Using "Statistics TIME, IO" will also be way off if there are any non iTVF functions in the mix.IO will be. Time, if you look at the final one for the entire batch, is correct.
I guess it depends on your definition of "correct".
Correct in the sense that it isn't way too low, like stats IO is (where the reads done by the UDF aren't displayed). But yeah, the observer effect can hit pretty badly with them, probably because of the massive number of batches that get run and must be tracked.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability