September 22, 2007 at 11:06 am
Hi,
I am working on a Master's in Computer Science and would like to investigate strategies for query optimization that could be built into an agent. I've worked w/ SQL Server since 6.5 and have done some work w/ Oracle and MySQL. I've used the DTA in SQL Server 2005 and have been impressed with it.
Can anyone recommend articles on experimental strategies for evaluating query optimization suggestions? One issue I have is comparing performance statistically. Despite working w/ SQL Server for some time now, I am showing my ignorance about caching. If I run a query many times as a test, the performance improves. However, if I call the same query as part of a DTS package or from a GUI, the duration, CPU, reads and writes vary. What kind of testbed could I build to run query (and table index) variations through?
Thanks,
Bill
September 24, 2007 at 9:22 am
You have to understand the two elements that relate to what you're asking - well, actually three, I suppose.
The first is whether the data is in cache or has to be read from disk. I usually make sure data is in cache when tuning frequent queries, as that's how it will be in production. The number of IOs (page requests) will be the same regardless - it's just a case of logical vs. physical - and any physical reads will slow things down, so you should make your tests all logical or all physical.
The other parts relate to the plan: the first use puts the plan into cache (point 2), and this takes longer than re-using a cached plan (point 3).
So - is your plan re-used? With data in cache, the plan in cache, and the plan re-used, this should give optimum performance. Tuning a query is the same regardless of source. However, simple queries may get flushed out, so you need to check the relevant system views to make sure your query plan is in cache and being re-used.
SET options (often driven by the connection string) can negate the re-use of a plan and generate a further plan - and then there's recompilation. Basically, the same query should perform the same regardless of how it is presented.
If you're calling from clients then that may make a difference - you should always compare like for like.
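To see whether a plan is actually being re-used, something like this against the plan cache DMVs works on SQL Server 2005 - note the LIKE filter text is just a placeholder, substitute a fragment of the query you're testing:

```sql
-- Sketch: check whether a query's plan is cached and how often it's been re-used.
SELECT st.text,
       cp.usecounts,      -- times this cached plan has been re-used
       cp.cacheobjtype,   -- e.g. 'Compiled Plan'
       cp.objtype         -- e.g. 'Adhoc', 'Prepared', 'Proc'
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%YourQueryText%'   -- placeholder: substitute your query's text
ORDER BY cp.usecounts DESC;
```

If usecounts climbs as you repeat the call, the plan is being re-used; if you see multiple rows for what looks like the same statement, differing SET options or literals are probably generating separate plans.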
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
September 24, 2007 at 9:27 am
There are two different strategies... one is the system... cache, I/O, memory, load balancing, splitting TempDB for each CPU, etc, etc, ad nauseam.
The other is the techniques used in the actual queries themselves... and, they make the biggest difference in performance. Kinda like a car... you can change the engine from a 4 cylinder to a 12 cylinder fire breathing monster with the correct transmission, drive shaft, and differential to handle the increase in horsepower... but if you haven't fixed the flat tire, you're going nowhere fast. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
September 24, 2007 at 2:47 pm
I think there are good hints above, but one of the main things you will want to start with is evaluating different optimizations (indexing, rewriting, etc.) against a clean cache. So before each test, run DBCC FREEPROCCACHE to clear the plan cache (and DBCC DROPCLEANBUFFERS if you also want the data cache cold). Then test a query, re-run the DBCCs, and so on.
This gives you a clean test of how the system performs. That isn't always appropriate, as caching from other queries in the buffer can help you, and so it isn't always worth optimizing some queries, since you can't necessarily speed them up substantially.
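A minimal cold-cache test run might look like this - it assumes sysadmin rights on a non-production box, and dbo.YourTable is a placeholder for the query under test:

```sql
-- Sketch of one cold-cache test iteration.
-- CHECKPOINT flushes dirty pages so DROPCLEANBUFFERS can empty the buffer pool.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;   -- empty the data cache (all reads become physical)
DBCC FREEPROCCACHE;      -- empty the plan cache (forces a fresh compile)

SET STATISTICS IO ON;    -- report logical/physical reads per statement
SET STATISTICS TIME ON;  -- report compile and execution CPU + elapsed time

-- The query under test (placeholder):
SELECT COUNT(*) FROM dbo.YourTable;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

Run the whole batch several times and compare the reads/CPU/duration figures across iterations; because the cache is cleared each time, the numbers should be far more stable than warm-cache runs.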