March 8, 2011 at 3:29 pm
So I see a lot of advice on various sites to run a CHECKPOINT followed by a DBCC DROPCLEANBUFFERS when testing execution times. I have always taken this with a grain of salt, because it really depends on what you are trying to test. The only real benefit I see is if you are trying to check I/O subsystem performance. Clearing the buffer cache just guarantees that SQL Server has to pull the requested data from disk before it can work with it. So, all things considered, the real difference in execution time for a given query before (presuming all the data needed was already in RAM) and after CHECKPOINT/DBCC DROPCLEANBUFFERS is the time it took to read the data back into memory. Basically, an I/O subsystem test.
Clearing the buffer cache should have no real impact on the execution plan at all (the plan cache is a separate structure). Performance is really about the execution plan: how SQL Server deals with the data once it is in cache is all it has control over. The speed at which data is read into cache depends on the hardware environment.
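For reference, the pattern being discussed looks like this (a sketch only; DBCC DROPCLEANBUFFERS requires sysadmin rights and should never be run on a production server, and the table name is a placeholder):

```sql
-- Flush dirty pages to disk first; DBCC DROPCLEANBUFFERS only evicts
-- *clean* pages, so without a CHECKPOINT some data can stay in cache.
CHECKPOINT;
GO
-- Empty the buffer cache so the next query must read cold from disk.
DBCC DROPCLEANBUFFERS;
GO
-- Now time the query under test; its reads will come from the I/O subsystem.
SELECT COUNT(*) FROM dbo.SomeLargeTable;  -- placeholder query
GO
```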
Thoughts? Am I missing something?
March 8, 2011 at 4:50 pm
You're not really missing anything. Dropping the buffers basically gives you a worst-case scenario as to how your query will perform, and will test your I/O subsystem (assuming no I/O contention).
If you're comparing the performance of two different plans and you have enough data that it may not all be (or fit) in cache, it makes sense to clear the cache before executing to guarantee that each of your tests starts in the same state - with the same amount of cached data. The only way to guarantee this is to start cold.
If your data will easily fit in cache, and you're fairly confident no I/O will be necessary, such as a test system where you are the only user, then you can compare the two execution times with confidence and not worry about I/O.
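A test harness along these lines ensures each candidate starts from the same cold state (a sketch; the table, index, and predicate are hypothetical, and SET STATISTICS TIME/IO report the elapsed time and physical reads for each run):

```sql
SET STATISTICS TIME ON;  -- report CPU and elapsed time per statement
SET STATISTICS IO ON;    -- report logical/physical reads per statement
GO

-- Candidate 1: start cold
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '20110101';
GO

-- Candidate 2: start cold from the identical state
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
SELECT COUNT(*) FROM dbo.Orders WHERE OrderDate >= '20110101' AND OrderDate < '20120101';
GO
```

With both runs starting from an empty buffer cache, the timing difference reflects the plans plus I/O rather than whichever query happened to run second against warm data.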
March 8, 2011 at 5:29 pm
Jim has explained this well. It's mostly so you can get a valid comparison of two different sets of code that do the same thing. You can't count on these timings for your application, but they should give you an idea of the outer limits of what it would take to run the code.
March 8, 2011 at 9:13 pm
I agree. Jim explained it quite well. I'll also add that if you expect a high volume of hits against something that is likely to continue to exist in cache, it's almost a waste of time to clear cache during testing unless you just really want to know what the worst case execution time will be.
--Jeff Moden
Change is inevitable... Change for the better is not.