December 6, 2011 at 1:44 pm
I have a stored procedure that, when viewed in Profiler/trace, baselines at about 20k reads. I have found that it will start performing 1.6 million reads at some point during the day, and every execution for the next hour or two will perform 1.6 million reads.
I did a quick test: executed prior to restarting the SQL instance - 1.6 million reads
Executed after restarting the SQL instance - back to 20k reads
With my limited experience, this seems to eliminate indexes, lack of indexes, a poorly written query, network traffic, and hard drive issues. It seems to point the blame at the state of the SQL Server instance. I should point out that if I don't do anything, it will eventually go back to reporting 20k reads.
Other stored procedures don't seem to be affected during this "phase".
December 6, 2011 at 2:38 pm
Well, with the info provided there really isn't much we can do except offer a few additional guesses as to the cause:
Bad execution plan for the given parameters to the procedure
Out of date statistics (a quick check is sketched below)
If you are having a performance problem, please take the time to read the 2nd article I reference below in my signature block regarding getting help with performance problems. Follow the instructions in that article regarding what you need to post and how to do it. With that information I am sure you will get better answers than mine.
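On the statistics point, a quick check along these lines (the table name dbo.Orders is hypothetical, since the actual objects haven't been posted) will show when the statistics on a table were last updated and let you refresh them:

-- When were the statistics on this table last updated?
SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.Orders');

-- Refresh them with a full scan if they look stale
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;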
December 6, 2011 at 9:35 pm
DoubleEx (12/6/2011)
Executed after restarting the SQL instance - back to 20k reads
Since that's the case, it sounds like a "bad" cached execution plan and the first thing I'd recommend is to try adding the RECOMPILE option to the stored proc.
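For example, a minimal sketch (the procedure, table, and column names here are hypothetical, since the actual proc hasn't been posted):

-- WITH RECOMPILE builds a fresh plan on every execution instead of reusing a cached one
ALTER PROCEDURE dbo.usp_GetOrders
    @CustomerID INT
WITH RECOMPILE
AS
BEGIN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;

The trade-off is that you pay the compile cost on every call, which is usually acceptable for a proc that only runs into trouble with certain parameter values.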
--Jeff Moden
Change is inevitable... Change for the better is not.
December 7, 2011 at 12:09 am
Sounds like a "parameter sniffing" issue; use the search engine of your choice for that phrase.
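A rough sketch of one common workaround (object names hypothetical): copy the parameter into a local variable so the optimizer can't build the plan around a single sniffed value:

ALTER PROCEDURE dbo.usp_GetOrders
    @CustomerID INT
AS
BEGIN
    -- The local variable hides the parameter value from the optimizer,
    -- so the plan is built for an "average" row estimate instead
    DECLARE @LocalCustomerID INT;
    SET @LocalCustomerID = @CustomerID;

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID;
END;

Statement-level OPTION (RECOMPILE), or OPTION (OPTIMIZE FOR UNKNOWN) on SQL Server 2008 and later, are alternatives; which one fits depends on the data distribution.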
December 7, 2011 at 5:09 am
DoubleEx (12/6/2011)
I have a stored procedure that, when viewed in Profiler/trace, baselines at about 20k reads. I have found that it will start performing 1.6 million reads at some point during the day, and every execution for the next hour or two will perform 1.6 million reads. I did a quick test: executed prior to restarting the SQL instance - 1.6 million reads
Executed after restarting the SQL instance - back to 20k reads
With my limited experience, this seems to eliminate indexes, lack of indexes, a poorly written query, network traffic, and hard drive issues. It seems to point the blame at the state of the SQL Server instance. I should point out that if I don't do anything, it will eventually go back to reporting 20k reads.
Other stored procedures don't seem to be affected during this "phase".
Nope. It sounds like an issue with statistics, which is directly related to indexes, lack of indexes, and/or poorly written queries. It could be bad parameter sniffing. It could just be aging statistics, possibly combined with bad index choices or bad code. Get the execution plan when it's running well and when it's running badly. That will let you know what's happening and help you determine what you need to do to fix it.
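If catching the plan live in SSMS is awkward, a sketch like this (procedure name hypothetical, and it assumes SQL Server 2008 or later for sys.dm_exec_procedure_stats) pulls the currently cached plan and its cumulative reads so you can compare the good and bad periods:

SELECT ps.execution_count,
       ps.total_logical_reads,
       qp.query_plan  -- click through in SSMS to open the XML plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE ps.database_id = DB_ID()
  AND ps.object_id = OBJECT_ID('dbo.usp_GetOrders');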
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
December 8, 2011 at 8:35 am
I will go with parameter sniffing too. BTW, next time you see this you do NOT have to bounce the box to clear it!! You can run this:
sp_recompile [ @objname = ] 'object'
to recompile the offending sproc. You can also run:
dbcc freeproccache
to clear ALL cached sproc plans. Note that perf could suffer for a bit while everything is recompiled if you use the sledgehammer approach!
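For example (procedure name hypothetical):

-- Targeted: marks just this proc for recompilation on its next execution
EXEC sp_recompile N'dbo.usp_GetOrders';

-- Sledgehammer: flushes every cached plan on the instance
DBCC FREEPROCCACHE;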
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service