March 6, 2014 at 8:41 am
I seem to be going round the houses on this one: reading ancient threads, blogs that offer conflicting viewpoints as to the usefulness of even looking at LIO (logical I/O), forum posts with members declaring "I have high logical reads, how do I solve this?", BOL, etc.
Well, the thing is, I know what they are, and I understand how to "resolve" them.
The thing that continues to puzzle me is this:
What constitutes "high" logical reads? Looked at as an abstract number, I can see my prod database has x amount for a given time... but is that good or bad?
When someone says they have a high number of logical reads for a sproc, what gives them to understand that the number is indeed high?
I can use a baseline and see it has increased by x% over a given period... but without a measure of what would constitute an acceptable rise, that is again meaningless (at least to me).
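To be concrete, this is the sort of raw number I'm staring at for a single call (the proc name and parameter below are just stand-ins for one of ours):

-- Per-table logical reads are printed in the Messages tab
SET STATISTICS IO ON;
EXEC dbo.usp_SomeProc @CustomerID = 42;  -- hypothetical proc and parameter
SET STATISTICS IO OFF;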
Words of wisdom on a postcard (or forum post) would be great!
March 6, 2014 at 9:07 am
"It depends"
This would be specific to your environment, your databases, size, volume of data being moved around, etc.
At our shop, I typically look for anything over 250,000 reads. I look closely at the procedure to determine what may be causing the reads and whether it can be optimized (typically it's a bad JOIN, implicit conversions, key lookups, etc.), but sometimes it's just a giant query needed by an application that returns 800+ columns, and I have no way to really optimize it. Then again, some queries with high reads run rather quickly (unusual, but it does happen), and vice versa. The ones that also have a long duration are typically the ones I begin looking deeper at.
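For what it's worth, this is roughly how I pull the candidates out of the plan cache (just a sketch; 250,000 is our shop's threshold, not a magic number, and the stats only cover plans still in cache since the last restart):

-- Cached statements averaging more than 250,000 logical reads per execution
SELECT TOP (20)
       qs.total_logical_reads,
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       qs.total_elapsed_time / qs.execution_count  AS avg_elapsed_microsec,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.total_logical_reads / qs.execution_count > 250000
ORDER BY qs.total_logical_reads DESC;

Sorting by total rather than average reads is deliberate; a query with modest per-execution reads that runs constantly can hurt more than a monster that runs once a day.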
We have some queries that are as high as 550,000,000 reads :crazy:
I recommend you keep baselining over time, to really determine what's "bad" for your server, then create some threshold that suits your needs.
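For the baselining piece, even something as crude as snapshotting the Buffer Manager counter into a table on a schedule works (a sketch; the table name is made up, and the counter value is cumulative since instance startup, so you diff consecutive snapshots to get a rate):

-- One-time setup: somewhere to keep the snapshots (name is made up)
IF OBJECT_ID('dbo.LogicalReadBaseline') IS NULL
    CREATE TABLE dbo.LogicalReadBaseline
    (
        capture_time datetime2 NOT NULL,
        page_lookups bigint    NOT NULL  -- cumulative since instance startup
    );

-- Run on a schedule; the delta between consecutive rows is the logical reads for that interval
INSERT INTO dbo.LogicalReadBaseline (capture_time, page_lookups)
SELECT SYSDATETIME(), cntr_value
FROM sys.dm_os_performance_counters
WHERE RTRIM(counter_name) = 'Page lookups/sec'
  AND object_name LIKE '%Buffer Manager%';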
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience
March 7, 2014 at 2:31 am
MyDoggieJessie (3/6/2014)
"It depends"This would be specific to your environment, your databases, size, volume of data being moved around, etc.
At our shop, I typically look for anything over 250,000 reads. I look closely at the procedure to determine what may be causing the reads, determine if it can be optimized (typically it's a bad JOIN, implicit conversions, keylookups, etc.) but sometimes it's just a giant query needed by an application that returns 800+ columns, and I have no way to really optimize it. Then again some queries have high reads which run rather quickly (unusual, but does happen), and vice-versa. The ones that also have a long duration typically are the ones I begin looking deeper at.
We have some queries that are as high as 550,000,000 reads :crazy:
I recommend you keep baselining over time, to really determine what's "bad" for your server, then create some threshold that suits your needs.
Thank you 🙂 So if it depends on those various factors, I guess baselining over time is the best way to do it, like you say. You mention a "ballpark" IO figure per query...
Do you have a ballpark you'd start at for total LIO per server, or a level of percentage increase where you'd start to think "ouch"?
many thanks
March 7, 2014 at 2:38 am
simon_s (3/7/2014)
Do you have a ballpark you'd start at for total LIO per server, or a level of percentage increase where you'd start to think "ouch"?
Depends on the server, the application it supports, the data growth, etc.
There's no value which makes me say something's a problem. It's all in relation to normal for that server and what all the other queries are doing.
If, for example, there's one query doing 1,000,000 reads per execution, it executes frequently, and the next highest query is at 10,000, then I'm going to look at the first one and see what it's doing and whether it can be tuned. If that first query ran once a month, or if more than half of the queries on the server did around the same number of reads, I might not even notice it.
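If you want to eyeball that relationship, something along these lines (just a sketch) shows each cached statement's share of the total logical reads on the instance:

-- Each statement's reads as a percentage of all cached statements' reads
-- (join to sys.dm_exec_sql_text(qs.sql_handle) to see the statement itself)
SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       CAST(100.0 * qs.total_logical_reads
            / NULLIF(SUM(qs.total_logical_reads) OVER (), 0) AS decimal(5, 2)) AS pct_of_total
FROM sys.dm_exec_query_stats AS qs
ORDER BY qs.total_logical_reads DESC;

One query sitting at 40% of the instance's reads is worth a look; ten queries at 4% each may just be what normal looks like for that server.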
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 7, 2014 at 7:57 am
Thank you both, much appreciated.
March 7, 2014 at 10:15 am
No problemo! Hope it helps get you going in the right direction 🙂