March 26, 2012 at 4:33 pm
Confusing Queries (3/26/2012)
So, under what circumstances would people consider flushing buffers?
In production just about never, and if it is done it's as an interim workaround while resolving whatever the root problem is.
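For the record, this is roughly what "flushing buffers" means in T-SQL; a minimal sketch for a development box, not something to run casually on production:

CHECKPOINT;              -- flush dirty pages to disk first, so the next command isn't discarding changes
DBCC DROPCLEANBUFFERS;   -- empty the data cache (buffer pool)
DBCC FREEPROCCACHE;      -- empty the plan cache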
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 26, 2012 at 5:09 pm
Confusing Queries (3/26/2012)
GilaMonster (3/26/2012)
Confusing Queries (3/26/2012)
Is it the case that we should consider flushing buffers?
No, not unless you want SQL to suddenly have to work much harder than normal.
So, under what circumstances would people consider flushing buffers? Or is it just meant to be used for performance testing on a development box?
On older versions of SQL Server (i.e. without the optimize for ad hoc workloads option) there can be significant plan cache bloat from ORM usage, dynamic SQL or other single-execution stuff, in which case flushing the plan cache can help with memory issues. I have a client with OLD ADO Classic crappy code on SQL 2005 RTM that gets MASSIVE plan cache bloat. I run a job every 30 minutes around the clock to flush the plan cache. That is an exceptional edge case though brought on by numerous factors including not even having the SP fix for SQL 2005 that dramatically reduced the amount of memory the plan cache can grab.
But as Gail said, it is certainly not NORMALLY done on production boxes.
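For anyone who wants to check whether single-use plans are eating their plan cache, a rough sketch (the queries are illustrative, not a tuning prescription):

-- How much of the plan cache is taken up by single-use plans?
SELECT objtype, COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
WHERE usecounts = 1
GROUP BY objtype;

-- On SQL 2008 and later, the server option mentioned above stubs out single-use ad hoc plans:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;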
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
March 27, 2012 at 1:51 am
TheSQLGuru (3/26/2012)
I have a client with OLD ADO Classic crappy code on SQL 2005 RTM that gets MASSIVE plan cache bloat. I run a job every 30 minutes around the clock to flush the plan cache. That is an exceptional edge case though brought on by numerous factors including not even having the SP fix for SQL 2005 that dramatically reduced the amount of memory the plan cache can grab.
And, I assume also brought on because they are not willing to fix/rewrite the crappy code. Even old ADO could be parameterised (though very few people knew how)
I also ran into that bug HARD, SQL app that used a lot of (parameterised) dynamic SQL. The parameterisation didn't help much because the column lists varied from execution to execution. Killed us when we upgraded to 2005 SP1. Was the fastest I ever saw that company apply a service pack.
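On the server side, the usual pattern for parameterised dynamic SQL is sp_executesql, which lets one cached plan be reused across parameter values; a minimal sketch with made-up object names:

DECLARE @sql nvarchar(max);
SET @sql = N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustID;';
-- One plan is cached for this statement text and reused for every @CustID value.
EXEC sys.sp_executesql @sql, N'@CustID int', @CustID = 42;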
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 27, 2012 at 6:46 am
GilaMonster (3/27/2012)
TheSQLGuru (3/26/2012)
I have a client with OLD ADO Classic crappy code on SQL 2005 RTM that gets MASSIVE plan cache bloat. I run a job every 30 minutes around the clock to flush the plan cache. That is an exceptional edge case though brought on by numerous factors including not even having the SP fix for SQL 2005 that dramatically reduced the amount of memory the plan cache can grab.
And, I assume also brought on because they are not willing to fix/rewrite the crappy code. Even old ADO could be parameterised (though very few people knew how)
I also ran into that bug HARD, SQL app that used a lot of (parameterised) dynamic SQL. The parameterisation didn't help much because the column lists varied from execution to execution. Killed us when we upgraded to 2005 SP1. Was the fastest I ever saw that company apply a service pack.
Yeah - this code was a massive legacy pile of crap (but VERY functional, believe it or not) that did payroll processing. Way too risky to really mess with it. We refactored some of the most egregious performers and I did some Guru magic for some of the remaining stuff - mostly with proper indexing. Also, this is one of only two clients I have ever come across where FORCED PARAMETERIZATION was indeed a magic bullet - 30% throughput gain on average for their very intense payroll closing process.
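For anyone curious, forced parameterization is a database-level setting; a one-line sketch (the database name is made up):

-- Makes the optimizer parameterise ad hoc statements automatically for this database.
ALTER DATABASE PayrollDB SET PARAMETERIZATION FORCED;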
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
March 27, 2012 at 10:20 am
Where does the cache memory reside? I mean, is it in RAM or on the HD?
The question may sound weird as I am new to computer science.
March 27, 2012 at 10:25 am
RAM stands for Random Access Memory, so when we talk about something in memory, with no qualifiers or specifics, it almost always means RAM.
Edit: Clarified for Mr Pedantic below. 🙂 :hehe:
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 27, 2012 at 10:38 am
Ooh ooh. So what does ROM mean? 😀
March 27, 2012 at 10:43 am
Robert Davis (3/27/2012)
Ooh ooh. So what does ROM mean? 😀
Read Only Memory, which is certainly not what people are referring to when they use the general term Memory. 😉
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 27, 2012 at 10:49 am
Robert Davis (3/27/2012)
Ooh ooh. So what does ROM mean? 😀
Is this a real question? If so, Read Only Memory.
Do I get a prize?? :hehe:
March 27, 2012 at 10:57 am
No, wasn't a real question. Was just playing along. 🙂
March 27, 2012 at 11:40 am
It means when the code dealing with the amount of data having memory space required more than RAM we get physical reads...am I right?
March 27, 2012 at 11:42 am
Huh? Don't understand.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 27, 2012 at 11:45 am
Let's say the script has to deal with 7GB of data and the RAM is 4GB. The server has to read the pages involved for those 3GB of data from the disk.
March 27, 2012 at 11:47 am
The Dark Passenger (3/27/2012)
It means when the code dealing with the amount of data having memory space required more than RAM we get physical reads...am I right?
I think we are having a language barrier here.
I think you are asking this: "If the space of the requested data is greater than the amount of memory that we have, we get physical reads?"
The answer is that ALL data is read from disk into RAM first. If the page is already in RAM, then it does not have to be read again. All changes to data are written to RAM then to disk. So maybe your question is: "What happens when the requested data is too large to fit into memory?"
Am I on the right track?
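One easy way to watch this happen is STATISTICS IO, which reports physical reads (pages fetched from disk) separately from logical reads (pages served from memory); a quick sketch with a made-up table name:

SET STATISTICS IO ON;
SELECT * FROM dbo.SomeLargeTable;  -- first run on a cold cache: physical reads plus logical reads
SELECT * FROM dbo.SomeLargeTable;  -- second run: mostly logical reads, since the pages are now in RAM
SET STATISTICS IO OFF;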
Jared
CE - Microsoft
March 27, 2012 at 11:50 am
The Dark Passenger (3/27/2012)
Let's say the script has to deal with 7GB of data and the RAM is 4GB. The server has to read the pages involved for those 3GB of data from the disk.
It'll have to read any of the 7GB that isn't already in cache from disk.
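If you want to see how much of each database is currently sitting in the buffer pool, a rough sketch using the buffer descriptor DMV:

-- Buffer pool contents per database; each page is 8 KB.
SELECT DB_NAME(database_id) AS database_name,
       COUNT_BIG(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;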
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability