May 3, 2013 at 7:13 am
I have a supplier who insists on having two jobs that change the Max Memory setting of the SQL Server instance. Basically one job sets the memory to 45MB and the next sets it back up to 46MB, each running in turn every two hours. The supplier insists that without this "Memory Poke" some or all writes to the database start to take longer.
I can see that every time the maximum memory setting is changed the procedure cache is flushed out of memory, but I can't find anything that would suggest why regularly changing the max memory setting could influence database writes.
Our monitoring software shows a Procedure Cache Hit Ratio of around 85% on average. A recent disabling of the jobs that change the memory setting didn't seem to push this higher.
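If it helps anyone, the same sort of figure can be pulled straight from the performance counter DMV rather than the monitoring tool; something along these lines (the plan cache counters expose a ratio plus a matching "base" counter you divide by):

-- plan cache hit ratio straight from the counters
SELECT r.[object_name],
       CAST(100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS decimal(5,2)) AS hit_ratio_pct
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
     ON b.[object_name] = r.[object_name]
    AND b.instance_name = r.instance_name
    AND b.counter_name  = 'Cache Hit Ratio Base'
WHERE r.counter_name = 'Cache Hit Ratio'
  AND r.[object_name] LIKE '%:Plan Cache%'
  AND r.instance_name = '_Total';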
Anybody seen this elsewhere, or got any suggestions as to where to look for an explanation; or even confirmation that it can't have that effect? The supplier just wants it running "in case".
May 3, 2013 at 12:44 pm
This is the strangest thing I have heard. I have never heard of anything like that before. But I have one question: is it 45MB or 45GB?
May 3, 2013 at 1:08 pm
Never heard of such a thing. Sounds like someone was under pressure to put something, anything, out there as a solution to a problem they could not solve, but what an odd thing to have decided to do. I would do some observation-based testing and look to remove those jobs.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
May 3, 2013 at 1:30 pm
I was thinking the same as opc.three; it's a false cause-and-effect, I bet.
I'm thinking the procedures started sucking, performance-wise, because of stale statistics, and the clearing of the proc cache, which occurs when you change the memory setting, solves the problem indirectly by forcing some new plans to be created.
That's my first guess.
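If it is stale stats, updating them directly ought to give the same relief without touching max memory; the table name here is just a placeholder for whatever the slow INSERT/UPDATE hits:

-- refresh stats on the table(s) behind the slow writes;
-- cached plans that reference them get recompiled on next use
UPDATE STATISTICS dbo.YourBigTable WITH FULLSCAN;

-- or, more broadly, a sampled refresh across the whole database
EXEC sys.sp_updatestats;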
Lowell
May 3, 2013 at 1:52 pm
I'll call BS on that. Changing the max memory setting is not going to affect SQL writes.
Could be that you have bad plans getting into cache, but there are far better ways of fixing that than changing a setting that clears the cache as a side effect.
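For example, if you do track down one problem plan, you can evict just that plan from cache rather than flushing everything (the text filter below is only a placeholder):

-- find the plan handle for the suspect statement
DECLARE @plan_handle varbinary(64);

SELECT TOP (1) @plan_handle = cp.plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%YourSuspectQuery%';

-- clear only that plan, leaving the rest of the cache alone
IF @plan_handle IS NOT NULL
    DBCC FREEPROCCACHE (@plan_handle);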
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 7, 2013 at 2:21 am
Sorry, my mistake: GB of memory, not MB.
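For clarity, each job is essentially a single sp_configure step along these lines (values in MB, since that's what the option takes; exact figures approximate):

-- job 1: drop max server memory to 45 GB
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 46080;  -- 45 * 1024
RECONFIGURE;

-- job 2, two hours later: back up to 46 GB
EXEC sys.sp_configure N'max server memory (MB)', 47104;  -- 46 * 1024
RECONFIGURE;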
May 8, 2013 at 11:45 am
Identify a specific INSERT or UPDATE statement that is believed to benefit from this cache flushing. It's possible that the actual I/O isn't slow, but rather that the execution plan is stale. Perhaps there is a complex SELECT / JOIN associated with the INSERT or UPDATE. You can specify the RECOMPILE hint on the stored procedure or on individual statements to force a recompile and create a fresh plan each time it is executed. That's a common thing to do for long-running procedures.
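For example (object names here are placeholders only):

-- statement-level hint: fresh plan for just the INSERT with the complex SELECT/JOIN
INSERT INTO dbo.TargetTable (Col1, Col2)
SELECT s.Col1, s.Col2
FROM dbo.SourceTable AS s
JOIN dbo.LookupTable AS l ON l.Id = s.LookupId
OPTION (RECOMPILE);

-- procedure-level: force a recompile for a single execution
EXEC dbo.usp_LoadTargetTable WITH RECOMPILE;

-- or bake it into the definition so a plan is never reused:
-- ALTER PROCEDURE dbo.usp_LoadTargetTable WITH RECOMPILE AS ...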
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho