CPU, Buffers, and Cache oh my!

  • Based on the snapshot below, a couple of questions:

    Can anyone tell me about the Cache Hit Ratio on procedure plans? Mine is around 51%. What sort of hit ratio should I see?

    How can Buffer Cache Hit Ratio be over 100%?

    CPU is erratic: at times it's at 20-30 percent utilization (the norm), other times, like here, it's in the 40s, and at still other times it's at 90+. I haven't pinned down the cause. Could a DTC process do that?

    There are 42 user databases averaging 200+ users, with 1-2 logins/sec, although I've seen that hit 7-10/sec.

    Any other thoughts?

    I know there are too many Adhoc SQL plans, and I have the dev team on it. In particular, one SP was being treated as an ad hoc query.

    Object: Processor

        Instance             0        1        2        3
        % Processor Time    26.592   48.458   46.896   59.391

    Object: SQLServer:Buffer Manager

        Buffer Cache Hit Ratio      100.069
        Cache Size (pages)        14218.000
        Page Requests/sec         53249.294

    Object: SQLServer:Cache Manager

                             Adhoc SQL   Prepared   Procedure
        Cache Hit Ratio         99.443     95.218      51.019
        Cache Object Counts   2286.000   5423.000    1174.000
        Cache Pages           2435.000   6783.000    4678.000
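
    For anyone who wants to reproduce the numbers, the same counters can be read from inside SQL Server. Here is a rough, untested sketch against SQL 2000's sysperfinfo (counter names may vary by build). Perfmon-style ratio counters are stored as a raw value plus a companion "base" counter, and the displayed percentage is value divided by base, which is presumably how a sampling tool can land a hair over 100% when the two are read a moment apart.

        -- Sketch, SQL Server 2000: compute the hit-ratio percentages from
        -- sysperfinfo by pairing each ratio counter with its "base" counter.
        SELECT RTRIM(v.object_name)   AS object_name,
               RTRIM(v.instance_name) AS instance_name,
               CAST(100.0 * v.cntr_value / b.cntr_value AS DECIMAL(7, 3)) AS hit_ratio_pct
        FROM master.dbo.sysperfinfo v
        JOIN master.dbo.sysperfinfo b
          ON  RTRIM(b.object_name)   = RTRIM(v.object_name)
          AND RTRIM(b.instance_name) = RTRIM(v.instance_name)
          AND RTRIM(b.counter_name)  = RTRIM(v.counter_name) + ' base'
        WHERE v.counter_name LIKE '%hit ratio%'
          AND b.cntr_value > 0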

    John Zacharkan

  • Whatever you are seeing is fine; there is no need for you to worry about it. You should start worrying if the cache hit ratio falls below 85% or the CPU hits 100% and stays there for a long time; then you should consider adding more memory, processors, etc.

  • Not sure how buffer cache can go over 100%. I have seen other percentages, like CPU, go over 100 when there are multiple instances.

    CPU will fluctuate, especially with large queries; there is no governor for users. Watch the average. 90% isn't bad for short stretches, like the duration of a query; it's an issue when it impacts multiple people.

    That's quite a few logins/sec. You'd have to determine whether this is an issue for your environment and see if you can do something with the apps.

    Steve Jones

    sjones@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/sjones

  • The Buffer Cache thing is a misreport; I forget whether the issue is with the OS or with SQL, but I believe it is a known issue.

    CPU will always vary, but when it starts running over 50% consistently, you should run Profiler and see what is executing at those times; otherwise it may be hard to pinpoint.

    The question about logins can also be related to the CPU being too busy to process connections quickly. I would only worry about this if CPU utilization is low; if it's high, look for the cause of the CPU utilization first. Also, depending on how your app is built: if it's in house, make sure connections are left open until they absolutely will not be used within the next few minutes, and read up on connection pooling so that connections are pooled instead of closed, for faster access. Opening and closing connections often will cause response times to stretch periodically. As for the ad hoc plans, I'm not sure without knowing what the SP is doing. (A quick sketch of where I'd look first is below.)
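
    Untested, and column names from memory, but something like this against SQL 2000's sysprocesses shows who is burning CPU and whether connections are being pooled or churned:

        -- 1) Sessions with the most accumulated CPU: a cheap first pass
        --    before setting up a Profiler trace.
        SELECT TOP 10 spid, RTRIM(loginame) AS loginame,
               RTRIM(hostname) AS hostname, RTRIM(program_name) AS program_name,
               cpu, physical_io, last_batch
        FROM master.dbo.sysprocesses
        ORDER BY cpu DESC

        -- 2) Connection counts per application and host. Many short-lived
        --    connections per app suggests open/close churn instead of pooling.
        SELECT RTRIM(program_name) AS program_name, RTRIM(hostname) AS hostname,
               COUNT(*) AS connections, MIN(login_time) AS oldest_login
        FROM master.dbo.sysprocesses
        WHERE spid > 50   -- skip system spids
        GROUP BY RTRIM(program_name), RTRIM(hostname)
        ORDER BY connections DESC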

  • Thanks guys,

    I have to work on the connections; there does seem to be a relation between high CPU activity and a high number of logins/sec. It's a web app written in VB/ASP/COM/ADO. I also have a few B2B links to Oracle and DB2, so DTC is running, plus client-installed admin tools written in VB, and God knows what else.

    How about the cache hit ratio for procedure plans being 50%? What's that about, and where should it be?

    The Adhocs I've got under control.

    John Zacharkan

  • Not sure, but usually that is a sign of not enough memory. The cache hit ratio should be over 90%. All the counter means is how often data was found in memory without having to physically load it from disk.
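
    One untested way to check the memory side (again SQL 2000's sysperfinfo; counter names may differ slightly by build):

        -- If Total Server Memory sits pinned at Target Server Memory while
        -- the hit ratios sag, the server wants more memory than it has.
        SELECT RTRIM(counter_name) AS counter_name, cntr_value AS kb
        FROM master.dbo.sysperfinfo
        WHERE object_name LIKE '%Memory Manager%'
          AND (counter_name LIKE 'Total Server Memory%'
               OR counter_name LIKE 'Target Server Memory%')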

  • The cache hit ratio could also be low due to stored procedure recompiles. Case in point: a SQL Server with 2.5 GB of memory, only using about 1.4 GB, with a cache hit ratio of 43%. The reason? Nearly every stored procedure was being recompiled, mainly because of the sp_ prefix on the names: that prefix always causes an initial cache miss unless the sproc is in the master database. A recompile also removes the execution plan from cache, so the next execution takes a cache miss too. Any significant amount of recompiling will drive this number down.
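
    To make the sp_ point concrete, a small illustration (the object names here are made up):

        -- A user-database procedure named with the sp_ prefix is looked up
        -- in master first, so each execution starts with a cache miss there.
        CREATE PROCEDURE dbo.sp_OrderTotals AS
            SELECT COUNT(*) AS order_count FROM dbo.Orders
        GO
        -- Renaming it out of the sp_ namespace avoids the master lookup:
        EXEC sp_rename 'dbo.sp_OrderTotals', 'usp_OrderTotals'
        GO
        -- Verify the plan is cached and reused (SQL 2000's syscacheobjects):
        SELECT cacheobjtype, objtype, usecounts, LEFT(sql, 60) AS sql_text
        FROM master.dbo.syscacheobjects
        WHERE sql LIKE '%OrderTotals%'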

    DTS can cause high CPU utilization. When you see extended periods of CPU utilization at 85% or above, it's time to start looking at the processor as a potential bottleneck. Then start looking at Processor Queue Length and Server Work Queues\Queue Length.

    K. Brian Kelley

    http://www.truthsolutions.com/

    Author: Start to Finish Guide to SQL Server Performance Monitoring

    http://www.netimpress.com/shop/product.asp?ProductID=NI-SQL1


  • Factor in that for ad hoc statements, when an execution plan is compiled, the plan is saved to cache along with an associated cost factor. For an ad hoc statement this value is set to 0, and a 0 cost means the plan can be dropped immediately (lazywriter sweep, etc.).

    Factor this in along with any "sp_..." procedures created in user databases, and you get a low cache hit ratio, especially when memory is low.
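
    An untested sketch against SQL 2000's syscacheobjects to see how much of the cache that churn represents:

        -- Ad hoc plans that ran exactly once carry the zero cost described
        -- above and go first in a lazywriter sweep; a long list here usually
        -- lines up with a low Adhoc cache hit ratio.
        SELECT cacheobjtype, objtype, usecounts, pagesused,
               LEFT(sql, 60) AS sql_text
        FROM master.dbo.syscacheobjects
        WHERE objtype = 'Adhoc' AND usecounts = 1
        ORDER BY pagesused DESC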
