June 14, 2006 at 9:53 am
We use a tool that monitors performance via perfmon, and when certain thresholds are exceeded it starts emailing errors.
These are the stats it is complaining about. I'm not sure what the problem might be. We did have to reboot the DB server this morning because it more or less died and we had to rejoin it to the domain; I'm not sure if the two are related. Any ideas? Does anyone see anything alarming in the statistics below?
Cache Hit Ratio = 63.89%
Cache Pages = 17286
Cache Object Counts = 9153
Cache Use Counts/sec = 10.76/sec
Data File(s) Size (KB) = 8776576
Log File(s) Size (KB) = 6733640
Log File(s) Used Size (KB) = 1901535
Percent Log Used = 28
Active Transactions = 0
Transactions/sec = 2.33/sec
Repl. Pending Xacts = 0
Repl. Trans. Rate = 0.00/sec
Log Cache Reads/sec = 6.61/sec
Log Cache Hit Ratio = 50.95%
Bulk Copy Rows/sec = 0.00/sec
Bulk Copy Throughput/sec = 0.00/sec
Backup/Restore Throughput/sec = 0.00/sec
DBCC Logical Scan Bytes/sec = 0.00/sec
Shrink Data Movement Bytes/sec = 0.00/sec
Log Flushes/sec = 0.95/sec
June 14, 2006 at 11:17 am
Yes - your Cache Hit Ratio is way too low. This should be 95% or better, especially when you're only asking it to service about 2 transactions per second. The fact that it isn't means that far too much data is being pushed out of memory back to disk so that fresh data can be read in. How much memory does your server have? Your data file is roughly 8.4 GB (8,776,576 KB), so there's a very good chance that lack of memory is your problem.
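If you want to spot-check that counter yourself rather than waiting for the monitoring tool, the same perfmon counters are exposed in master.dbo.sysperfinfo on SQL Server 2000. Something like this should do it (a sketch: the ratio is stored as two raw values that have to be divided, and the counter names are blank-padded, hence the RTRIMs):

```sql
-- Buffer cache hit ratio, computed from the raw sysperfinfo counters
SELECT 100.0 * CAST(r.cntr_value AS FLOAT) / b.cntr_value
       AS [Buffer cache hit ratio %]
FROM   master.dbo.sysperfinfo r
JOIN   master.dbo.sysperfinfo b
       ON RTRIM(b.object_name) = RTRIM(r.object_name)
WHERE  RTRIM(r.counter_name) = 'Buffer cache hit ratio'
  AND  RTRIM(b.counter_name) = 'Buffer cache hit ratio base'
  AND  RTRIM(r.object_name) LIKE '%Buffer Manager'
```

Run that a few times over the day - a single snapshot can be misleading because the ratio is cumulative since the last restart.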
A big cause of this can be inefficient code: code that uses lots of table/index scans instead of index seeks, so it has to manipulate far more data than it needs, and all of that churning happens in memory.
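To illustrate the kind of thing to look for (the table and column names here are purely hypothetical): a predicate that wraps the indexed column in a function can't use an index seek, while an equivalent range predicate can:

```sql
-- Assume a hypothetical Orders table with an index on OrderDate.

-- Forces a scan: the function on the column hides it from the index
SELECT OrderID FROM Orders WHERE YEAR(OrderDate) = 2006

-- Can use an index seek: a plain range on the indexed column
SELECT OrderID
FROM   Orders
WHERE  OrderDate >= '20060101' AND OrderDate < '20070101'
```

Check the estimated execution plan in Query Analyzer for each suspect query and rewrite the ones showing Table Scan or Index Scan operators where a seek ought to be possible.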
June 14, 2006 at 11:21 am
Thank you. I have 3 GB of memory in this server, which should be sufficient. SQL Server will only use up to 2 GB (I think).
June 15, 2006 at 2:26 am
Why do you say that 2 GB should be sufficient? With a database that size, and a cache hit ratio that low, I'd say that it clearly isn't sufficient. You're right that SQL Server Standard Edition will only use up to 2 GB, so if that's what you've got, then you're stuck with it (even though it isn't sufficient!). If you've got Enterprise Edition, then SQL Server can use up to 3 GB by adding the /3GB switch to boot.ini, or up to 64 GB by enabling AWE (with the /PAE boot switch) - obviously after you've increased the on-board memory accordingly first!
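To check which edition you're actually running, and to turn AWE on if it does turn out to be Enterprise, something like this (a sketch: AWE also needs the service account to hold the Lock Pages in Memory right, and the setting only takes effect after SQL Server is restarted):

```sql
-- Which edition is this instance?
SELECT SERVERPROPERTY('Edition')

-- Enable AWE (Enterprise Edition only; takes effect after a restart)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
```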
I'd strongly suggest analyzing your code to look for table scans / index scans. These will prevent pages from persisting in cache for long, which is why your cache hit ratio is low (pages that ought to be in memory aren't found there because they've been pushed out by the large volumes of data pages produced by over-large, inefficient joins). Use Profiler and look for queries with a high number of reads - it's difficult to pick a starting figure, but >10,000 reads should be a reasonable threshold.
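If you save the Profiler trace to a file, you can pull the worst offenders out with fn_trace_gettable instead of eyeballing the GUI (the trace file path below is just a placeholder):

```sql
-- Read a saved Profiler trace and list the heaviest queries by logical reads
SELECT TextData, Reads, Duration, CPU
FROM   ::fn_trace_gettable('C:\traces\mytrace.trc', DEFAULT)
WHERE  Reads > 10000
ORDER BY Reads DESC
```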