June 30, 2003 at 2:25 pm
Sometimes my CPU utilization runs close to 100% (2 CPUs). This is what I see when I monitor through Perfmon:
Buffer cache hit ratio: 99%
Process\Page Faults/sec for SQL Server: almost 100
Processor\% Processor Time: above 90
System\Processor Queue Length: above 4
Memory\Pages/sec: above 20
How come my cache hit ratio is 99% but page faults are also high? The pagefile.sys files are on the C and E drives, and all data and transaction logs are on the E drive (which is not good). Should I set the pagefile on the E drive to 0 MB and move the transaction log to the C drive? Users are complaining that the system is slow. I don't see any locks; it's just high-volume transaction traffic. Any help greatly appreciated.
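For reference, this is roughly how the same ratio can be read from inside SQL Server (a sketch only, assuming a default SQL Server 2000 instance; on a named instance the object_name prefix differs):

-- The hit ratio has to be computed against its "base" counter.
SELECT  (a.cntr_value * 100.0) / NULLIF(b.cntr_value, 0) AS [Buffer cache hit ratio %]
FROM    master.dbo.sysperfinfo a
JOIN    master.dbo.sysperfinfo b
        ON b.object_name = a.object_name
WHERE   a.object_name  LIKE '%Buffer Manager%'
  AND   a.counter_name = 'Buffer cache hit ratio'
  AND   b.counter_name = 'Buffer cache hit ratio base'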
June 30, 2003 at 4:48 pm
I'd say you probably need more memory. The page swapping is driving up your CPU load. Check drive I/O while you're looking; it's probably high as well. All things considered, more memory never hurts and is easier to add than drive space.
Andy
June 30, 2003 at 5:08 pm
Often my first action in such circumstances is to run Profiler interactively (locally if possible) to capture SQL with CPU > zero. I tend to run it for 15-30 minutes, or for 20,000-50,000 lines, whichever comes first.
Saving the trace to a development table for analysis through SQL queries, I once came across a query that did a table scan of a relatively small table (e.g. 10,000 rows). Although its individual CPU was too small to really stand out (e.g. 25 ms), the SQL analysis showed it had been called 13,000 times during the profiling period and accounted for 85% of the overall CPU. A quick index creation fixed the problem.
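As a rough sketch (the file path and table name are just illustrative), loading a saved trace into a table and summing CPU per statement looks something like this:

-- Load a saved Profiler trace file into a work table (path is illustrative).
SELECT  *
INTO    dbo.TraceResults
FROM    ::fn_trace_gettable(N'C:\Traces\cpu_trace.trc', default)

-- Group by statement text to find queries that are cheap per call
-- but expensive in aggregate.
SELECT  CAST(TextData AS nvarchar(4000)) AS QueryText,
        COUNT(*)                         AS Executions,
        SUM(CPU)                         AS TotalCPU,
        AVG(CPU)                         AS AvgCPU
FROM    dbo.TraceResults
GROUP BY CAST(TextData AS nvarchar(4000))
ORDER BY TotalCPU DESC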
Of course, this may not be the problem in your case, but profiler is just an option to explore.
Cheers,
- Mark
July 1, 2003 at 8:18 am
Hi I've been having the same problem on a 4 CPU system. Two things I found:
1) We have a heavily used SP that returns a varying number of rows, sometimes just 1-5 and sometimes 50+. Microsoft suggests adding WITH RECOMPILE to the SP, and that corrected the problem (sorry, I don't know the MS article #). A rough sketch is below, after this list.
2) Another problem we were having is what I consider a "virus". There was a .pif file that just had a lot of text in it; whenever anyone browsed the folder, the CPUs would max out. We sent the file to several virus companies and no virus could be found. I think someone just exploited a bug in MS software.
I am still getting CPU spikes, but not as often, and I am working on the problem.
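For item 1, a minimal sketch of what the change looks like (the procedure, table, and column names here are made up):

-- WITH RECOMPILE forces a fresh plan on every execution, so a plan built for
-- a 1-row result is not reused for a 50+ row result.
CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerID int
WITH RECOMPILE
AS
    SELECT OrderID, OrderDate, TotalDue
    FROM   dbo.Orders
    WHERE  CustomerID = @CustomerID
GO

-- Alternatively, force a recompile for a single call only:
EXEC dbo.usp_GetCustomerOrders @CustomerID = 42 WITH RECOMPILE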
Curtis Smith
Application Dev. Manager
Curtis Smith
SQL Server DBA
Well, it worked in theory ...
July 1, 2003 at 9:52 am
Thanks for your replies. I am going to work on this; it has all the issues you have described.
1. I have 2 GB of RAM, which is not sufficient for this high-volume traffic. Every 15 minutes the transaction log dump is around 8 to 14 MB.
2. I have used Profiler to trace all SQL and stored procedures (is this enough?) and am going to do the analysis.
3. We use lots of views with no indexes on them, and I need to work on schema binding etc. (a rough sketch is after this list).
4. I searched the system for *.pif files and found some. Every half hour I run a batch job which zips the logs and moves them to another storage server on the network; this file is in the WinZip directory, a shortcut. Should I remove it?
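For point 3, here is roughly what an indexed view looks like (a sketch only; the table and column names are made up, and the summed column is assumed non-nullable):

-- SCHEMABINDING is required before a view can be indexed. Indexed views also
-- need two-part object names, COUNT_BIG(*) when grouping, and specific SET
-- options (e.g. ANSI_NULLS and QUOTED_IDENTIFIER ON) at creation time.
CREATE VIEW dbo.vCustomerOrderTotals
WITH SCHEMABINDING
AS
    SELECT CustomerID,
           COUNT_BIG(*)  AS OrderCount,
           SUM(TotalDue) AS TotalSpend
    FROM   dbo.Orders
    GROUP BY CustomerID
GO

-- The unique clustered index is what actually materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_vCustomerOrderTotals
    ON dbo.vCustomerOrderTotals (CustomerID)
GO

Bear in mind that on SQL Server 2000 only Enterprise Edition considers the view's index automatically; on other editions queries have to reference the view with the NOEXPAND hint.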
July 1, 2003 at 9:46 pm
I usually start with just RPC:Completed and SQL:BatchCompleted, and only get more detailed thereafter if necessary.
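If it helps, a server-side trace that captures just those two events looks roughly like this (a sketch; the output path and file size are illustrative, and the event/column IDs are the standard Profiler ones):

DECLARE @traceid int, @maxfilesize bigint, @on bit
SET @maxfilesize = 50
SET @on = 1

-- Option 2 = file rollover; the .trc extension is appended automatically.
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\cpu_trace', @maxfilesize

-- Event 10 = RPC:Completed, event 12 = SQL:BatchCompleted.
-- Columns: 1 = TextData, 12 = SPID, 13 = Duration, 18 = CPU.
EXEC sp_trace_setevent @traceid, 10, 1,  @on
EXEC sp_trace_setevent @traceid, 10, 12, @on
EXEC sp_trace_setevent @traceid, 10, 13, @on
EXEC sp_trace_setevent @traceid, 10, 18, @on
EXEC sp_trace_setevent @traceid, 12, 1,  @on
EXEC sp_trace_setevent @traceid, 12, 12, @on
EXEC sp_trace_setevent @traceid, 12, 13, @on
EXEC sp_trace_setevent @traceid, 12, 18, @on

-- Status 1 = start the trace (0 stops it, 2 closes and deletes the definition).
EXEC sp_trace_setstatus @traceid, 1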
Cheers,
- Mark