November 11, 2003 at 10:15 pm
Hi All,
We are testing a new application that executes a series of stored procedures from ASP pages. Our testing involves 60/120/240 user connections executing the same number of stored procedures, all within one second.
The problem I have is that we are experiencing 100% CPU utilisation whenever we test. Memory is fine (4GB), paging is virtually non-existent, there is barely any disk I/O, and the Buffer Cache Hit Ratio is always 99% or better. I am not seeing any significant locking either.
Does anyone have any ideas?
Thanks
Angela
November 12, 2003 at 5:23 am
That sounds very normal for a DB server that is bottlenecked on its processor(s), which is not a bad place to bottleneck. If you want to bottleneck somewhere else, you could either downgrade other components (e.g. remove memory) or upgrade the processor component; i.e. replace with faster and/or more processors.
If you are not getting the required throughput, the first thing to examine is actually the database, particularly the stored procedures, schema, and indexes. If you cannot improve their efficiency, then you may indeed need to change the hardware.
--Jonathan
November 12, 2003 at 7:42 am
This could very well be an indexing problem.
I worked with a clustered instance that had a similar issue: 8 CPUs maxing out, but memory and disk usage seemed fine. After adding an index to a fairly large table, the problem went away.
JM
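As a sketch of what that fix looks like: if the execution plan for one of the stored procedures shows a table scan (or clustered index scan) against a large table, a covering nonclustered index on the scanned column can eliminate it. The table and column names below are hypothetical, purely for illustration:

```sql
-- Hypothetical example: a large table filtered on OrderDate with no supporting index.
-- The estimated plan (or SET STATISTICS PROFILE ON) would show a Table Scan here.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
```

After creating the index, re-check the plan to confirm it is actually being used before assuming the problem is solved.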
November 12, 2003 at 1:22 pm
Same line of thought. I had a situation where a report was run every couple of minutes.
Fortunately, I knew who ran it without needing any kind of trace. On looking closely, you could tell the main table/columns used had no indexes on them.
It could very well be something else in your case, but a missing index is a likely culprit.
Mike
November 13, 2003 at 7:47 am
That's a good thought. Run a trace in Profiler to see what is going on.
Dr. Peter Venkman: Generally you don't see that kind of behavior in a major appliance.
Patrick
Quand on parle du loup, on en voit la queue ("speak of the wolf and you see its tail")
November 19, 2003 at 11:21 pm
Well, I investigated a bit more and added a new clustered index to a table. Everything runs quicker now, but I still have 100% CPU across 4 hyperthreading processors (4 physical/8 logical). I also have the same problem on the production server. BTW, I have just walked onto this site, and it seems the previous DBA just ignored the problem and hoped it would go away.
This test server is a brand new machine with 1.5 GHz Xeon processors.
Edited by - AngelaBut on 11/19/2003 11:23:42 PM
November 20, 2003 at 5:35 am
Need to narrow it down. Look for the proc that has the highest CPU utilization, then see if it makes sense and what you can do to reduce it.
Andy
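One quick way to narrow it down on SQL Server 2000, before setting up a full trace, is to snapshot which connections have accumulated the most CPU. This is only a rough sketch; the `cpu` column is cumulative milliseconds since the connection opened, so take two snapshots a minute apart and compare:

```sql
-- Rough first pass: which connections are burning the most CPU (SQL Server 2000).
-- cpu is cumulative ms for the life of the connection, so compare snapshots over time.
SELECT spid, cpu, physical_io, hostname, program_name, loginame
FROM master..sysprocesses
WHERE spid > 50          -- skip system spids
ORDER BY cpu DESC
```

Once you know which connections are the heavy hitters, `DBCC INPUTBUFFER(spid)` will show the last statement each one ran, which usually points at the offending proc.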
November 20, 2003 at 1:28 pm
Yes, things like 100% CPU utilization will go away if you ignore them long enough.
Dr. Peter Venkman: Generally you don't see that kind of behavior in a major appliance.
Patrick
Quand on parle du loup, on en voit la queue ("speak of the wolf and you see its tail")
November 20, 2003 at 3:28 pm
In Profiler, open a trace with the default template. On the Events tab, remove the groups on the right except for what is already there under Stored Procedures and TSQL. Under the Data Columns tab, remove the CPU column from the right, then re-add it under the Groups 'folder'.
Sometimes, when I have trouble working with the grouping functionality within Profiler itself, I won't group by anything and will just have it save to a table. Then query the table for the statements where CPU (versus time to execute, in your case) is highest.
Edited by - cmore on 11/20/2003 3:28:28 PM
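To illustrate the save-to-table approach: once the trace rows are in a table, finding the worst offenders is a simple query. The table name `dbo.MyTrace` below is just a placeholder for whatever you named it when saving the trace:

```sql
-- Assuming the Profiler trace was saved to a table named dbo.MyTrace (hypothetical name),
-- pull the completed statements with the highest CPU.
SELECT TOP 20 TextData, CPU, Duration, Reads, Writes
FROM dbo.MyTrace
WHERE EventClass IN (10, 12)   -- 10 = RPC:Completed, 12 = SQL:BatchCompleted
ORDER BY CPU DESC
```

Sorting by CPU rather than Duration matters here: a statement can have a long duration from blocking while using little CPU, and it is the CPU burners you are hunting.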
November 21, 2003 at 7:22 am
When I use Profiler and save the trace to a table, I do not use another database on the same server, as that might affect performance. I typically save it to a local database on my workstation.
That also allows me to run the Index Tuning Wizard on my box without affecting the other users on the server.
Dr. Peter Venkman: Generally you don't see that kind of behavior in a major appliance.
Patrick
Quand on parle du loup, on en voit la queue ("speak of the wolf and you see its tail")
November 21, 2003 at 8:32 am
Oof, yeah, forgot about that little factor. I also use a local db, one I named sandbox, that I putz around in with junk stuff all of the time.
November 25, 2003 at 3:49 pm
You might want to experiment with adding
WITH RECOMPILE
to the stored procedures.
We experienced the same problem, and we needed this clause in a few of the stored procs hitting large, dynamic tables.
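For anyone unfamiliar with the clause, here is a minimal sketch (the proc, table, and parameter names are made up for illustration). WITH RECOMPILE forces a fresh plan on every execution, which avoids reusing a cached plan that was built for unrepresentative parameter values:

```sql
-- Hypothetical proc: WITH RECOMPILE discards the cached plan and recompiles
-- on every call, so a plan tuned for one @CutoffDate never hurts another.
CREATE PROCEDURE dbo.usp_GetRecentOrders
    @CutoffDate datetime
WITH RECOMPILE
AS
    SELECT OrderID, CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= @CutoffDate
GO
```

One caveat worth noting in this thread: recompilation itself costs CPU, so on a box already pegged at 100% it is a trade-off, best reserved for the few procs whose cached plans are genuinely bad.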