October 24, 2006 at 4:52 am
Good day all,
Just a general question, what is the recommended level for the "% Processor Time - _Total"?
We are doing some capacity planning for a new system. The average is around 40%, ranging from about 35% on the low side to 46% on the high side. This is based on a 1:40 (minutes:seconds) PerfMon session.
Basics of the box: HP ML570, dual processor (hyperthreaded), 2 GB of memory, RAID 5.
We have a new box that we are planning to implement, but only after the go-live date of the new system.
The new box is an HP ML570 (the newest model), dual processor (dual-core, upgradable to 4), 16 GB of memory, RAID 10.
We are also continually looking at ways of improving all our applications: indexing, stored procedure rewrites, daily stats jobs, etc.
Any ideas?
Thanks in advance,
Graham
October 24, 2006 at 6:56 am
Good question. The average of 40% is not too much of an issue at all.
However, rather than a 1:40 PerfMon session, I would suggest setting up a PerfMon counter trace and monitoring for a full day. That way you capture data at the busiest times of the day.
I would also monitor various aspects of your environment to get an idea of potential hot spots: consider disk queue length, memory, CPU, etc.
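Once you have a day's worth of data, summarizing it offline is straightforward. A minimal Python sketch, assuming the counter values have already been exported from the PerfMon log into a plain list of floats (the function name and the sample data are illustrative, not from any real trace):

```python
def summarize(samples):
    """Return mean, nearest-rank 95th percentile, and max of counter samples."""
    ordered = sorted(samples)
    mean = sum(ordered) / len(ordered)
    # Nearest-rank 95th percentile: the value at the 95% position in sorted order.
    p95 = ordered[max(0, round(0.95 * len(ordered)) - 1)]
    return mean, p95, ordered[-1]

# Synthetic example: mostly-steady CPU with a few busy spikes.
cpu = [38.0] * 95 + [90.0] * 5
mean, p95, peak = summarize(cpu)
print(mean, p95, peak)  # → 40.6 38.0 90.0
```

Looking at the 95th percentile and the peak alongside the mean is useful precisely because an average of 40% can hide short bursts that matter for sizing.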
October 24, 2006 at 8:48 am
I do currently monitor all the other counters; the disk queue hardly ever reaches 2 on a three-disk RAID 5.
Memory is an issue, as Page Life Expectancy is getting dropped all the time. That's because the server was originally set up on Standard Edition, both the OS and SQL Server. There are plans to move to SQL Server 2000 Enterprise Edition and Windows Server 2003 Enterprise Edition.
Basically, the 40% comes from capturing at a sample rate of 1 second, so the full PerfMon screen averages over 1:40 (minutes:seconds), and that average holds for as long as the new system is active. The original figure, before the new system, was around 15 - 18%.
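For what it's worth, at a 1-second sample interval the PerfMon chart window holds 100 samples, so the displayed average is simply the mean of the last 100 readings, and a short spike barely moves it. A quick sketch with synthetic numbers:

```python
# 100 samples = one full 1:40 PerfMon window at a 1-second sample interval.
# Two brief 95% spikes in an otherwise-steady 40% load (synthetic data).
samples = [40.0] * 98 + [95.0] * 2
window_avg = sum(samples) / len(samples)
print(window_avg)  # → 41.1, the spikes are almost invisible in the average
```

That is one reason a day-long counter log, where you can inspect peaks as well as the average, tells you more than the on-screen figure.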
Basically I'm just trying to find out what a good number to be at is... Very hard to choose, I know... It's basically that the new box is future-proofing as well.
Graham
October 24, 2006 at 9:06 am
It's really like guessing how long a piece of string is!
There are so many parameters to consider in settling on your own CPU threshold: the size of the database, tables being accessed, correct indexing, active connections, disk I/O times, buffer cache hit ratio, transactions per minute or second.
40%, as I said, isn't that high, but our system, for example, pushes through 100k+ transactions a minute and quite happily handles all of that at 5% CPU or under.
We do get spikes, but then we can tie those to high disk I/O queries when we look at a Profiler trace, for example.
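Picking out which samples in a counter log count as "spikes", so they can be lined up against a Profiler trace by timestamp, can be automated. A minimal Python sketch (the threshold rule, function name, and data are illustrative, not anything the posters describe):

```python
import statistics

def spike_indices(samples, k=2.0):
    """Return positions of samples more than k standard deviations above the
    mean, candidates to correlate with a Profiler trace by timestamp."""
    mean = statistics.fmean(samples)
    sd = statistics.pstdev(samples)
    return [i for i, v in enumerate(samples) if v > mean + k * sd]

# Synthetic example: quiet CPU with one spike at position 57.
cpu = [5.0] * 57 + [70.0] + [5.0] * 2
print(spike_indices(cpu))  # → [57]
```

With each flagged position mapped back to its sample time, you can then filter the Profiler trace to the same windows and look for the high-I/O queries responsible.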