SQL Server 2000 Std using only 25% CPU

  • Hi all,

    I hope this is a really simple issue. Even under very high load - tons of inserts, selects and updates all at the same time - my new SQL Server 2000 dual-CPU (dual-core) machine never takes sqlservr.exe over 25%. I assume it is only using one core of one CPU, but since we spent a few thousand on a new machine and are only seeing a small increase in performance (compared to the spec increase, anyway), this is a problem I would love to solve!

    If SQL Server 2000 can only use one CPU core then I would appreciate some upgrade path advice - can SQL Server 2005 better use the hardware?

    The new box has 4 GB RAM, dual Xeon 5130s (2.0 GHz) and three RAID arrays: a mirror for the OS, a mirror for the logs and RAID 5 for the data.

    Thanks!

    -

    Peter

  • It's possible that you are bottlenecked on IO (likely) or memory, and hence the CPU can't get to full utilisation. In general, you don't want to see your CPU at more than about 75%-80% utilisation over long periods.

    What are the avg values of the following perfmon counters?

    Physical disk: Avg. sec/Read

    Physical disk: Avg. sec/Write

    Physical disk: % Idle Time

    Memory: Pages/sec

    Memory: Available MBytes

    Processor: kernel %

    Processor: user %

    Processor: context switches/sec

    SQL Buffer Manager: Buffer cache hit ratio
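
    As an aside, the buffer cache hit ratio is also visible from inside SQL Server via master..sysperfinfo (the same counters perfmon exposes) - something like this should work:

        -- Buffer cache hit ratio from sysperfinfo. The raw counter has to be
        -- divided by its "base" counter to get a percentage.
        SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS BufferCacheHitRatioPct
        FROM master..sysperfinfo r
        JOIN master..sysperfinfo b
          ON b.object_name = r.object_name
        WHERE r.counter_name = 'Buffer cache hit ratio'
          AND b.counter_name = 'Buffer cache hit ratio base'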

    Also, run Profiler for a while and see whether you have lots of high-read or high-write queries, and whether any have a high duration compared to their CPU time.

    Hardware very seldom gives a large boost in performance, unless your previous server was very over-utilised. Generally the best performance improvements come from code optimisation, followed by index tuning.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (6/30/2008)


    It's possible that you are bottlenecked on IO (likely) or memory, and hence the CPU can't get to full utilisation. In general, you don't want to see your CPU at more than about 75%-80% utilisation over long periods.

    What are the avg values of the following perfmon counters?

    Physical disk: Avg. sec/Read

    Physical disk: Avg. sec/Write

    Physical disk: % Idle Time

    Memory: Pages/sec

    Memory: Available MBytes

    Processor: kernel %

    Processor: user %

    Processor: context switches/sec

    SQL Buffer Manager: Buffer cache hit ratio

    Memory\Available MBytes: 1940.533647
    Memory\Pages/sec: 0.823158459
    PhysicalDisk(_Total)\% Idle Time: 93.1214583
    PhysicalDisk(_Total)\Avg. Disk sec/Read: 0.002550294
    PhysicalDisk(_Total)\Avg. Disk sec/Write: 0.008001548
    Processor(_Total)\% User Time: 3.379542191
    SQLServer:Buffer Manager\Buffer cache hit ratio: 97.81703896

    These were taken over a mix of very quiet, quiet and busy periods (that's all we seem to get!) - about two hours' worth.

    I couldn't find kernel % or context switches/sec...

    I will set up a Profiler trace today to run tomorrow.

    Thanks for your help and any suggestions you can give from this info.

    -

    Peter

  • From the looks of the stats you posted, you have a very well-running SQL system that is on hardware more powerful than it currently needs.

    Are you having performance problems, or are you just concerned that the hardware doesn't appear to be fully utilised?

    This must be the first time I've seen someone posting here saying that their CPUs aren't utilised enough. Normally it's the other way round.

    P.S. Kernel % is actually % Privileged Time.

    Ignore the request for context switches. I was thinking of something else.

    Edit: Make sure that you don't have a CPU affinity set in the Server properties.
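
    You can check that from T-SQL as well - something like this (a run_value of 0 means no restriction):

        -- 'affinity mask' is an advanced option, so enable show advanced options first
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE
        EXEC sp_configure 'affinity mask'   -- run_value 0 = use all available CPUs
        -- To clear a restrictive mask (only after checking why it was set):
        -- EXEC sp_configure 'affinity mask', 0
        -- RECONFIGURE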

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hi Gail,

    We have moments of terrible performance - timeout errors, MS Access hanging, etc. - and generally slow query run times, and I wanted to make sure that as much power as possible was available.

    I guess the next step is to run the same trace when the performance is as above?

    If I can get it I will post it!

    Thanks for all your help!

    -

    Peter

  • Does the poor performance occur at regular intervals or times?

    The best thing would be to run Profiler and perfmon during one of the slow periods. The Profiler events you want are Stored Procedures: RPC:Completed and TSQL: SQL:BatchCompleted. Make sure you capture TextData, Duration, CPU and Reads. If you're worried about the impact of Profiler, set up a server-side trace (Profiler can generate the script) and write the trace file to a drive that is not used by your databases.
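
    If it helps, a bare-bones server-side trace script looks something like this (Profiler will generate a fuller one; the file path and size here are just placeholders):

        DECLARE @TraceID int, @maxfilesize bigint, @on bit
        SET @maxfilesize = 100      -- MB per rollover file
        SET @on = 1

        -- Option 2 = file rollover. Point the file at a drive your databases don't use.
        EXEC sp_trace_create @TraceID OUTPUT, 2, N'E:\Traces\SlowPeriod', @maxfilesize, NULL

        -- Event 10 = RPC:Completed, 12 = SQL:BatchCompleted
        -- Columns: 1 = TextData, 12 = SPID, 13 = Duration, 14 = StartTime, 16 = Reads, 18 = CPU
        EXEC sp_trace_setevent @TraceID, 10, 1, @on
        EXEC sp_trace_setevent @TraceID, 10, 12, @on
        EXEC sp_trace_setevent @TraceID, 10, 13, @on
        EXEC sp_trace_setevent @TraceID, 10, 14, @on
        EXEC sp_trace_setevent @TraceID, 10, 16, @on
        EXEC sp_trace_setevent @TraceID, 10, 18, @on
        EXEC sp_trace_setevent @TraceID, 12, 1, @on
        EXEC sp_trace_setevent @TraceID, 12, 12, @on
        EXEC sp_trace_setevent @TraceID, 12, 13, @on
        EXEC sp_trace_setevent @TraceID, 12, 14, @on
        EXEC sp_trace_setevent @TraceID, 12, 16, @on
        EXEC sp_trace_setevent @TraceID, 12, 18, @on

        EXEC sp_trace_setstatus @TraceID, 1    -- start the trace
        SELECT @TraceID AS TraceID             -- keep this id to stop (status 0) and close (status 2) later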

    Perfmon's fairly light on resources, unless you're capturing hundreds of values every second. I would suggest you set up a counter log that records the perfmon counters I listed above to a file every minute or so. Run the log for a couple of days and save it to a CSV file.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Also, during those slow times, query sysprocesses and look for what the wait times and last wait types are.
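
    Something like this will do (the spid > 50 filter just skips the system processes):

        SELECT spid, status, blocked, waittime, lastwaittype, waitresource,
               cpu, physical_io, hostname, program_name
        FROM master..sysprocesses
        WHERE spid > 50        -- user connections only
          AND waittime > 0
        ORDER BY waittime DESC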

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • It seems to happen at certain times of day (which is probably due to automated processes like mass imports/updates), but also sometimes seemingly at random (I say seemingly because I'm sure there is a reason!) once a day or so. Some days it can be three or four times, for about 5-10 minutes each.

    I have set the perfmon counters to run for 3 days, but 3 days' worth of Profiler is going to be a lot of data! Would it ruin the data if I set a duration > 1 second filter?

    The sysprocesses table has 270 or so rows and about 35 of these have high waittimes (around 20,000,000 to 1,500,000) with type NETWORKIO.

    These traces etc are getting to the limit of my experience so I hope I'm giving the right information back!

    Once the perfmon log has run I can post a summary; the Profiler output and the sysprocesses table, however, I don't know how to analyse.

    Thanks!

    -

    Peter

  • Hi All,

    I'm facing almost the same problem, but here I use SQL 2000 Enterprise on an IBM x3650 server.

    It's a dual-core Xeon at 1.7 GHz with 4 GB of physical memory.

    Every time the finance department starts its process, SQL only uses one core; the other core sits unused. The core that is used stays at 85 to 95% almost the whole time - sometimes for half an hour, sometimes less (around 20 minutes).

    Is there any tweak that can be done in SQL so that it uses both cores of the processor?

    Any suggestion would be appreciated.

    Regards

    JUN

  • Peter Curd (7/1/2008)


    I have set the perfmon counters to run for 3 days, but 3 days' worth of Profiler is going to be a lot of data! Would it ruin the data if I set a duration > 1 second filter?

    You can, though it may be better to set up a server-side trace in a job, and start the job when your server starts showing signs of a slow down. It does require you to be watching, but it will avoid having several days of (mostly useless) profiler data.
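
    For a server-side trace, the duration filter is added like this (the trace id is whatever sp_trace_create returned; SQL 2000 reports Duration in milliseconds):

        DECLARE @TraceID int, @minduration int
        SET @TraceID = 2          -- replace with your trace id
        SET @minduration = 1000   -- 1 second, in milliseconds (on SQL 2005 Duration is bigint and in microseconds)

        EXEC sp_trace_setstatus @TraceID, 0                       -- the trace must be stopped first
        EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @minduration  -- column 13 = Duration, operator 4 = >=
        EXEC sp_trace_setstatus @TraceID, 1                       -- start it again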

    The sysprocesses table has 270 or so rows and about 35 of these have high waittimes (around 20,000,000 to 1,500,000) with type NETWORKIO.

    *Blink* Wait times of up to 5 and a half hours? (wait times are in milliseconds) That's scary. What network speed do you have there? Check the network cards and see what their speed and duplex settings are. Same for any switches you have on the network.

    These traces etc are getting to the limit of my experience so I hope I'm giving the right information back!

    Look on the bright side. You're learning a lot. 😀

    Once the perfmon log has run I can post a summary; the Profiler output and the sysprocesses table, however, I don't know how to analyse.

    I will help you where I can. Once you have some profiler data I can offer some pointers on how to analyse it.
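
    As a head start: once you have a trace file you can read it straight back into a query with fn_trace_gettable and sort by duration - roughly like this (the path is just an example):

        SELECT TOP 25
               CAST(TextData AS varchar(4000)) AS QueryText,
               Duration, CPU, Reads,
               Duration - CPU AS NotOnCPU    -- a big gap usually means waiting (IO, blocking, network)
        FROM ::fn_trace_gettable(N'E:\Traces\SlowPeriod.trc', DEFAULT)
        WHERE EventClass IN (10, 12)         -- RPC:Completed, SQL:BatchCompleted
        ORDER BY Duration DESC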

    You can attach files to your posts. If you want (and your company policy allows), copy the results of a query of sysprocesses into a spreadsheet, zip and attach. Probably best if you do that during one of your slow periods.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • kentangki (7/1/2008)


    Hi All,

    I'm facing almost the same problem, but here I use SQL 2000 Enterprise on an IBM x3650 server.

    It's a dual-core Xeon at 1.7 GHz with 4 GB of physical memory.

    Please start your own thread so as to not hijack this one.

    Thanks

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Sorry - I don't mean to hijack this thread; I just thought the problem was almost the same.

  • GilaMonster (7/1/2008)


    Peter Curd (7/1/2008)


    I have set the perfmon counters to run for 3 days, but 3 days' worth of Profiler is going to be a lot of data! Would it ruin the data if I set a duration > 1 second filter?

    You can, though it may be better to set up a server-side trace in a job, and start the job when your server starts showing signs of a slow down. It does require you to be watching, but it will avoid having several days of (mostly useless) profiler data.

    I have the code exported - I have been dragged off SQL admin for a few days, so I will attempt to write a painful query (unless you can suggest something?) and add the trace to it. As I'm away from the site I won't know when it is slowing down today, but hopefully I can do it tomorrow.

    The sysprocesses table has 270 or so rows and about 35 of these have high waittimes (around 20,000,000 to 1,500,000) with type NETWORKIO.

    *Blink* Wait times of up to 5 and a half hours? (wait times are in milliseconds) That's scary. What network speed do you have there? Check the network cards and see what their speed and duplex settings are. Same for any switches you have on the network.

    I have attached an Excel sheet with a partial sysprocesses export - just the rows with high waittimes - with obfuscated server names and usernames, if you could take a look; I really don't know what it all means. The backbone network between servers is 1 Gbps, with 100 Mbps to the workstations/laptops. The network card (it has only one socket in use) is at 1 Gbps, auto duplex. The switches are pretty much all 3Com SuperStack IIIs - as far as I can see without going into the server room and tracing cables, all using auto duplex too.

    These traces etc are getting to the limit of my experience so I hope I'm giving the right information back!

    Look on the bright side. You're learning a lot. 😀

    Very true! And I'm happy to be learning!

    Once the perfmon log has run I can post a summary; the Profiler output and the sysprocesses table, however, I don't know how to analyse.

    I will help you where I can. Once you have some profiler data I can offer some pointers on how to analyse it.

    You can attach files to your posts. If you want (and your company policy allows), copy the results of a query of sysprocesses into a spreadsheet, zip and attach. Probably best if you do that during one of your slow periods.

    This leads on from the earlier point - I'll post once I'm able to run the trace.

    Thanks!

    -

    Peter

    (Edited for clarity)

  • I seem to be experiencing a similar problem. The server I am using has 8 cores (i.e. 2 x quad-core processors) and is running SQL 2005 Standard. The closest I came to a resolution is the fact that we have one primary .mdf file; it was suggested that adding a secondary .ndf file (and an equivalent number of .ldf files) might solve the problem.

    Has anyone tried this out or sorted the issue?
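
    For reference, what was suggested to us was roughly the following (the database and file names are made up):

        -- Add a secondary data file (.ndf) to the database.
        -- Whether this actually spreads work across more CPUs is exactly what I'm asking about.
        ALTER DATABASE MyDatabase
        ADD FILE
        (
            NAME = 'MyDatabase_Data2',
            FILENAME = 'D:\SQLData\MyDatabase_Data2.ndf',
            SIZE = 1024MB,
            FILEGROWTH = 256MB
        )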
