February 7, 2007 at 1:45 pm
Is there a standard benchmark I could run on my SQL Servers and compare with others? I need a sanity check on the dev and prod servers I currently maintain. I'm interested in how these servers stack up against one another, or against anyone else who wants to compare theirs.
--Frank
February 8, 2007 at 2:46 am
All SQL Servers are different. I use benchmarks, but set them on a per-server basis. You could buy the SQL Server 2000 Performance Tuning Technical Reference (ISBN 0-7356-1270-6); it has lots of useful info.
Your performance is relative to the load/applications/config, and you should establish a baseline for each server - once you have a baseline you can compare loads, attempt to scale, etc.
To compare hardware you can use various testing tools, but testing with regard to the apps you actually use is much more difficult. At a client site we had a stress environment - it wasn't hardware-compatible with prod - I did benchmarking and there were vast differences in the stats; I never did establish a real similarity.
So what do you want to achieve? Perhaps we can help there. I've used 1- to 16-way boxes, SAN, DAS, little memory, lots of memory, cluster, non-cluster, quiet dbs, busy dbs - I can't say there are any valid comparisons I can think of.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
February 10, 2007 at 4:18 pm
Colin,
You kind of answered my question, but it just seems a bit odd that no conclusions can be drawn. For example: if you had two otherwise identical servers, one with 1 CPU and one with 8, I would think that an application that could use more CPUs would be proportionally faster. I just received the perf book from Amazon; I'll review that for a while.
Another thing I was going to try for fun was to take a production backup, restore it on a dev box, capture a few hours' worth of prod traces, and then play them back on the dev server to see what happens.
--Frank
February 12, 2007 at 3:02 am
I developed a very simple empirical approach for one of my clients.
It consisted of a Windows scheduled task which fires from one server (not the SQL Server) every even hour, and from another server (also not the SQL Server) every odd hour.
The task fires up, connects to a database on the SQL Server you want to monitor, and runs three benchmarks that measure three criteria: CPU, disk, and network.
How this is done is too long to explain here, but it is essentially a combination of common sense, trying things out, and monitoring the machine to see where you are hitting it.
After that you create a standard measure for each criterion, expressed in minutes of elapsed time.
In my case it was 0.27 min. for CPU, 0.25 min. for disk, and 1.9 min. for network.
When any of these criteria exceeded three times its baseline value, an alert would fire.
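That alerting logic can be sketched in Python. The baseline figures below come from the post above, but the benchmark body and the threshold wiring are illustrative placeholders, not the poster's actual code:

```python
import time

# Baseline elapsed times in minutes, as given in the post above.
BASELINES = {"cpu": 0.27, "disk": 0.25, "network": 1.9}
ALERT_FACTOR = 3  # fire an alert when a run takes 3x its baseline

def run_benchmark(name):
    """Placeholder: run the real CPU/disk/network test, return elapsed minutes."""
    start = time.perf_counter()
    # ... the real workload for `name` would go here ...
    return (time.perf_counter() - start) / 60

def check(results):
    """Return the list of criteria whose elapsed time exceeded 3x baseline."""
    return [name for name, minutes in results.items()
            if minutes > ALERT_FACTOR * BASELINES[name]]

# Example run: disk took 0.80 min (> 3 * 0.25), so it gets flagged.
alerts = check({"cpu": 0.30, "disk": 0.80, "network": 2.0})
```

The scheduled task would call `check` after each hourly run and raise an alert (email, event log entry, etc.) for anything in the returned list.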
February 12, 2007 at 4:23 am
Technically you're sort of right about the CPUs, except that a dual core adds only about 0.6 for the second core - i.e. a dual core is about 1.6-1.8 times a single proc. However, it's all relative: for the procs to be useful they have to be loaded, and sometimes extra procs/cores actually degrade performance, due to parallelism where you don't want it.
Disks are easier, as they have distinct theoretical values for I/O and throughput down to RAID level and spindles - but of course, if you have enough memory for the database to live in RAM, then the disks only count for write activity. I could go on and on, but it's not as easy as that.
I used to use the time a backup and a restore of a database took to compare performance overall; e.g. if my 100 GB database backs up on the new box in 25% of the time it took on the old box, then the new box = 4 times faster. Simplistic, but at least you have a consistent benchmark (assuming the editions are the same). Obviously this is strongly affected by the disk subsystem.
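That backup-timing yardstick is just a ratio of elapsed times; a trivial sketch (the timings below are invented for illustration, not real measurements):

```python
def relative_speed(old_minutes, new_minutes):
    """How many times faster the new box completed the same backup."""
    return old_minutes / new_minutes

# A backup of the same 100 GB database: 100 min on the old box, 25 min on
# the new box -> the new box is 4x faster by this yardstick.
speedup = relative_speed(100, 25)
```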
What we tried once was to run and time a series of application tasks as a user. The problem is that it's often tricky to get elapsed time from within an application; I guess a distinctly named user running application tasks every hour, and then being profiled, might work.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
February 12, 2007 at 10:05 am
SQLStress/SQLIO are used for disk benchmarking, but as far as I know MS does not have a CPU benchmarking tool. However, with a little creativity, perfmon, and some free software, you can probably come up with something just for CPUs. Check out this site and the readme: the software is for computing prime numbers, but it has a special section that may well interest you.
http://www.mersenne.org/freesoft.htm
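In the same spirit, a deterministic single-threaded prime count makes a simple, repeatable CPU-only benchmark; the same limit means the same work on every box, so only the elapsed time varies. This sketch is illustrative and is not the mersenne.org software:

```python
import time

def count_primes(limit):
    """Sieve of Eratosthenes: CPU-bound and deterministic, so good for timing."""
    if limit < 2:
        return 0
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            # Knock out every multiple of i starting at i*i.
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return sum(sieve)

start = time.perf_counter()
primes = count_primes(1_000_000)
elapsed = time.perf_counter() - start
# Record 'elapsed' per server and compare; the count itself never changes.
```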
Regards,
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."