February 1, 2007 at 7:38 am
We have recently rebuilt our test operations batch server as an instance on a VMware box. The instance has been assigned 1 processor and 1 GB of memory.
I was asked to test performance, and since I am not a hardware kind of person I really have no idea what to do. The only methodology I can come up with is running a DTS package that does various tasks: creating a database, creating a table, bcp-ing in 1 million records, creating indexes on the new table, running some intense SQL against the table, running DBCC CHECKDB, and finally rebuilding the indexes. I was going to run this DTS package with the 1 processor and 1 GB of memory, then increase each and measure the time. I was also going to run several similar DTS packages at the same time under various processor and memory configurations.
So, is this methodology even worthwhile, or am I totally off base? Can it be improved? Does anyone have any suggestions?
Thanx,
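A minimal T-SQL sketch of the timing harness described above, for anyone wanting to follow the same approach. The database, table, and file names (PerfTest, dbo.BcpTarget, C:\data\test.bcp) are placeholders, and the bcp step assumes xp_cmdshell is enabled; capture each step's duration so runs under different CPU/memory configurations can be compared:

-- Timing harness sketch: PerfTest, dbo.BcpTarget and C:\data\test.bcp are
-- placeholder names; substitute whatever the DTS package actually uses.
USE PerfTest
GO
DECLARE @t0 DATETIME
SET @t0 = GETDATE()
CREATE TABLE dbo.BcpTarget (id INT NOT NULL, payload CHAR(200) NOT NULL)
PRINT 'Create table ms: ' + CAST(DATEDIFF(ms, @t0, GETDATE()) AS VARCHAR(20))

-- Load 1 million rows from a flat file (requires xp_cmdshell)
SET @t0 = GETDATE()
EXEC master..xp_cmdshell 'bcp PerfTest.dbo.BcpTarget in C:\data\test.bcp -c -T'
PRINT 'bcp load ms: ' + CAST(DATEDIFF(ms, @t0, GETDATE()) AS VARCHAR(20))

-- Index build
SET @t0 = GETDATE()
CREATE CLUSTERED INDEX IX_BcpTarget_id ON dbo.BcpTarget (id)
PRINT 'Create index ms: ' + CAST(DATEDIFF(ms, @t0, GETDATE()) AS VARCHAR(20))

-- Consistency check (reads every page in the database)
SET @t0 = GETDATE()
DBCC CHECKDB ('PerfTest') WITH NO_INFOMSGS
PRINT 'CHECKDB ms: ' + CAST(DATEDIFF(ms, @t0, GETDATE()) AS VARCHAR(20))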
February 1, 2007 at 8:18 am
- If you have no baseline or reference, how would you define "performant"?
Start by defining your expectations (or references to the "old" server) regarding how long your planned operations took there and how long they should take now.
- I think Microsoft shipped a stress-test tool, but I cannot recall its name right now.
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere
February 1, 2007 at 8:51 am
You are not off base for the long run, but before going through the effort of building tools for that methodology, there are some simple tools available from Microsoft that can be run.
http://support.microsoft.com/default.aspx?scid=kb;EN-US;231619
How to use the SQLIOSim utility to simulate SQL Server activity on a disk subsystem.
SQLIOSim replaces the SQLIOStress utility.
I have saved the SQLIOStress benchmark results for two identical servers, the only difference being RAID-1 versus RAID-5. The benchmark was also run against a VMware server (in 2001), but the results were so bad I did not bother to save them. I could not find SQLIOStress on the MS site, but I do have a copy, so advise if you need this tool.
Model: ProLiant DL380 G3
Memory KB: 2,621,440
CPU Count: 2
CPU Model: P4 Xeon
CPU MHz: 2,400
Disk Controller Type: Ultra3
Read Cache KB: 49,152
Write Cache KB: 0
Disk Capacity GB: 36
Disk RPM: 15K
Disk Quantity: 6
The results below show RAID-1 followed by RAID-5. For the RAID-1 configuration, the OS, database .mdf and .ldf were on separate disks.
                                     RAID-1    RAID-5
Average read MB per second             6.71      0.86
Average writes per second            808.68    103.86
Total elapsed write time for cycle   198.00   1540.80
SQL = Scarcely Qualifies as a Language
February 1, 2007 at 10:41 am
Hi NHM,
Another couple of tools I have used, along with those that Carl has mentioned, are SQLIO (an MS tool) and IOMeter. These tools let you gather raw performance stats on the disk subsystem, including throughput, IOs per second (read and write, random and sequential), and latency. That lets you gauge how well the system is likely to respond to the kind of I/O pressure you'll see from SQL Server. Don't compare SQLIO results to IOMeter results, though; always do like for like, i.e. SQLIO measurements against SQLIO and IOMeter measurements against IOMeter.
As Carl has mentioned, though, it's essential that you have some benchmarks to compare and interpret the data against. It may be worth running these tools against the production servers your test system will be a copy of, if their performance is acceptable (obviously plan this carefully, as it will hit the disk subsystem hard, along with the usual caveats of running such tools against a production database system). As a rough guide, in a high-performing system I hope to see approximately 110 - 140 IOs per second per spindle for random access, around 200 - 230 IOs per second per spindle for sequential access, and latencies of about 10 - 14 ms for random writes and 2 - 4 ms for sequential writes. These figures vary with the block sizes you choose in the tool; I tend to look at 64 KB and 256 KB, as these are the sizes SQL Server most often works with. VM values will be nowhere near this, but it gives you an idea of what you could expect to see in a high-performing system.
Hope this is useful.
Rgds iwg
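To complement tools like SQLIO and IOMeter, you can also read I/O latency from inside SQL Server itself via fn_virtualfilestats, available in SQL Server 2000 and 2005. A minimal sketch, assuming a hypothetical database named PerfTest; note that on SQL Server 2000 you may need to pass a specific file id rather than -1:

-- Cumulative I/O counters since SQL Server last started.
-- PerfTest is a placeholder database name.
SELECT  DbId,
        FileId,
        NumberReads,
        NumberWrites,
        IoStallMS,
        -- rough average stall per I/O in ms (guard against divide-by-zero)
        IoStallMS / NULLIF(NumberReads + NumberWrites, 0) AS AvgStallMsPerIO
FROM    ::fn_virtualfilestats(DB_ID('PerfTest'), -1)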
February 1, 2007 at 11:58 am
Everybody has brought up a lot of good points and suggestions. I appreciate the help and will move forward with some of them.
I am still open to other ideas so if anybody has any, please keep them coming.
Thanx,
February 2, 2007 at 3:40 am
Another factor to consider is the use of VMware raw disks. I ran a SQL benchmark in 2004 comparing raw disks, native virtual disks, and a physical OS. With all other things being equal, I saw a 20% improvement when using raw disks. However, even with raw disks, VMware was 50% slower than the physical OS.
The performance of VMware has probably improved since 2004, but the main limits are still memory and I/O capability. However, if your VMware SQL Server gives you the performance you need, it does not matter how fast or slow it is.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
February 2, 2007 at 7:04 am
Defining performance is really important. In most situations the best approach is to run a benchmark of typical activity on your old server, then run the same benchmark on the new server, keeping as many variables as possible the same. Then rinse and repeat until you feel you have a good sample set for comparison.
This is probably the most commonly accepted way of testing performance.
If you are really looking to test the overall performance of SQL Server on the new server and cannot easily reproduce a typical workload, you may want to consider running DBCC commands and comparing the time they take to complete. DBCC CHECKDB, for instance, reads every page in your database and is very disk- and processor-intensive.
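A minimal sketch of timing a CHECKDB pass for comparison, along the same lines as the harness earlier in the thread (the database name is a placeholder):

-- Run the identical command on both servers and compare elapsed times.
-- YourDatabase is a placeholder name.
DECLARE @start DATETIME
SET @start = GETDATE()
DBCC CHECKDB ('YourDatabase') WITH NO_INFOMSGS
PRINT 'CHECKDB elapsed ms: ' + CAST(DATEDIFF(ms, @start, GETDATE()) AS VARCHAR(20))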
On a side note, I have had a lot of trouble with SQL Server on VMware. First, make sure you are using SQL Server 2000 SP4 or SQL Server 2005; earlier versions of SQL Server don't report certain VM disk read errors correctly. Even 2000 SP4 has been a nightmare for me, with memory errors and disk read and write errors that seem to have been caused by the virtual adapters. VMware is great, but I ended up chasing a lot of ghosts because of it.