Stress Testing Your SQL Server Databases - A Systematic Approach
Stress Testing your SQL Server environments is a practice that can yield great benefits. Stress Testing can be used for performance tuning, to uncover bottlenecks, and for capacity planning, among other things. In this article we will use the terms 'Load Testing' and 'Stress Testing' interchangeably, even though by definition they are different. For the sake of this article, 'Load Testing' or 'Stress Testing' simply means running a large number of statements/transactions against a SQL Server database from various connections or threads - all to give the database server a good workout. Lastly, this article is not about HOW one physically throws a large number of statements at a server to stress test it; there are 3rd party tools on the market and code available on the web to help you do that. Instead we'll discuss what variables are needed and what questions need to be answered to ensure a successful and productive stress test.
WHY You Stress Test Drives HOW You Stress Test
The reasons for Stress Testing vary. You may be trying to determine the cause of poor performance on a production server, trying to plan for an increase in users, or trying to work out a new application that is not yet "live" to determine whether it will be able to handle a production load. Whatever your specific reason, it will drive your approach to stress testing. For instance, your approach will vary greatly between a pre-production test and one against a live server.
Production Environments
If it is a performance problem in a production environment that you're dealing with, you can first try to diagnose it using Performance Counters sampled throughout your busy times of day (if possible). I realize that this is a Stress Testing article and I'm starting with a non-Stress Test approach, but in the real world it is easier to start with performance monitoring before trying to gain approval to stress test a production server. By comparing the gathered performance counter values with generally accepted thresholds you may be able to pinpoint your problem without running a Stress Test at all. We will cover some Microsoft-listed thresholds later in the article. You may want to perform your monitoring several times during the week, as the load on Monday might be different from Wednesday's load.
One more note about stand-alone monitoring before we move on to our main focus: at times it can be beneficial to gather samples over a 24 hour period or even just at night if day-time sampling is forbidden. Sometimes your late-night SQL Server Jobs and back-ups can provide some insight. For instance, you may be running back-ups from across the network and notice that your Network Interfaces have less than favorable throughput.
If you can't take samples during peak times in your production environment, you will be best served by running off-hour stress tests using the tool of your choice to create the load. In this case you are still working out the hardware/software configurations on the server in question and capturing performance counter values that can lead you to the source of the problem if one exists. If the captured performance counter values do not lead to insight, then concentrating on the execution times of application stored procedures and scripts may help uncover the problem - perhaps dead-locking, or simply bad indexing. If needed, you can open up SQL Profiler for in-depth tracing information while your stress test is running.
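As a simple illustration, here is one way to capture the elapsed time of a single procedure call during a test run; dbo.usp_ProcessOrder and its parameter are placeholders for one of your own application procedures:

    -- Record how long one call to a (placeholder) application procedure takes.
    DECLARE @start DATETIME, @elapsed_ms INT;
    SET @start = GETDATE();
    EXEC dbo.usp_ProcessOrder @OrderID = 12345;  -- substitute your own procedure and parameters
    SET @elapsed_ms = DATEDIFF(ms, @start, GETDATE());
    PRINT 'usp_ProcessOrder completed in ' + CAST(@elapsed_ms AS VARCHAR(12)) + ' ms';

Most stress testing tools record these timings for you, but a simple wrapper like this is handy when you are driving the load with your own scripts.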
If you're looking to see how your production environment will scale with an increase in users, then stress testing is the way to go. You can dial up virtual users to hammer your server and monitor its response. In this scenario, as in all others, you want to gather Performance Monitor samples and execution times. This is also a great way to validate the need for new hardware; walking into your boss's office with a stress test report shows that you have done your homework.
Non-Production Environments
For pre-production tests, or any tests that won't be held in a live production environment, Stress Testing your database can be invaluable. You will want to run your tests in a production-like environment. Simulating that can be difficult, but with automated stress testing packages you can attempt to mimic the amount of load that your servers will face and scale that load up to really make your server work. Commercial Stress Testing applications like Agilist's SQL Stress Test or Quest's Benchmark Factory enable you to create a large number of virtual users that will execute the SQL commands of your choice. A good Stress Testing package will create a separate thread for each virtual user and will avoid connection pooling. In a non-production environment you typically have a lot of freedom; nevertheless, there are still questions that you will want to think about for testing in both Production and Non-Production environments.
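To give a feel for what each virtual user actually does, here is a rough sketch of the kind of batch a single virtual user might loop over; the procedure name, parameter, iteration count and "think time" are all placeholders, and a dedicated tool would run many of these loops concurrently, each on its own connection:

    -- One virtual user's workload: call a representative procedure repeatedly
    -- with a short pause between iterations (all names and values are placeholders).
    DECLARE @i INT;
    SET @i = 0;
    WHILE @i < 1000
    BEGIN
        EXEC dbo.usp_GetCustomerOrders @CustomerID = 42;  -- a representative application call
        WAITFOR DELAY '00:00:01';                         -- simulated user think time
        SET @i = @i + 1;
    END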
Pre-Stress Test Questions
Here are some questions to ask prior to running your tests:
- How many virtual users will you want to test with? If this will be the back-end of a web or software application, then what is the maximum number of concurrent users that will be using it?
- What are the main SQL statements and procedures that you will execute during the test? If you are testing the procedures called by an application, then answering this question should not be difficult. Some third party Stress Testing Tools, like Agilist's, will even record the SQL statements being executed by users while they use the web or software front-end.
- How long should each statement or procedure take to execute? In other words, what would be an acceptable response time?
- If this is a pre-production database, does it contain enough data that your tests run in a production-like environment? (A quick row-count check like the one sketched after this list can help answer this.)
- From what location do you want to run the Stress Test and the monitoring? If you monitor from across the network you will incur some DCOM overhead; if you monitor locally on the server you will use up a marginal amount of the server's own resources.
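For the data-volume question above, a quick row-count check against the tables your test will exercise can tell you whether the pre-production database is anywhere near production size; the table names below are placeholders:

    -- Approximate row counts for the key tables the test will touch (placeholder names).
    SELECT OBJECT_NAME(object_id) AS TableName, SUM(row_count) AS ApproxRows
    FROM sys.dm_db_partition_stats
    WHERE index_id IN (0, 1)
      AND OBJECT_NAME(object_id) IN ('Orders', 'OrderDetails', 'Customers')
    GROUP BY OBJECT_NAME(object_id);

(sys.dm_db_partition_stats is available from SQL Server 2005 onward; on older versions sp_spaceused gives similar figures table by table.)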
A Good Stress Test is Measurable
Throwing a hail storm of statements at your SQL Server is only part of a productive Stress Test. First and foremost, the test must be measurable. So while giving our server a workout we must gather performance related statistics. Usually we'll gather Performance Counters and if possible the actual execution times of the statements themselves. You can gather Performance Counter statistics using Microsoft's Performance Monitor (PerfMon) or the 3rd party tool of your choice.
For these statistics to mean anything we must be able to measure them against something; usually these "somethings" are baselines that we have created some time in the past, or accepted thresholds. Baselines are simply previously saved results that we can compare against. We can also compare our tests against thresholds listed by Microsoft and industry professionals, as mentioned previously. You can create a baseline the first time you run a test: simply save your results and now you magically have a measurable baseline that can be recalled for future comparisons. As time goes by you will want to periodically create new baselines for comparison purposes. Baselines are also great for trending and determining growth rates.
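If your stress testing tool doesn't store baselines for you, one lightweight approach is a small results table that holds one row per metric per test run; the schema below is only a sketch and all of the names are placeholders:

    -- A minimal baseline table: one row per metric per test run (placeholder names).
    CREATE TABLE dbo.StressTestBaseline (
        TestName     VARCHAR(100),
        RunDate      DATETIME DEFAULT GETDATE(),
        MetricName   VARCHAR(128),   -- e.g. a procedure's average duration or a PerfMon counter
        MetricValue  DECIMAL(18,2)
    );

    INSERT INTO dbo.StressTestBaseline (TestName, MetricName, MetricValue)
    VALUES ('Q3 capacity test', 'usp_ProcessOrder avg ms', 245.00);

Comparing the same metrics across runs then becomes a simple query, and the accumulated history doubles as trending data for growth estimates.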
Which Performance Counters To Use?
When choosing Performance Counters we don't just want to know how our SQL Server software is performing; we also want to know how our hardware and network are performing. The best list of core counters that I have come across, and have been using for years, comes from an article by Brad McGehee entitled "How to Perform a SQL Server Performance Audit". I experiment with some other counters but always use the ones mentioned in the article as the foundation (a query that reads the SQL Server-specific counters from within the server follows the list). The counters are:
- Memory: Pages/sec
- Memory: Available Bytes
- Network Interface: Bytes Total/Sec
- Physical Disk: % Disk time
- Physical Disk: Avg. Disk Queue Length
- Processor: % Processor Time
- System: Processor Queue Length
- SQL Server Buffer: Buffer Cache Hit Ratio
- SQL Server General: User Connections
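The two SQL Server-specific counters in this list can also be read from inside the engine through the sys.dm_os_performance_counters view (SQL Server 2005 and later); the operating system and hardware counters still have to come from PerfMon or a similar tool:

    -- Read the SQL Server counters from the list above from within SQL Server.
    SELECT [object_name], counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Buffer cache hit ratio', 'User Connections');

Note that the Buffer cache hit ratio row returned here is a raw value that has to be combined with its companion "base" counter to become a percentage - see the calculation after the thresholds list below.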
Which Thresholds To Use?
After monitoring your server, you will need to measure your captured counter values against trusted thresholds so that you know whether or not a problem may be at hand. I compiled a list of thresholds from Microsoft resources such as TechNet and others that match the list of counters above. Below are the Microsoft values along with some comments. When comparing your values to these you should always ask yourself if the value that you have collected was sustained over a period of time or if it was just a spike - sustained values are obviously much more appropriate for comparison.
- Memory: Pages/sec: If the counter value is consistently greater than 5, you likely have a memory issue.
- Memory: Available Bytes: Values < 10 MB should raise a serious red flag for you.
- Network Interface: Bytes Total/sec: Microsoft simply advises that if you notice this value dropping it may indicate network problems. As a rule of thumb, treat roughly half of the interface's available bandwidth as the upper end of acceptable sustained throughput; for a 100 Mbps network adapter that works out to about 50 Mbps, or roughly 6 MB/sec, of Bytes Total/sec.
- Physical Disk: Avg. Disk Queue Length: You will need to calculate this value based on the number of physical drives in the array: simply divide the monitored value by the number of disks. A per-disk value greater than 2 might indicate an I/O bottleneck; in other words, the number of waiting I/O requests should be sustained at no more than 1.5 to 2 times the number of spindles making up the physical disk.
- Physical Disk: % Disk time: The recommended numbers for this seem to vary; if the value is greater than 50% you should be concerned and investigate further. If the value is sustained above 80% it is a serious problem, and you may have a memory leak or an I/O issue at hand.
- Processor: % Processor Time: Values in excess of 80% processor time per CPU are generally deemed to be a bottleneck.
- System: Processor Queue Length: A sustained queue length > 2 (per processor), generally indicates a processor bottleneck. For instance, if you have 2 processors then a value of 4 or less is usually acceptable.
- SQL Server Buffer: Buffer Cache Hit Ratio: A rate of 90 percent or higher is OK, and the closer to 100% the better; less than 85% indicates a problem. A sample calculation of this ratio follows the list.
- SQL Server General: User Connections: This one varies of course. It should be tracked though so that you can determine what is "normal" for your environment. With this you can spot trends that may signal growing demand, or you may use it to explain spikes in other counters.
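As an example of turning a raw counter into something you can compare against the thresholds above, here is a sketch that computes the Buffer Cache Hit Ratio as a percentage from sys.dm_os_performance_counters; it assumes the standard counter names on a default instance:

    -- Buffer Cache Hit Ratio as a percentage: the raw counter divided by its base counter.
    SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS BufferCacheHitRatioPct
    FROM sys.dm_os_performance_counters AS r
    JOIN sys.dm_os_performance_counters AS b
        ON  b.[object_name] = r.[object_name]
        AND b.counter_name = 'Buffer cache hit ratio base'
    WHERE r.counter_name = 'Buffer cache hit ratio';

The result can be checked directly against the 90% / 85% guidance above; PerfMon performs this division for you when you watch the counter interactively.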
Additional Threshold and Monitoring Resources
There are many good articles that deal with monitoring activity and thresholds. Listed below are some worthwhile resources:
- "Performance Monitoring - Basic Counters" - Steve Jones
http://www.sqlservercentral.com/columnists/sjones/performancemonitoringbasiccounters.asp
- Monitoring Disk Activity - Microsoft
http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/sag_mpmonperf_19.mspx?mfr=true
- Monitoring Memory - Microsoft
http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/prork/prec_evl_bzcl.mspx?mfr=true
- Troubleshooting Performance in SQL Server 2005 - Microsoft
http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx
- Performance Tuning Checklist 4.5 (NT Based Systems) - Microsoft
http://www.microsoft.com/technet/prodtechnol/bosi/maintain/optimize/bostune.mspx
Conclusion
There are many things to think about before stress testing your SQL Server databases. I hope that this article outlined many of those items in a manner that enables you to hit the ground running in your stress testing endeavors. A well-planned stress test can be extremely beneficial; among other things, it enables you to test application, procedure, and database performance before going into production, troubleshoot performance problems, and plan for the future.
Anthony Bressi is owner of Agilist Technologies Inc., which specializes in software for DBAs and SQL Server developers. Mr. Bressi has over 9 years of hands-on experience in the Microsoft SQL Server development environment.