How to Take Part in a Beta Test
Before we begin to answer the question of how to perform a beta test, we should answer a much more important question: why on earth would you want to take part in a beta test? Think about it. Why do you want to install software that is only partially tested onto your system? We're concerned with data, so you can follow that question with this one: why do you want to trust your data to partially tested software? Are you really interested in watching your machine crash because of incorrectly handled memory errors? Do you want to see data corrupted because of badly constructed CASE statements? Wait a minute. Come back. I've got some good reasons why you might be interested in taking these types of risks.
Why Beta?
The reasons for participating in a beta can vary from the personal to the most calculated of business decisions. Starting with the personal, you might enjoy being on the cutting edge and playing with all the bright and shiny new toys. Getting in on a beta gives you the opportunity to be the first kid on the block to use a particular piece of software. What's more, you can help to direct the features and functions that make it out to release. Not only can you be among the first to support a piece of software, but you can actually point to parts of the application with some degree of personal pride and ownership. On a slightly less personal note, getting in on betas allows you to build your skill set ahead of the release. This is very useful for consultants, who want to be as far ahead of the curve as possible so that when the software is released, they can honestly advertise expertise and knowledge. You may even just like the software or the company producing it and want to help make its products the best they can be.
From a business standpoint, the decision to support a beta is generally more calculated. The business needs to decide if it wants to follow the upgrade path that closely. Is it actually going to implement the beta within its environment, or just prepare to implement the released version of the product the day after it goes out? The pluses and minuses around this decision are unique to any given business. Some businesses may be so thoroughly invested in a particular application that they need to be in front of the releases, possibly helping set the direction for the product. Some businesses may desperately need the latest widget that has been glued on to the side of the new version of the tool, and they need to know exactly how well it works. There are even some businesses that will implement a beta product in production in order to get immediate returns on the new functionality.
Where Beta
You've now decided that beta testing is for you. Where are you going to install the product? You'd better make sure you have choices in order to make an informed decision. If you have access to a single box where you do all development, testing, and production, I sure wouldn't add beta software on top of that pile. Let's assume more than one machine is available. You could do the first install of the beta product on a development or test machine. The problem here is that you are likely to interrupt the normal flow of development or testing on that machine. Instead, you could install it onto your own machine first. Again, how much risk are you willing to assume during this test? Can you afford for your primary machine to be offline due to errors within this new product? You could install it to an isolated machine, either a second, older server or workstation available to you or a dedicated test environment. This is a very safe choice in most cases because if you completely crash the machine, nothing important is lost and no other work is affected. Better still, you could take advantage of one of the many virtual environments out there to put the beta into a completely isolated location. This means that even if the virtual server crashes, no other systems, in most instances, will be affected at all.
One thing to remember: you're participating in a beta in order to actually use the software. The more isolated you make the environment, the fewer users it will have and the less day-to-day use the beta software will receive. You have to balance risk with benefit in order to decide the best place to run the software, so that you take on only as much risk as you're willing to assume while achieving as many of your testing goals as you realistically can.
How to Test Beta Software
To a degree, this depends on the software being tested. QA testers can probably provide you with a thorough and deep checklist, but as a guideline, you can test for the following:
- Functionality: Does it work? Do the new features work? Do the old features work like they used to?
- Performance: Is it fast enough? Is it slower or faster than it used to be?
- Interoperability: If it interacts with other software, other applications, or just your network, does it make a good neighbor? Does it play well with others?
- Maintenance: Is there overhead associated with the new software?
The first, and probably the most important, step: will it install? I've concluded many beta tests at the install point. After you get it installed, you need to begin testing the functionality. Depending on why you're performing your beta, this can be approached a number of ways. First off, you can start testing the features that you're interested in, either old features that you were already dependent on or new features that you'd like to have. You don't want to gloss lightly over these functions and features. You need to drill down on them. Verify that they work as expected and then see how they work under load, with different parameters, and so on. When you get done with the beta, these core features should hold no surprises for you.

You should then plan on walking through as many of the other features of the software as you possibly can. There are two reasons for this. First, you may find new features that you'll want to implement. Second, you need to explore the software to see if any surprises or traps are out there. You may not need or use a particular function, but you can guarantee that someone on your team will try using it against the production server on the most stressful day of the year, the first time they get their hands on it. That is not the time to find out that it has a little memory glitch that crashes the server it's connected to. If there is a help file or other documentation, go through that along with the functionality. This will help you confirm that you've tested thoroughly and will show whether or not the help file is adequate to your needs.
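If it helps to keep that feature walkthrough organized, here is a minimal sketch, in Python, of a feature-coverage harness. The feature names and check functions are placeholders rather than part of any particular product; the point is simply to record a pass, a fail, or a crash for every feature you touch so nothing gets glossed over.

```python
# Minimal sketch of a feature-coverage harness for a beta test.
# The feature names and check functions below are placeholders --
# swap in calls that exercise the features you actually depend on.

def check_install():
    # e.g. verify the install directory and version file exist
    return True

def check_core_feature():
    # e.g. run an old feature you depend on and compare its output
    # against results captured from the currently released version
    return True

def check_new_feature():
    # e.g. exercise the new widget you're evaluating the beta for
    return True

CHECKS = {
    "install": check_install,
    "core feature (existing behavior)": check_core_feature,
    "new feature": check_new_feature,
}

def run_checks():
    results = {}
    for name, check in CHECKS.items():
        try:
            results[name] = "PASS" if check() else "FAIL"
        except Exception as exc:  # a crash is a result worth reporting too
            results[name] = f"ERROR: {exc}"
    return results

if __name__ == "__main__":
    for name, outcome in run_checks().items():
        print(f"{name:40} {outcome}")
```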
Performance testing can be simple or complex. You can simply see if the tool does the usual work you would expect of it in a timely manner. You could also set up automated testing around the tool in order to measure its performance under a sliding scale of increasing load. Either way, be sure you know what you're getting out of the tool. Again, changes to the software can lead to unexpected surprises. Now is the time to identify those problems.
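As a rough illustration of the sliding-scale approach, the following Python sketch times a placeholder operation at increasing load levels. The `run_operation` function and the load values are invented stand-ins; replace them with a call into the beta product and loads that make sense for it, and keep the numbers so you can compare them against the released version.

```python
# Rough sketch of measuring an operation under a sliding scale of load.
# `run_operation` is a stand-in -- replace it with a call into the beta
# product (a query, an API call, a batch job) and record the results.
import time

def run_operation(workload_size):
    # placeholder: simulate work proportional to the requested load
    total = sum(i * i for i in range(workload_size))
    return total

def measure(workload_size, repeats=5):
    # average wall-clock time per run at this load level
    start = time.perf_counter()
    for _ in range(repeats):
        run_operation(workload_size)
    return (time.perf_counter() - start) / repeats

if __name__ == "__main__":
    for load in (1_000, 10_000, 100_000, 1_000_000):
        avg = measure(load)
        print(f"load={load:>9}  avg seconds per run: {avg:.4f}")
```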
Interoperability is slightly tougher to test. It almost necessitates taking the tool out of the isolated environment and using it with other parts of your system. If you set up a complex enough virtual environment, you may be able to do all that testing from there. You'll need to connect the tool to any other servers or software that it works with and test those connections. Has it become more chatty, and is it therefore going to be a drag on your network? Does it crash when connected to SQL Server 7 or to SQL Server 2008? Yes, if you're testing more than one beta, you probably should, at some point, put them together to see what happens. Does the install cause other software to stop working? You should explore as many of the different interactions the software is capable of as you can. Think again about the fact that you might not need it to connect to Oracle, but if the option is there, someone in your company is likely to use it. Be sure it works before releasing it into the wild.
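One small piece of interoperability testing that is easy to automate is a basic reachability sweep over the systems the product has to talk to. The sketch below uses nothing but the Python standard library; the host names and ports are invented examples, so substitute the database instances, services, and file shares from your own environment.

```python
# Sketch of a basic connectivity sweep for interoperability testing.
# The host/port pairs are examples only -- list the servers and services
# the beta product actually has to talk to in your environment.
import socket

DEPENDENCIES = {
    "database server":  ("db01.example.local", 1433),
    "web service":      ("app01.example.local", 443),
    "file/backup host": ("backup01.example.local", 445),
}

def can_reach(host, port, timeout=3.0):
    # True if a TCP connection can be opened within the timeout
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, (host, port) in DEPENDENCIES.items():
        status = "reachable" if can_reach(host, port) else "UNREACHABLE"
        print(f"{name:18} {host}:{port}  {status}")
```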
Lastly, does the new software have special care and feeding needs? Do you need to increase the disk space, the number of processors, or the amount of memory? Are there new cleanup routines or scheduled jobs that you'll need to work in with your other maintenance routines? Does it need scheduled downtime? Is it working like SQL Server 6.5 and requiring a weekly reboot? Can you live with that?
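A simple way to quantify the care and feeding is to take a resource snapshot before the install and again after the product has been running for a while, then compare. The Python sketch below records CPU count and disk usage with the standard library and, if the third-party psutil package happens to be installed, available memory as well; the path is an example and should point at whatever drive the product actually uses.

```python
# Sketch of a resource snapshot to take before installing the beta and
# again after it's been running, so you can quantify its overhead.
import json
import os
import shutil
import time

def snapshot(path="/"):
    usage = shutil.disk_usage(path)
    info = {
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "cpu_count": os.cpu_count(),
        "disk_total_gb": round(usage.total / 1e9, 1),
        "disk_free_gb": round(usage.free / 1e9, 1),
    }
    try:
        import psutil  # optional third-party package
        info["memory_available_gb"] = round(psutil.virtual_memory().available / 1e9, 1)
    except ImportError:
        pass  # skip memory figures if psutil isn't installed
    return info

if __name__ == "__main__":
    print(json.dumps(snapshot(), indent=2))
```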
Done With the Beta
You've finished all the testing that you were interested in, so now it's time to sit back and wait for the software to get released, right? Wrong. During all your testing, you should have been documenting each and every problem that you found. Show exactly how to recreate it (without including proprietary code or data, of course) and then transmit all that information back to the developers. They may need log dumps or performance metrics from your machine, so either have them ready or be able to supply them. If you didn't like the way a function behaved, it was too slow, the interface was obtuse, and so on, you need to document the issue and supply suggestions on what you'd like to see. This also needs to be communicated back to the developers. While you may be participating in the beta for selfish reasons, enlightened self-interest makes it imperative that you report the problems so that they can be fixed prior to release.

Most companies will set up newsgroups, forums, or some other method of communication between you and the company, or even between you and other beta testers. Take part in these forums. See if other people have already tested areas of the application; that will help you avoid known issues. Maybe there's a workaround for the problem you're encountering. You'll get all that information and more. Post your results as well. Other people may want to know that a particular feature crashes under load or when a unique set of parameters is passed to it. The whole idea of taking part in a beta is to learn about the software and influence the released product. This communication is the core of achieving those two points.
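To keep those write-ups consistent, you could generate them from a small helper like the Python sketch below. The fields and wording are only a suggested structure, not any vendor's required template; follow whatever format the beta forum or bug tracker asks for.

```python
# Sketch of a small helper for writing up beta issues consistently.
# The fields and example values are suggestions only -- adapt them to
# whatever the vendor's beta program actually asks for.
import platform
import time

def bug_report(title, steps, expected, actual):
    lines = [
        f"Title:     {title}",
        f"Date:      {time.strftime('%Y-%m-%d %H:%M:%S')}",
        f"System:    {platform.platform()} ({platform.machine()})",
        "Steps to reproduce:",
        *(f"  {i}. {step}" for i, step in enumerate(steps, start=1)),
        f"Expected:  {expected}",
        f"Actual:    {actual}",
    ]
    return "\n".join(lines)

if __name__ == "__main__":
    print(bug_report(
        title="New widget crashes under load",
        steps=["Install the beta build",
               "Start the new widget",
               "Run it against a table with 10 million rows"],
        expected="Widget completes and reports row counts",
        actual="Process terminates with an out-of-memory error",
    ))
```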