Friday, January 23, 2009

On Evaluating a UI Automation Tool

At some point, manual software testers look at their ever-growing workload and say, "OK, this is getting out of control. There has to be a better way to do this." And the answer comes back from management: "Find an automated tool."

There are a bunch of automated tools on the market. So many, in fact, that it can be overwhelming to find the right one for you. Each tool has its own set of unique features that differentiate it from the others, and your first instinct may be to put together some sort of matrix that compares them all. That's a fine idea, but hold off on it for the moment. Also resist the temptation to just start downloading tools blindly. Instead, take a step back and evaluate each tool against these criteria:

Does this tool support the language my app is written in, and the environments I require?
This may seem obvious, but there are people who never actually checked to see whether their programming language was supported. Instead they compared all the "bells & whistles"-type features, and wound up spending thousands of dollars on products that didn't support their needs. One other thing to consider here is whether your app makes use of 3rd party controls. To a manual tester, those controls don't seem any different from standard controls, but to an automation tool they're very different. Ask your development team if there are 3rd party controls in your app, and then find out from the tool vendor whether those controls are supported. You'll save yourself a lot of headaches and avoid surprises by doing this.
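If you're curious what that difference looks like from a tool's point of view, here's a minimal sketch using pywinauto, a Python library for Windows GUI automation, purely as a stand-in for whatever tool you're evaluating. The window title is hypothetical; the point is what the control class names reveal.

    from pywinauto.application import Application

    # Attach to a running instance of the app under test.
    # (The window title here is hypothetical; substitute your own.)
    app = Application().connect(title_re="MyApp.*")
    main = app.window(title_re="MyApp.*")

    # Standard controls report familiar class names like "Button" or
    # "Edit". A 3rd party grid reports a vendor-specific class name,
    # which is why a tool may drive one flawlessly and not the other.
    for ctrl in main.children():
        print(ctrl.class_name(), "-", ctrl.window_text())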

Is this tool in my budget?
I hate to make cost one of the first things to consider, but the unfortunate reality is that the QA department's budget is usually more restricted than other departments'. The last thing you need is to spend a lot of time evaluating a tool only to find out you can't afford it.

Those two questions can usually be answered in a few minutes with a quick phone call to the vendor, or with a visit to the vendor's website. Once you've established that a tool or tools will work for you, download an evaluation copy of the product. Again, resist the urge to start a feature comparison matrix at this point. Instead, take two or three of your test cases and try to automate them with the evaluation software. It's a good idea to pick test cases from different areas of your application to ensure the tool will work properly for your needs. You don't want to find out after you've purchased that while ToolA worked flawlessly against TestCase1, it failed to work at all against TestCases 2 through 37.
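To give you a feel for the scale involved, here's roughly what one such automated test case might look like. This sketch uses pywinauto again and drives Windows Notepad, since I can't know what your application looks like; treat the window and control names as placeholders for your own app's UI.

    from pywinauto.application import Application

    # Test case: launch the app, enter some text, verify it appears.
    app = Application().start("notepad.exe")
    editor = app.UntitledNotepad.Edit

    editor.type_keys("The quick brown fox", with_spaces=True)

    # A real test case verifies the result, not just drives the UI.
    assert "The quick brown fox" in editor.window_text()

    app.kill()

If your sample test cases take an order of magnitude more code than this, or need workarounds the vendor has to walk you through, that's valuable data for your evaluation.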

Once you've done your initial automation, see if there are places where you have questions about how the tool worked. Jot those down and email them to the vendor. To keep the process moving quickly, be as specific as you can when describing your questions: explain to the vendor what you're trying to achieve and what your application is written in, and then list your questions. Treat these questions the same way you would bug reports; the more info you can provide, the faster the vendor can get answers back to you.

When the initial automation is done and you're waiting for answers to your questions, that's the time to build that comprehensive feature comparison matrix.

The important thing is to make sure that the tool works well with your application first, and then worry about the niceties. I like to compare this to buying a car: sure, it's great if the car has an all-leather interior, heated seats, and an aux jack for your mp3 player, but if it doesn't get you from point A to point B reliably, those other features are moot.
