Friday, January 30, 2009

Augment Manual Tests with Automation

When people think of test automation, the knee-jerk reaction is to think of some tool that performs a test or test-related activity for you. That tool runs unattended and gives you results of some kind when it's finished. That's definitely part of automation, but it's by no means all automation can do. You can also use it to create utilities that testers can use to speed up the testing process.

Here's a scenario for you - a tester needs to confirm that a text box can accept strings up to a given length. If there are too many characters in the string, the AUT throws an error message. We've probably all seen and done this type of boundary test many times. And you probably created your input strings the same way I did - you opened up Word, held down the X key for a bit, then used the word count feature to see how long a string you'd made. If it was too long, you deleted some of it; too short, you added characters until you hit the right length. Then you copied and pasted that string into your box, checked that the right error was displayed, and moved on to the next test.

That's a tedious way to perform a simple boundary test. I got frustrated with doing that and built a small helper utility whose sole purpose was to generate strings. It was just a small dialog that asked how many characters I wanted my string to be, and after I entered that, it generated a string of that length and automatically placed it on the clipboard. I pasted the string into the desired field, confirmed that my error was displayed, and moved on. I didn't have to mess around with Word anymore, and I could create those needed strings much faster and more accurately than before.
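The core of a helper like that is only a few lines. Here's a minimal Python sketch of the idea (the original was a small dialog utility; the function and variable names here are my own, and the clipboard step is an optional extra that would need a third-party package):

```python
def make_test_string(length, fill_char="x"):
    """Generate a string of exactly `length` characters for boundary tests."""
    return fill_char * length

# Example: a 256-character string for a max-length boundary check.
boundary_input = make_test_string(256)

# To mirror the original utility, the string could then be placed on the
# clipboard, e.g. with the third-party pyperclip package:
#   import pyperclip; pyperclip.copy(boundary_input)
```

Because the length is computed rather than eyeballed in a word processor, the string is exactly as long as requested every time - which is the whole point for an off-by-one boundary test.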

I shared that helper utility with other members of my team and it was very well received. Several of the developers on the team asked for it too, so they could do some preliminary tests before the code got passed over to the testers. Over time, the String Generator evolved so that people could just click a button to "quick-create" strings of a predefined length.

The string generator only took me a few minutes to write, and it turned out to be a real time saver. Always keep your eyes open for places where a little bit of code can save a lot of manual effort.

Wednesday, January 28, 2009

If You Want to Automate Tests, Learn to Script

Many people who get tasked with automating test cases have never written code before. As such, they rely on the promises from UI tool vendors that they can automate all their tests without ever touching a line of code. While UI automation tools may be able to create simple tests, those tests tend to be brittle and prone to breakage. As such, a non-scripting user can quickly become frustrated and abandon test automation altogether. Compare that to a person with some scripting experience - they can very easily modify a recorded script and turn it from something brittle into something nearly bulletproof.

When I started out in automation, my background was in technical writing. I had no programming experience or training, but I saw how powerful automation could be, and was determined to see it work. I spent a lot of time learning TSL, which was WinRunner's proprietary language, and found that I could make tests really sing by coding them myself. (My colleagues who were relying on record & playback were astounded that my scripts could recover from errors.)

Eventually, I realized that there were other automatable tasks that fell outside the scope of WinRunner. I learned Python, and with it I was able to compare the contents of Excel spreadsheets, verify that email messages had been posted to a server (and then delete them), and automatically reboot systems and put them into the proper state for testing. Those wound up being even bigger timesavers.
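To give a flavor of that kind of task: the original scripts worked against Excel files, but the same spreadsheet-comparison idea can be sketched against CSV exports using only Python's standard library. This hypothetical version compares two files cell by cell and reports every mismatch (it simplifies by ignoring extra rows or columns in the longer file):

```python
import csv

def diff_sheets(path_a, path_b):
    """Compare two CSV files cell by cell; return a list of
    (row, column, value_a, value_b) tuples for every mismatch."""
    with open(path_a, newline="") as fa, open(path_b, newline="") as fb:
        rows_a, rows_b = list(csv.reader(fa)), list(csv.reader(fb))
    mismatches = []
    for r, (row_a, row_b) in enumerate(zip(rows_a, rows_b)):
        for c, (cell_a, cell_b) in enumerate(zip(row_a, row_b)):
            if cell_a != cell_b:
                mismatches.append((r, c, cell_a, cell_b))
    return mismatches
```

A tester can run this over a baseline export and a fresh export and get the exact coordinates of every difference in seconds, instead of scanning two spreadsheets side by side.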

So the moral of the story is - if you're going to get into automation, learn how to script. Don't believe anyone who tells you that all your test automation goals can be achieved without ever touching a single line of code. By learning how to script, you become more marketable, and your company's automation efforts have a better chance of success.

There are a lot of great scripting languages out there - Python, Ruby, JavaScript. If there are developers in your company who already know one of them, ask them to recommend a book or website that will help you learn. If you're working with .NET, I highly recommend A Tester's Guide to .NET Programming and Effective Software Test Automation.

Monday, January 26, 2009

Find the Pain

One of the biggest mistakes I see made with test automation is that people don't step back and determine where their manual testers are spending lots of their time. Instead, they just purchase a UI automation tool and then begin automating tests. In some cases, they may get lucky and automate a test that is a time saver. Other times, they may just take a test that can be run manually in 5 minutes and turn it into a 3-minute automated test. While there is a time savings there, it's going to take a lot of runs for that 2-minute savings to really add up.

UI automation tools have their place, but before you consider doing any type of automation, take a good look at where your manual testers are spending their time. Do they spend time populating databases with test data? Do they spend time reading through log files for error messages? Are they spending time manually comparing file names and version numbers when doing an install test? These are good candidates for automation. SQL scripts can be created to populate databases, and Perl or Python (or whatever your language of choice is) can be used to parse log files for certain strings, and to query file info as well.
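The log-file task above, for instance, might look like this in Python. The error markers are hypothetical placeholders - adjust the pattern to whatever your application actually writes:

```python
import re

# Hypothetical error markers; adjust for your application's log format.
ERROR_PATTERN = re.compile(r"\b(ERROR|FATAL|Exception)\b")

def find_errors(log_lines):
    """Return (line_number, line) pairs for every line that contains
    an error marker, so a tester can jump straight to the problems."""
    return [(n, line.rstrip()) for n, line in enumerate(log_lines, start=1)
            if ERROR_PATTERN.search(line)]
```

Point it at a log with `find_errors(open("app.log"))` and a half-hour of scrolling becomes a second of scripting - exactly the kind of pain this post is about finding.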

In short, significant time savings can be achieved if you incorporate automation into all parts of your test activities, not just the running of the tests themselves.

Friday, January 23, 2009

On Evaluating a UI Automation Tool

At some point, manual software testers look at their ever-growing workload and say "Ok, this is getting out of control. There has to be a better way to do this." And the answer comes back from management, "Find an automated tool."

There are a bunch of automated tools out on the market. So many in fact, that it can be overwhelming to find the right one for you. Each tool has its own set of unique features that differentiate it from the others, and your first instinct may be to put together some sort of matrix that compares each tool. That's a fine idea, but hold off on doing it for the moment. Also resist the temptation to just start downloading all the tools blindly. Instead, take a step back and look at each tool with these criteria:

Does this tool support the language my app is written in, and the environments I require?
This may seem obvious, but there are people who never actually checked to see if their programming language was supported. Instead they compared all the "bells & whistles"-type features and wound up spending thousands of dollars on products that didn't support their needs. One other thing to consider here is whether your app makes use of third-party controls. To a manual tester, those controls don't seem any different from standard controls, but to an automation tool, they're very different. Ask your development team if there are third-party controls in your app, and then find out from the vendor whether those are supported. You'll save yourself a lot of headaches and avoid surprises by doing this.

Is this tool in my budget?
I hate to make cost one of the first things to consider, but the unfortunate reality is that the QA department's budget is usually more restricted than other groups. The last thing you need is to spend a lot of time evaluating a tool only to find out you can't afford it.

Those two questions can usually be answered in a few minutes with a quick phone call to the vendor, or with a visit to the vendor's website. Once you've established that a tool or tools will work for you, download an evaluation copy of the product. Again, resist the urge to start a feature comparison matrix at this point. Instead, take two or three of your test cases and try to automate them with the evaluation software. It's a good idea to pick test cases from different areas of your application, to ensure that the tool will work properly for your needs. You don't want to find out after you've purchased that while ToolA worked flawlessly against TestCase1, it failed to work at all against TestCases 2-37.

Once you've done your initial automation, see if there are places where you have questions about how the tool worked. Jot those down and email them to your vendor. To keep the process moving quickly, be as specific as you can when describing your questions. Explain what you're trying to achieve to the vendor, what your application is written in, and then list your questions. Treat these questions the same way you would bug reports; the more info you can provide, the faster the vendor can get answers to you.

When the initial automation is done and you're waiting for answers to your questions, you can create that comprehensive feature comparison matrix.

The important thing is to make sure that the tool works well with your application first, and then worry about the niceties. I like to compare this to buying a car; sure, it's great if the car has an all leather interior, heated seats and an aux jack for your mp3 player, but if it doesn't get you from point A to point B reliably, those other features are moot.


Welcome to the Autonomicon. In this blog, I'll post notes about automated software testing, links to helpful utilities that can speed up your day-to-day activities, and offer my experiences with the QA process in general.