Wednesday, May 13, 2009

Stupid Password Masking

Log In/Out tests are commonplace throughout software testing. Pretty much every portal, client/server app, or web site I've worked with has required that I log in with a valid user/pass combo. Those credentials were always listed in the test plans I used. "Step 1 - Log in as User1/Pass123 and click Enter."

Now, there's no problem with putting the user/pass combo in the test plan, and then tacking it up on the wall. But why do the managers, whose teams are proudly displaying test user credentials alongside photos of their kids, suddenly balk if an automated tool stores passwords in plain text format?

"Someone else could learn the password" they cry. "This is horrible and insecure!"

OK, let's think about this for a moment. The tests will be placed in source control. Source control can restrict who can access the tests. Boom - problem solved. Let's look at it from another angle. Let's say that your automated tool masked the password that it entered, so that it just appeared as asterisks. How do you know what password is being entered? If your login is failing over and over, is it because the tool is entering a bum password? You'll never know.
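To make that concrete, here's a rough sketch of the kind of login step I'm talking about, written in Python with Selenium WebDriver purely as an example (your tool will differ, and the element IDs, URL, and credentials here are all made up). The credentials sit in plain text in the test, and the test logs exactly what it typed, so a failed login is trivial to diagnose:

```python
import logging

from selenium import webdriver
from selenium.webdriver.common.by import By

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("login_test")

# Plain-text test credentials, checked into source control where access is restricted.
TEST_USER = "User1"    # hypothetical test account
TEST_PASS = "Pass123"  # hypothetical test password

def log_in(driver, user, password):
    """Type the credentials and log exactly what was entered - no masking."""
    log.info("Logging in as %s / %s", user, password)
    driver.find_element(By.ID, "username").send_keys(user)    # element IDs are assumptions
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "loginButton").click()

driver = webdriver.Chrome()
driver.get("http://test.example.com/login")  # placeholder URL
log_in(driver, TEST_USER, TEST_PASS)
```

If that login fails, the log tells me exactly what the tool typed, which is the whole point.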

Maybe I'm missing something here. But as a tester, I want to know what password my tests are using. Toggling a "hide password" feature on and off seems foolish too, as I can just toggle it to "show password" and bang - there it is again.

What do you think? Is there something really obvious that I'm missing here?

Wednesday, May 6, 2009

Clean Up Your Screen

When I watch demos, I always find myself distracted from the presenter's message by something on their desktop. Maybe there's an icon in the system tray I haven't seen before (or maybe something down there is blinking). Maybe they have a shortcut to World of Warcraft on their desktop, or maybe their desktop is nothing but shortcuts, giving the feeling that their Start Menu just threw up all over the screen.

I've whittled my desktop down to the Recycle Bin and 3 shortcuts. One is the application I demo with, and the other 2 are apps I use during my demonstration. Everything the audience sees has a reason to be there.

As for the System Tray (or notification area, if you prefer), I recently discovered there's a way to force Windows to hide icons that are displayed there. Right-click on the tray and select Properties, then click Customize on the Taskbar tab. You'll see a list of the items in your tray, and you can choose whether each one should always be hidden, always be shown, or hidden only when inactive. I set everything except my system's volume control and GoToMeeting to Always Hide.

It's a small thing, but it helps keep people focused on the message I'm sending, rather than wondering if I'm playing Horde or Alliance.

Monday, May 4, 2009

Machiavellian Automation

I spoke with a colleague recently about a company whose automated test efforts had suddenly stopped. They had been using a third party resource for all their automated testing, and the money for that resource had dried up. As a result, automated test development stopped, and no one in-house had the knowledge or expertise to pick up the effort.

This reminded me of something I'd read in Machiavelli's The Prince. Machiavelli argued that princes should not rely on mercenaries or auxiliaries, and instead should rely on their own people. Some of that applies here as well. I think it's great if you want to jump start or augment your efforts by bringing in third party resources, but you need to be building up your own people as well. Sooner or later, that outside resource will be unavailable, either due to a lack of funding on your part or a lack of time on theirs. If you've built up your own team so that they can take the effort over, you'll have a smooth transition and no lost time or effort. If you haven't, your people will be scrambling to figure out how the automation works, and you'll waste time and money.

So make the investment in your people. Send them to training, buy them books, encourage them to learn. The end result will be in-house expertise, which will make all your efforts smoother in the long run.

Friday, May 1, 2009

Only Bitmap Comparisons? Bad Tester! Bad!

Many automation tools have the ability to take a screenshot of something in your AUT and compare it to a baseline image. The intention is that you can use this to verify that images have rendered properly in your app, so things like your company logo, a product's photo, or the like can be validated easily. But I still come across people who want to use bitmap comparisons as their only means of validation in their tests.

Their reasoning seems sound enough. "If we just click through each screen of our app, and do a bitmap comparison of each full screen, then we can easily confirm the app is working. The bitmap comparison will automatically validate every value in every field, so we'll be able to create our tests even faster!"

It sounds good, doesn't it? But it doesn't work that way in practice. In fact, this has to be one of the most fragile, breakable, and ineffective approaches to automation. See, bitmap comparisons do a pixel-by-pixel comparison of each screen. That means that if a single pixel is off, the test fails. In some cases, just running the same test on a different OS is enough to cause every bitmap comparison to fail, because a dialog in WinXP may render one pixel slightly lighter than the same pixel in Vista. To your eyes, they don't look any different, so you won't know there's a problem until it's too late. Imagine that suddenly all your tests fail, and for no good reason. If this happens, you'll wind up in a maintenance nightmare as you struggle to track down that rogue pixel and somehow convince your automation tool that it's really OK. Automation is supposed to make testing easier and faster, not harder and slower.
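To see how little it takes to break a pixel-by-pixel check, here's a minimal sketch in Python using the Pillow imaging library (not any particular automation tool - the file names are made up). It compares two screenshots the same exacting way a bitmap checkpoint does, and a single off-by-one-shade pixel is enough to fail:

```python
from PIL import Image, ImageChops  # Pillow imaging library

def bitmaps_match(baseline_path, actual_path):
    """Return True only if every pixel in both screenshots is identical."""
    baseline = Image.open(baseline_path).convert("RGB")
    actual = Image.open(actual_path).convert("RGB")
    if baseline.size != actual.size:
        return False
    # getbbox() returns None only when the difference image is completely black,
    # i.e. every single pixel matches exactly.
    return ImageChops.difference(baseline, actual).getbbox() is None

# Hypothetical example: one pixel in the Vista screenshot renders one shade lighter.
print(bitmaps_match("dialog_winxp.png", "dialog_vista.png"))  # False - the test fails
```

Your eyes would never spot the difference, but the comparison fails just the same.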

So please, take my advice. Use bitmap/region/image comparisons (or whatever your automated tool calls them) only to compare actual images. If you want to verify the contents of a field, read the actual value from that field. If you want to verify a button is enabled, check its actual Enabled property. This may require a bit more effort up front, but the payoff in the long run is well worth it. You'll have more efficient, more stable tests that run clean, and it won't matter whether that one rogue pixel off to the left is gray, a slightly darker shade of gray, or a slightly lighter shade of gray than the ones around it.
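As an illustration of what I mean by checking actual values and properties, here's a short sketch using Python and Selenium WebDriver (just one example - your automation tool will have its own equivalents, and the URL, element IDs, and expected text here are all assumptions):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://test.example.com/order")  # placeholder URL for the AUT

# Verify a field's contents by reading its actual value, not its pixels.
total_field = driver.find_element(By.ID, "orderTotal")  # element ID is an assumption
assert total_field.get_attribute("value") == "$19.99", "Order total is wrong"

# Verify a button's state by checking its actual enabled property.
submit_button = driver.find_element(By.ID, "submitOrder")
assert submit_button.is_enabled(), "Submit button should be enabled"

# Reserve image comparison for things that really are images, like the company logo.
logo = driver.find_element(By.ID, "companyLogo")
logo.screenshot("logo_actual.png")  # compare this against the baseline logo image
```

The field and button checks still pass if the OS renders a font or border one shade differently, because they never look at pixels at all.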