Friday, February 26, 2010


In the movie "The Fifth Element" Bruce Willis turns to another character and says, "Lady, I only speak two languages - English and Bad English."

And those are exactly the two languages we all need to speak in order to succeed, whether we're testers, developers, sales people or executives. The first is to ensure that we can communicate our ideas clearly and effectively to our audiences. The second is so we can understand and interpret what our customers, co-workers, and others are saying.

Now understand, when I say Bad English, I'm not referring to people who use dangling modifiers, end sentences with prepositions, or commit any of the other Grammatical Cardinal Sins that your 10th grade teacher warned you about. I'm referring to the rambling stream-of-consciousness email messages that some people send; the people who speak only in acronyms; the people who use obscure regional slang. In short, I'm saying you need to speak the language of the people who *can't* communicate effectively, so that you yourself *can*.

If you can read a confusing email message, interpret it correctly, clarify it, and then re-communicate it, you'll be able to help others get their messages across, and you'll be able to get your own messages across more effectively because you'll understand how these people think. How many times during a sales cycle or a development cycle have you seen other people bounce emails back and forth, never quite getting their points across, because they weren't speaking each other's languages? I've seen that more times than I can count, and it's a frustrating experience.

So how do you start becoming an interpreter of Bad English? For starters, you need to understand the context of the user's message. What feature or service are they talking about? Are there keywords in their message that might shed some light on what they're looking to do? Put the message in your own words, using the simplest possible language. Once you've done that, run it by the original sender to make sure you've understood correctly. If they say yes, great. If they say no, they'll elaborate on their situation. Don't worry about getting it right the first time; people will appreciate that you're trying to help them.

There are tons of reference materials out there on how to communicate effectively - just poke around on Amazon. If you want examples of Bad English, just ask your support team to show you the messages they receive every day. It's amazing how little detail or thought people will put into a message that's asking for help. It will show you how your users think, and that will help you communicate more effectively with them.

This same principle applies to any actual language - French and Bad French, Spanish and Bad Spanish, etc. The important thing is that you learn to understand what people are saying and are able to respond to them. The other piece of this is being able to effectively communicate and convey your own ideas. Once you can do that, you're one step further down the path to success in your field.

Friday, February 19, 2010

Performance Testing 101 - 4

The last thing I want to talk about for performance testing is tools. There are a lot of great tools out there, both commercial and open source. Do some Googling to see what's available and what other people think of those tools.

Spend some time evaluating tools before you make a decision. Make sure that the tools can provide the information you need in the format you want before you pull the trigger.

While you're running your load tests, you're going to need to keep tabs on your server's behavior. Make sure that the tool you're working with can monitor the server's resources as unobtrusively as possible. If your tool of choice doesn't have server monitoring capabilities, you can use a tool built into Windows called Perfmon. Perfmon lets you keep tabs on many things, but at a minimum, you'll want to track CPU usage (% Processor Time), Available MBytes (the total amount of RAM available on the system), Bytes Sent and Bytes Received. This will give you a flavor for how your system's doing overall. You can also track items related to SQL databases or the .NET Framework, so look through the list of what's available and see what meets your needs.
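If you'd rather collect those counters from the command line, Perfmon's data is also available through the built-in typeperf utility, which can write samples to CSV - for example, `typeperf -si 5 -o perf.csv "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes"`. Here's a minimal Python sketch of crunching that output; the CSV excerpt is made up, and the counter names assume an English-language Windows install:

```python
import csv
import io

# Hypothetical excerpt of CSV output from:
#   typeperf -si 5 -o perf.csv "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes"
# The host name, timestamps, and values below are invented for illustration.
sample = r'''"(PDH-CSV 4.0)","\\HOST\Processor(_Total)\% Processor Time","\\HOST\Memory\Available MBytes"
"02/26/2010 10:00:00","12.5","1024.0"
"02/26/2010 10:00:05","87.5","512.0"
'''

rows = list(csv.reader(io.StringIO(sample)))
data = rows[1:]                      # skip the header row of counter names
cpu = [float(r[1]) for r in data]    # % Processor Time samples
mem = [float(r[2]) for r in data]    # Available MBytes samples

avg_cpu = sum(cpu) / len(cpu)
min_mem = min(mem)
print(f"avg CPU: {avg_cpu:.1f}%, lowest free RAM: {min_mem:.0f} MB")
```

The same idea scales up: log counters to CSV for the whole test run, then summarize them next to your response-time numbers so you can see whether the server was the bottleneck.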

As you can see, there's a lot involved in performance testing, and these posts have just scratched the surface. The thing to remember here is that performance testing is *very* different from functional testing. You'll need to learn the difference between a 302 and a 304 HTTP return code. You'll need to understand which parts of your application use system resources, and why. If you've never done performance testing, be ready for a pretty steep learning curve. Make sure you budget time, whether it's on or off the clock, to learn as much as you can.
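As a small illustration of why those return codes matter to a load tester: a 302 means the client got redirected and had to make an extra round trip, while a 304 means the server skipped sending the response body because the client's cached copy was still good - so a test run full of 304s is measuring a much lighter workload than a cold-cache run. A quick Python sketch (the function and the list of codes are hypothetical, standing in for whatever your load tool logs):

```python
from collections import Counter

def summarize_statuses(codes):
    """Tally HTTP status codes from a load-test log and flag results
    that can quietly skew your throughput numbers."""
    counts = Counter(codes)
    notes = []
    if counts.get(302):
        notes.append("302 Found: each redirect adds an extra request/response round trip")
    if counts.get(304):
        notes.append("304 Not Modified: no response body was sent, so the server "
                     "did far less work than a cold-cache hit")
    return counts, notes

counts, notes = summarize_statuses([200, 200, 302, 304, 304])
print(counts[304], len(notes))  # 2 2
```

If a large share of your responses are 304s and you meant to test full page downloads, clear the client-side cache (or disable conditional requests in your tool) and rerun.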

I'm not a performance testing expert by any means, so now I'll point you to the people who are :)

Corey Goldberg has a great blog and builds performance testing tools. Definitely check him out:
Twitter: @cgoldberg

Scott Barber is a performance testing guru:

Bob Dugan is an extremely smart performance tester and a great guy to work with:

Here are a couple of helpful books on performance testing:

Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software

Performance Testing Microsoft .NET Web Applications

Friday, February 12, 2010

Performance Testing 101 - 3

Now that we've talked about scenarios, the next thing I want to discuss is your performance testing environment.

Your environment should be an isolated, standalone network. All "normal" network activities like virus scans, automatic system updates and backups should be disabled. You don't want anything running that could skew your test results. Now, some people may question this, saying it's not a "real world" scenario. At this point, though, you're trying to see how fast your system can perform under the absolute best conditions. It's a way to say, "OK, we are absolutely positive we can handle this number of transactions per second." My experience has been that if you have other things running and the numbers come back bad, people will immediately jump on that as the cause. "Oh, you had a virus scanner running - that's probably messing up the results. Uninstall the scanner and run the tests again." If your tests take days to run, that's not what you want to hear, especially if the test results come back the same as they were before.

Having your tests on an isolated network also ensures that normal day-to-day traffic isn't skewing your results. I once had four hours' worth of testing negated because someone three cubes down started downloading MSDN disks. I wound up having to run my tests at night, after everyone had gone home, to ensure there wasn't any rogue traffic. Again, having an isolated network will save you a lot of testing time and frustration in the long run.

The other thing to keep in mind is that your test systems should match what you'll be using in production. Whatever hardware your system will run on in the real world is what you should be testing with. Some IT folks balk at this, citing cost as a problem. Thing is, there are issues you won't be able to find if you're just testing with a bunch of hand-me-down systems. If you're not running on a box that has the same type of processor, how will you know if you're taking full advantage of that processor's architecture? This extends beyond just the systems themselves. Switches, NICs, even the network cabling involved can impact your tests' performance. Don't fall into the trap of "well, we'll be using better hardware in the field, so performance will be better," because it might not be.

Friday, February 5, 2010

Performance Testing 101 - 2

Ok, so now that we've taken care of terminology and gotten our performance baselines, it's time to start thinking about configuring our scenarios. This is where you figure out how many people will be performing a given action simultaneously. Again, if you are testing a pre-existing website, you can go through your web server's logs to see how many people typically access a given page at a time.

If you're testing a never-before-released site, though, this part gets tricky. You'll need to make some educated guesses about your users' behavior. Let's say we're testing an e-commerce site, and we're going to roll it out to 1,000 users. Let's further assume that these users are equally divided between the east and west coasts of the USA (500 on each coast). For test cases, we want to perform logins, searches, and purchases.

Now, the knee-jerk reaction is to say that since you have 1,000 users, your system should be able to handle 1,000 simultaneous logins. That may be true, but is it a realistic scenario? In the real world, your users are separated by time zones, some of them may not be interested in purchasing a product today, some may be on vacation, some of them will only shop while they're on lunch break, and others will only shop at night after their kids have gone to bed. So in considering this, you may want to start out verifying that your system can handle 250 or 300 users logging in simultaneously. Then build up from there.

The next thing to consider is the frequency of the actions. Each user logs into the site once, but once they're in, how many times do they perform a search? If you go to Amazon, how many products do you search for in a single session? Between physical books, eBooks, movies and toys, I'd say I probably search for 2 or 3 items each time I log in. So that means searches are performed 2 or 3 times as often as logins. If you're planning on having 250 - 300 users log in to start with, then you're looking at 500 - 900 searches. Also consider the purchase scenario - not everyone who logs in and searches is going to buy something; maybe a third to half of the people who log in will actually make a purchase. So you're looking at roughly 85 - 150 purchases for that scenario.
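Arithmetic like this is worth scripting so you can rerun it every time your estimates change. Here's a minimal Python sketch; the function name and the default ratios are just illustrations of the guesses above, not measured data:

```python
def scenario_volumes(login_range, searches_per_login=(2, 3), purchase_rate=(1/3, 1/2)):
    """Turn an estimated range of concurrent logins into rough volumes for
    the other scenarios. The default ratios (2-3 searches per login, a third
    to half of users purchasing) are guesses - swap in your own numbers once
    you have real logs to work from."""
    low, high = login_range
    searches = (low * searches_per_login[0], high * searches_per_login[1])
    purchases = (round(low * purchase_rate[0]), round(high * purchase_rate[1]))
    return {"logins": login_range, "searches": searches, "purchases": purchases}

volumes = scenario_volumes((250, 300))
print(volumes["searches"])   # (500, 900)
print(volumes["purchases"])  # (83, 150)
```

Once the site is live, replace the guessed ratios with ones pulled from your web server logs and rerun the same calculation.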

So take some time and think about your end user's behavior when defining scenarios for load testing - don't make it strictly a numbers game.

Next week, we'll talk about your performance testing environment.