Monday, June 14, 2010

MS Office Automation

I've worked with a lot of people who have wanted to automate MS Office apps. Sometimes they're just trying to write results data out to a Word doc; other times, their company builds a plug-in for an Office app and they're trying to create automated tests for that plug-in. And they all start out by pointing a commercial record & playback tool at the app and clicking that red "record" button. At best, this results in tests that click on obscurely named objects; at worst, tests that are driven exclusively by X/Y coordinates.

You see, MS Office apps (Word, PowerPoint, Excel, Outlook) are not record & playback friendly. I learned this the hard way a long time ago, and I'm hoping to spare others the same pain. You will not be able to create a reliable, robust set of MS Office tests via a record & playback tool. The only way I've found to work effectively with Office apps is via scripting. The Office object model lets you programmatically access any bit of text, any cell, or any slide. For example, this code launches Word, opens a new document, and writes "Hello":

' Launch Word and make it visible
Set objWord = CreateObject("Word.Application")
objWord.Visible = True

' Add a new document and type into it
' (Selection hangs off the Word Application object, not the Document)
Set objDoc = objWord.Documents.Add()
objWord.Selection.TypeText "Hello"


If you try to do those same actions via record & replay, your results will be spotty at best. So when you need to work with Office apps, automate them via their object model (there's a quick Excel sketch below the list of help files). A little-known fact is that MS ships help files with Office that describe each app's object model, along with coding examples. These files live in [OfficeInstallDirectory]\[OfficeVersion]\1033.
The help file for each app is:
  • Word - VBAWD10.chm
  • Excel - VBAXL10.chm
  • PowerPoint - VBAPP10.chm
There's also an online version here: http://msdn.microsoft.com/en-us/library/y1xatbkd%28VS.80%29.aspx
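
To give a second example of the object-model approach, here's a minimal Excel sketch. It's just an illustration, not a full solution: it assumes Excel 2007 or later is installed, and "C:\results.xlsx" is a hypothetical output path.

' Launch Excel and create a new workbook
Set objExcel = CreateObject("Excel.Application")
objExcel.Visible = True
Set objBook = objExcel.Workbooks.Add()

' Write a couple of header cells on the first worksheet
Set objSheet = objBook.Worksheets(1)
objSheet.Cells(1, 1).Value = "Test Case"
objSheet.Cells(1, 2).Value = "Result"

' Save and quit ("C:\results.xlsx" is a hypothetical path)
objBook.SaveAs "C:\results.xlsx"
objExcel.Quit

Run it with cscript or wscript; the same pattern (CreateObject, drive the object model, save, quit) applies to PowerPoint and Outlook as well.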

Keep these handy the next time you're automating MS Office, and your efforts will be much more successful.

Friday, June 11, 2010

Silver Bullets and Snake Oil

After doing test automation for most of my professional life, I'm really tired of vendors claiming that automation is a silver bullet. I'm also tired of test managers claiming that automation is snake oil. Here's the thing: the silver bullet pitch ("Automate all tests with the click of a button; no coding knowledge or original thought needed") is snake oil. I've said before on this blog that if you want to automate, you need to learn how to write scripts. I still stand by that. So it's the vendors' claims that are the problem, not automation itself.

Let's look at this using a different example: the microwave oven. The little cookbook that came with my microwave claims the microwave can cook any food just as well as traditional methods. But that's really not true. While the microwave can cook just about everything, the food comes out different. Chicken comes out rubbery, for example. The button marked "Popcorn" cooks popcorn for too long and often burns it. There's a recipe for cooking a small turkey, but I'm not courageous enough to try that.

Now, the cookbook is making a silver bullet pitch: "The microwave cooks everything just as well as traditional means." And, based on my experience, that's not the case. However, does that mean I should throw the microwave away and denounce all microwave ovens as worthless? No. It means I still use the regular oven to cook chicken, and I manually key in how long to cook the popcorn. In short, I adapt the microwave to my needs and use it for what it does well. I still have a grill, a deep fryer, an oven and a stovetop. I microwave what's appropriate for me to microwave, and that's it.

Same thing for automation. I would never recommend (or even try) automating 100% of your tests. I would recommend automating tasks that are difficult or impossible to perform by hand. I would recommend automating "prep" tasks, like loading a database or building strings for use in boundary testing. I would recommend using tools to automatically parse log files for errors, rather than trying to read them by hand. The application of automation is the important thing here; you need to be smart about what makes sense to automate, just like you need to be smart about what you try to cook in the microwave.
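
To make the log-parsing idea concrete, here's a minimal VBScript sketch. The path C:\logs\app.log and the "ERROR" marker are hypothetical; adjust both to match your own logs.

Const ForReading = 1
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objLog = objFSO.OpenTextFile("C:\logs\app.log", ForReading)

' Echo every line that contains "ERROR" instead of reading the file by hand
Do Until objLog.AtEndOfStream
    strLine = objLog.ReadLine
    If InStr(strLine, "ERROR") > 0 Then
        WScript.Echo strLine
    End If
Loop
objLog.Close

Run it under cscript and redirect the output to a file, and you've got a quick error report without reading a single log by hand.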

Silver bullets are for werewolves. Snake oil is for 19th-century hucksters. Automation is neither. Automation is for testers and developers who want to put in some effort to speed up their existing processes. It's just a tool. That's all.

Wednesday, June 9, 2010

Ethics in Automation

Automation really shines when it's used to speed up a process or to address some tedious task. Unfortunately, some less-than-scrupulous individuals have asked me to do things with automation in the past that are just flat-out wrong. Case in point: someone once asked me to create an automated test that would search on Google and then perform click-throughs on a competitor's AdWords ads. The theory was that, since the competitor had to pay each time a link was clicked, it would cost the competitor a ton of money. I'm pretty sure that Google has some sort of mechanism set up to prevent click fraud, but the fact that someone would stoop to measures like this really pissed me off. I told that person no. Firmly.

Another time, someone else asked me to write a script that would artificially inflate the number of times a particular page had been viewed. He felt that if people saw the page had a high number of views, it would seem "more interesting" to other viewers and they'd focus on it. Again, I said no. Firmly.

I don't mean to go all high-and-mighty or holier-than-thou, but the fact of the matter is that unscrupulous business practices shouldn't be performed in the first place, and they sure as hell shouldn't be automated.

Always make sure to use your automation powers for good, not evil.

And now I'll get off my soapbox.

Thursday, June 3, 2010

Cloud vs Traditional: Which Load Test Approach is Better?

Hi All

I'd like to get your opinions on cloud-based load-testing tools like LoadStorm and BrowserMob versus more traditional tools like LoadRunner and WebPerformance. What advantages do the cloud tools provide that the traditional ones don't, and vice versa?

Here's my background - I was taught how to do load testing with LoadRunner. My mentor said that you should always set up an isolated network and run tests in that environment to get your app's code running well. Once that's done, move the code to a staging/production environment and then generate load from outside the network against your production environment to see how the hardware responds.

Is this still the recommended way of doing things? If so, it seems that the cloud-based tools are at a disadvantage because they can't test against an isolated network. Or can the results generated by the cloud-based tools easily identify where the performance bottlenecks are?

Please sound off in the comments.

Thanks,
Nick