Ben Simo (http://www.questioningsoftware.com/) mentioned via Twitter that he wanted to do a presentation on good automation coding practices. That got me thinking about the things I do to keep my code maintainable, readable, and, ultimately, useful as projects progress. Here's a quick list of some of my suggested coding practices:
Give functions/methods and variables descriptive names. When you're writing your code, don't give functions arbitrary names like Test1. Instead, give them descriptive names so it's immediately obvious what the code is going to do. For example, "verifyAdminUserCanDeleteReport" may be a bit wordy, but you know exactly what it's going to do. Similarly, give your variables descriptive names as well. I like to prefix my variables with a shorthand representation of their type; for example, str for string, int for integer. So if I created a variable for a user name, I'd name it strUserName. This helps eliminate confusion, especially when you're dealing with a big test.
Comment everything. The code may seem completely clear to you as you write it, and so commenting may seem like a waste of time. But chances are that when you leave it for 6 months to do something else, you won't remember what you were doing when you get back to it. Or, when someone else inherits your script, they won't know what's going on. Comments provide a nice, easy way to prevent that type of confusion. I like to include information like this with each function I create:
/*
Name:
Description:
Syntax:
Parameters:
Return Values:
Example:
*/
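As a sketch of what that template might look like filled in, here's a hypothetical helper (the function, records, and details are illustrative assumptions, written as a Python docstring):

```python
def verifyRecordExists(strName):
    """
    Name:          verifyRecordExists
    Description:   Checks whether a record with the given name exists
                   in the (hypothetical) phone-book list below.
    Syntax:        verifyRecordExists(strName)
    Parameters:    strName - full name of the record, "Last, First"
    Return Values: True if the record is found, False otherwise
    Example:       verifyRecordExists("Jameson, Rita")
    """
    lstRecords = ["Howland, Jessica", "Jameson, Rita"]
    return strName in lstRecords
```

Six months later, the header answers the questions you'd otherwise have to re-derive from the code.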
You can also use utilities like TwinText to generate help files based on your comments. This provides a nice reference that people can browse for functions that may be useful in their own testing.
Don't chain tests. I see this a lot in UI automation, where "Test B" is dependent on "Test A"'s results. For example, let's say you are testing a telephone directory application. Test Case A scrolls to the "Howland, Jessica" user record in the phone book and verifies its data. Test Case B clicks the Next button 3 times to get to the "Jameson, Rita" user record, and then verifies its data. If Test Case A fails to reach "Howland, Jessica", there's a chance that Test Case B will fail because the number of clicks needed to get to "Jameson, Rita" is dependent upon being at the "Howland, Jessica" record. This is a chain, and could lead to problems with your script. A better approach would be to start at a known point in the database (such as the first record), and use a parameterized loop to click Next until "Jameson, Rita" is displayed in the Name field.
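The unchained approach might look something like this sketch. The PhoneBookStub class stands in for the real UI, and all the names (first_record, click_next, name_field, goToRecord) are assumptions for illustration, not a real driver API:

```python
class PhoneBookStub:
    """Stand-in for the real phone-book UI; records in a known order."""
    def __init__(self, lstRecords):
        self.lstRecords = lstRecords
        self.intIndex = 0

    def first_record(self):
        # Jump to a known starting point, regardless of prior state.
        self.intIndex = 0

    def click_next(self):
        # Advance one record; stop at the last one like a real UI would.
        self.intIndex = min(self.intIndex + 1, len(self.lstRecords) - 1)

    def name_field(self):
        return self.lstRecords[self.intIndex]

def goToRecord(app, strTargetName, intMaxClicks=100):
    """Click Next until the target record is displayed. Starting from
    the first record makes this independent of any other test's
    end state; the click cap keeps a missing record from looping
    forever."""
    app.first_record()
    for _ in range(intMaxClicks):
        if app.name_field() == strTargetName:
            return True
        app.click_next()
    return False
```

Because every test calls goToRecord with the record it actually needs, Test Case B no longer cares whether Test Case A ran, passed, or failed.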
Use source control. This lets you keep a versioned history of your tests, and lets you see who's edited what parts of the tests.
Keep your own backups. The source control server may be backed up, the network drive where you keep your working copies may be backed up, but do yourself a favor and keep your own backups of your tests. External hard drives are cheap, and writing a short script to copy everything from your working directory onto the external drive will only take a minute or two. This can save you a lot of headaches and frustration when servers crash; when your IT guys realize the backup drives are full and haven't successfully backed up in a month; or when you get told that your restore task is at the bottom of the to-do list.
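That script really can be short. Here's a minimal sketch using Python's standard library; the SRC and DST paths are placeholder assumptions you'd swap for your own working directory and external drive:

```python
import shutil
from datetime import datetime
from pathlib import Path

SRC = Path("C:/automation/working")  # assumption: your working copy
DST = Path("E:/backups")             # assumption: your external drive

def backup(src=SRC, dst_root=DST):
    # Timestamped folder name, so each run creates a fresh copy
    # instead of overwriting yesterday's backup.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = dst_root / f"tests-{stamp}"
    shutil.copytree(src, target)
    return target
```

Schedule it to run nightly and you have a safety net that doesn't depend on anyone else's to-do list.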
What good practices would you recommend?