Before taking the plunge into test automation, many managers want to know how long until they see an ROI (return on investment). In the case of a commercial tool, some vendors will actually send out an Excel spreadsheet with formulas that let you plug in the amount you pay a tester per hour, and then the spreadsheet calculates how much it costs to find that bug during the test phase vs how much it costs to find it once the app has gone to the customer.
While I certainly understand the need to justify an expense, I think ROI by itself is the wrong way to look at test automation, because it puts the focus on dollars rather than on utility. If you buy a $9,000 automation tool and pay your tester $40/hour, then using the ROI approach, your test automation tool needs to run 225 hours' worth of tests before you see an ROI. If you've bought multiple copies of that tool, you multiply both the total cost and the break-even hours by the number of copies. So if you bought 5 copies of the test tool, that's $45,000 you spent, and you need to run 1,125 hours' worth of tests to break even. "That's too much time or money," someone says. "Look at a cheaper tool."
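The break-even arithmetic above can be sketched as a few lines of Python. This is just an illustration of the vendor-spreadsheet logic, using the figures from this post; the function name is my own, not anything from an actual vendor's spreadsheet.

```python
def breakeven_hours(tool_cost: float, tester_rate: float, copies: int = 1) -> float:
    """Hours of automated test execution needed before the tool
    'pays for itself' under the simple ROI model: total license
    cost divided by the tester's hourly rate."""
    return (tool_cost * copies) / tester_rate

# Figures from the example: $9,000 tool, $40/hour tester.
print(breakeven_hours(9_000, 40))      # one copy   -> 225.0 hours
print(breakeven_hours(9_000, 40, 5))   # five copies -> 1125.0 hours
```

Note what this model leaves out: it only counts hours of tests run, not what the tester does with the time the automation gives back.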
What I don't see people consider is how long it will take before the automation is doing something productive. So let's say you're considering spending money on a commercial tool. Instead of asking "How long until I see an ROI?", ask "How long until this lets me do something else?" You automate so that you don't have to do the same tasks over and over; you get the computer running regression tests so you can focus on other test activities. How long will it take you to create that first smoke test? How long until you're freed up to do new tests? Let's say it takes you 2 work days to get your smoke test up and running. That means that two days after implementing automation, you start getting some of your time back. That's how long until the automation is doing something useful. I call this Time Until Useful (TUU).
I think that 16 hours until your automation is doing something useful is a lot more palatable than 1,125 hours until your automation justifies its existence. Plus, now you're working and finding bugs that you didn't have time to find before. I've never seen that time savings, or a "new-bugs-found-during-exploratory-testing-while-the-automation-runs-a-smoke-test" line item, included on an ROI sheet.
If you're considering open source tools, TUU is a much more natural metric. Freed from cost concerns, a test team gets the chance to look at a bunch of tools and pick the one that will let them get the most done the fastest. I've seen managers say "oh, you're going open source?" and then promptly tune out, because they figure that since there's no cost involved, there's nothing to justify and therefore nothing to care about. Thing is, open source tools have a learning curve too, and you want to make sure your team selects one that meets their needs and still has a good TUU. If a tool is free but takes 2 months before it's doing anything of value, is that worthwhile? No; you'd want the team to look at a different open source tool that will do something useful sooner than that.
TUU is a metric that should carry the same weight as ROI, or maybe more. Whether you're going commercial or open source, you should always consider how long it will be until the automation is doing something useful.