Monday, August 31, 2009
Helpful Tool: BGInfo
If you have a lot of systems in your test lab, it can be hard to remember each one's specs. Enter BGInfo. This handy little app displays all manner of system information as your desktop background, including CPU, RAM, IP address, system name, and more. You can also customize it with scripts to show any other relevant info, like which version of MS Office or Firefox a particular system has.
BGInfo is free to download from this location.
Enjoy!
Friday, August 28, 2009
Helpful Tool: Recuva
"Oh Sh*t!"
We've all heard a co-worker say something like that when they've accidentally deleted a file. Sometimes it's sitting in the Recycle Bin, but for those unfortunate souls who pressed Shift+Delete, deleted a file from a flash drive, or emptied the Recycle Bin without thinking, there's Recuva.
Recuva is a great little tool for recovering deleted files. It's free, uses minimal system resources, and can be run from a flash drive, making it a must-have for anyone who troubleshoots computers.
Check it out at http://www.recuva.com/
Wednesday, August 26, 2009
Monitor Your Environment
It's important to keep tabs on what's installed on your test systems. Changes in your environment can alter test results and really throw a wrench in your day.
I was running performance tests once. My developer and I had tweaked the systems to their best performance, and the automated tests had been refined. We ran tests before we went home that night and confirmed our results were good. The next morning, our VP wanted us to run the tests again so he could watch. Imagine our surprise when all our operations were taking 1.5 to 2 times longer.
My VP immediately suspected that the test tool was faulty, and that's never a good way to start a discussion. Luckily, I had written a Python app that ran in the background on all my systems. It was a simple little affair: it polled the Uninstall section of the Windows Registry and logged any time a value was added or removed. During the night, a Windows update had been automatically applied, and it had adversely impacted the systems' performance. Without my utility, there's no way I would've caught that.
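For the curious, here's a rough sketch of the idea, not my original script: it uses Python 3's winreg module, and the polling interval and plain print logging are just placeholders.

    import time
    import winreg  # ships with Python 3 on Windows; called _winreg in Python 2

    UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

    def installed_programs():
        """Return the set of subkey names under the Uninstall key."""
        programs = set()
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as key:
            subkey_count = winreg.QueryInfoKey(key)[0]
            for index in range(subkey_count):
                programs.add(winreg.EnumKey(key, index))
        return programs

    def watch(interval_seconds=60):
        """Poll the Uninstall key and log anything added or removed."""
        known = installed_programs()
        while True:
            time.sleep(interval_seconds)
            current = installed_programs()
            for name in sorted(current - known):
                print(time.strftime("%Y-%m-%d %H:%M:%S"), "ADDED:", name)
            for name in sorted(known - current):
                print(time.strftime("%Y-%m-%d %H:%M:%S"), "REMOVED:", name)
            known = current

    if __name__ == "__main__":
        watch()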
We spent a couple of hours on the phone with Microsoft and got a workaround for the problem. That done, we ran the tests again, and voilà, everything was happy. I disabled automatic updates on all my systems from that point on.
I lost a couple of hours to this problem, and my test automation's reputation was on the line, all because of an automatic update. So keep tabs on your environment, and make sure you know what's happening to it.
Monday, August 24, 2009
Automation Honors
I've just found out that my blog is a finalist for an Automation Honors award from ATI. ATI is a great resource for automated testing, and I'm honored to have been nominated.
Please head over here and vote for me!
http://www.automatedtestinginstitute.com/home/index.php?option=com_mad4joomla&jid=2&Itemid=137
You can learn more about ATI here: http://www.automatedtestinginstitute.com
Test Automation Does Not Replace Testers
Back when automobile companies started putting robots on the assembly line, lots of blue-collar workers lost their jobs. The robots were more efficient, faster, and didn't need bathroom breaks. By switching over to an automated process, the auto companies produced more cars faster and, by their own accounting, more cheaply.
Some folks have the same thought about automated testing. "If we automate all our tests, we can lay off half our test team! Think about the cost savings!"
These people are idiots.
See, testing isn't like auto manufacturing. An assembly line worker who's putting wheels on a car does just that. He's not inspecting the wheel for defects, he's not verifying the welds on the axle where the wheel goes, he's just attaching the wheel. There's an inspector later on down the line who checks the work. That assembler's work can be automated with no problem, because he's doing a simple, repetitive, monotonous task.
Testing is different. Yes, there are monotonous regression tests that need to be run, but even when your testers are following a test case, they're still observing beyond what's in the test. If a test step says "Click the OK button to see the login screen" and doing that shows the login screen, that's great. But if clicking that button shows the login screen and also turns the screen bright pink, a tester will log a bug, even though the scripted behavior is correct. Robots don't think beyond what they're told. They can't deduce, reason, or infer. Remember that.
Also, when a robot replaces an assembly line worker, it completely replaces all the tasks that worker did. In my example above, the only thing that worker did was put wheels on a car. It's highly unlikely that your manual test team only has a handful of test cases. More likely, they're scrambling to make sure the basic functionality test cases are covered. Automating the basic tests will free them up to work on more advanced tasks, and let me assure you, there's no lack of those.
Automation augments your testers, and it lets them work more efficiently. But it should never be viewed as a way to replace the people on your test team.
Friday, August 21, 2009
Helpful Tool: Process Monitor
Have you ever wanted to watch and see exactly what files your app was calling, what registry keys it was working with, or what DLLs it was loading? Then Process Monitor is for you. This handy little app can show you every file, registry, and process operation happening on your system, or be filtered down to a single process. You can download it for free from here.
It's Windows only, so if there are similar tools that you use for Linux, please sound off in the comments.
Wednesday, August 19, 2009
Helpful Tool: WinMerge
Need a tool that can do a diff of those really long reports you just generated? Have two script files that look like they contain the exact same info, but they're behaving differently? You need a diff tool. From their site:
"WinMerge is an Open Source differencing and merging tool for Windows. WinMerge can compare both folders and files, presenting differences in a visual text format that is easy to understand and handle."
WinMerge is Windows only, so sound off in the comments if you have a favorite diff tool for OS X or Linux.
You can download WinMerge from here.
Monday, August 17, 2009
Helpful Tool: Wireshark
When you're doing web testing, whether it's functional or performance, it's always a good idea to be able to "see" exactly what's going across the wire. That's where Wireshark comes in. Wireshark is an open source packet monitoring tool that lets you see each individual request that's made, and the response that gets returned. It's available for Windows, OS X & Linux, and can be downloaded for free from here:
www.wireshark.org
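Once you have a capture running, display filters are the fastest way to cut through the noise. A few standard Wireshark display filter expressions to get you started (swap in your own host and port, of course):

    http.request                  (show only HTTP requests)
    http.response.code >= 400     (show only HTTP error responses)
    ip.addr == 192.168.1.10       (traffic to or from a single host)
    tcp.port == 8080              (traffic on a specific port)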
Thursday, August 13, 2009
Simple Tests
There's a great episode of Star Trek: The Next Generation where the omnipotent Q is made human. One of the crises facing the crew in this episode is a moon whose orbit is decaying, threatening to crash it into its planet. Q tries to help Geordi and Data prevent this disaster, and tells them that there's a simple solution. Geordi, excited, asks what it is. Q's response: "Change the gravitational constant of the universe." Geordi is less than thrilled with this answer, to say the least, but that's how Q would have handled the situation.
I see something similar when people are first evaluating automated test tools. They come up with a "simple test" and if the tool fails to perform that test, they immediately write the tool off as useless.
It's true there are times when a given automated tool isn't a good fit for a particular application, but keep perspective here. You, the tester, are on the same level as an empowered Q. You see everything in your application, and you know how to make it work. The test tool doesn't have your ability to reason, to identify the cause of problems, or to adapt. The test tool doesn't "see" the application the same way you do - it sees your application the way a *computer* sees it. You see a button labeled OK. Your test tool sees an extended WinForms control with a dynamically generated ID like cmdOK87823, or worse, the code may be obfuscated so that a test tool can't read any information from it.
The tool is operating on a much lower playing field than you are, and asking it to find that dynamically generated button in an obfuscated application is about as reasonable as asking Geordi to change the gravitational constant of the universe. So what do you do in situations like this? You find ways to make the tool work with your application. In the TNG episode I mentioned, Data and Geordi found a way to wrap a warp field around the moon that did something very similar to Q's solution. To find your dynamically generated controls, you could insert wildcards into your test scripts, so the tool would look for cmdOK* and match whatever identifier had been generated. You may need to run your tests against unobfuscated code. Just be aware that tasks like this may be a necessity, and you'll be much better off when it comes time to automate.
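If your tool doesn't support wildcards natively, the same idea is easy to script yourself. Here's a rough sketch in Python using fnmatch; the control IDs and their dynamically generated suffixes are made up for illustration:

    import fnmatch

    # Control IDs as a test tool might see them (hypothetical examples).
    controls = ["cmdOK87823", "cmdCancel10432", "txtUserName5521"]

    def find_control(pattern):
        """Return the first control ID matching a wildcard pattern."""
        matches = fnmatch.filter(controls, pattern)
        if not matches:
            raise LookupError("no control matching %r" % pattern)
        return matches[0]

    print(find_control("cmdOK*"))  # prints cmdOK87823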
Wednesday, August 12, 2009
Snap Decisions
Occasionally someone will contact me on a Wednesday afternoon and tell me they need a demo of my software right away, because they have to decide whether to buy it tomorrow. Now, some people say this because they think they'll get attention faster, but some legitimately plan on making a purchase within 24 hours.
These calls scare the daylights out of me. My company provides a 30-day evaluation of our software so that people can try it out and make sure it works well for them. I can show things off against sample applications, but at the end of the day, what matters is that it works with their application. If they rely solely on my demo, that second part gets completely overlooked.
I liken this to buying a car. Would you walk into a dealership and purchase a new car just based on the commercial you'd seen? No, you want to take it for a test drive. You want to know how well it corners, how loud the engine is, how comfortable the seats are.
So if you have someone on your team who's pushing for a snap decision on a product, do everything you can to keep that from happening. Make sure some time has been spent ensuring that the program meets your needs. Otherwise you could end up in a bad spot where a tool has been purchased, it doesn't work, and now you have no money to get something else in place.
Monday, August 10, 2009
Automating Installs
A common scenario in a manual test case is something like this:
1 - Uninstall old version of application
2 - Install new build
3 - [Perform actual test here]
When people are starting out with test automation tools, they often want to automate steps 1 and 2. This makes complete sense, but the approach taken is almost always the wrong one. I've seen many people try to use record-and-playback tools to open the Control Panel, click Add/Remove Programs, and uninstall.
Now, conceptually, this shouldn't be a big deal, but when you consider that the Control Panel differs in almost every version of Windows, that recorded script will break quite easily. A better option is to use command-line flags to remove your application.
Almost all the major install-building programs allow for the creation of command-line parameters. This lets you install your app with a command like "myapp.exe /AcceptLicenseAgreement /InstallToDefaultLocation". They usually have uninstall commands as well. Installing and uninstalling your app this way makes it a lot easier to get new versions of your programs loaded for use with automated tests.
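To make that concrete, here's a bare-bones sketch of those two steps in Python. The installer flags and path are the hypothetical ones from above, and the msiexec line assumes your uninstaller is a standard MSI:

    import subprocess

    # Silent MSI uninstall by product code; /qn suppresses all UI.
    # Exit code 1605 means "product not installed", harmless on a clean box.
    UNINSTALL = ["msiexec", "/x", "{YOUR-PRODUCT-GUID}", "/qn"]

    # Hypothetical installer flags; use whatever your build actually supports.
    INSTALL = [r"C:\builds\myapp.exe", "/AcceptLicenseAgreement",
               "/InstallToDefaultLocation"]

    def refresh_install():
        rc = subprocess.call(UNINSTALL)
        if rc not in (0, 1605):
            raise RuntimeError("uninstall failed with exit code %d" % rc)
        rc = subprocess.call(INSTALL)
        if rc != 0:
            raise RuntimeError("install failed with exit code %d" % rc)

    if __name__ == "__main__":
        refresh_install()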
Talk with your build engineer to get a list of the commands available for your product, and if there aren't any, work with him or her to get some implemented. It will only make your life easier in the long run.
Friday, August 7, 2009
LMGTFY
We've all had those moments where someone emails us a question that could've been answered with a simple Google search. Things like "What does IIS stand for?" "How many GB in a TB?" "What's the capital of Iceland?" (ok, I've never actually been asked that one, but you know what I mean).
For those moments when you're feeling just a little snarky at someone for throwing one of these questions your way, there's Let Me Google That For You. LMGTFY takes a question and creates a URL that you can email in response. When your associate clicks the link, they'll see a video of the question being typed into the Google home page and the Google Search button being clicked, followed by the message "Was that so hard?" Then they're taken to the actual search results.
Here's a sample with the aforementioned "What's the capital of Iceland?" question. Enjoy!
Wednesday, August 5, 2009
Blind Faith
Your automated tests are great. They run unattended every night, and each morning you come in to a little report on your desktop full of green success messages. You feel good. You know those tests are making sure the basic functionality of your app is running smoothly, and there hasn't been a failure in days.
Then you give the first build of the app to the test team. Within an hour, 30 high-priority bugs have been logged. You review the bugs and see that close to half of them are scenarios covered by the automated tests. Stupid testers, you think. They must be doing something wrong. You kick off your automated suite, and it passes. Then you try to run the same test by hand, and it fails. You try it again by hand, and it fails again.
Then you take a look at the automated test and realize there's a flaw in its logic: the test is written so that it always passes, regardless of what's actually happening. You quickly dive into the code and fix the application, and then you go back and fix the tests.
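The difference between a test that verifies and a test that merely runs can be a single missing assertion. A contrived sketch in Python, with a hypothetical app object standing in for your UI automation layer:

    # Broken: this "test" reports success no matter what the app does.
    def test_login_screen_broken(app):
        app.click("OK")
        return "PASS"

    # Better: actually verify the state the test claims to cover.
    def test_login_screen(app):
        app.click("OK")
        assert app.window("Login").is_visible(), "login screen never appeared"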
The moral of the story here is to make sure you know exactly what your automated tests are doing. Have them code reviewed just like any other part of your project. Just because a report gets spit out saying everything passed, doesn't mean everything did.
Monday, August 3, 2009
Ninjas or Pirates? Both.
There's an age-old debate about who would win in a fight, ninjas or pirates. But as I once saw in a Nodwick comic, if you create Ninja Pirates, you've got something absolutely unstoppable.
For some reason, this got me thinking about the arguments I hear over whether it's worth it to do exclusively automated testing or exclusively manual testing. On their own, both techniques are powerful, but you'll find the most bugs when you combine them. So augment your manual tests with automated tools that let your testers work faster and more efficiently. The end result will be well worth it.
Plus, you can brag to your friends that you've got a bunch of Ninja Pirates working for you.