When it comes to finding defects in web applications, timing is everything. A bug found in production, for instance, will generally complicate matters more than a bug found in pre-production. This is especially true of uTest projects, where testing is sometimes compressed into a shorter-than-ideal time frame.
So to help you find the right bugs at the right time, we suggest checking out the latest article from our old friend Matt Heusser. In Seven Ways to Find Software Defects Before They Hit Production, Matt shares some valuable tips on ways to reinvigorate your testing. Here are three techniques that I think uTesters will find especially useful. Enjoy!
Technique 1: Quick Attacks
If you have little or no prior knowledge of a system, you don’t know its requirements, so formal techniques to transform the requirements into tests won’t help. Instead, you might attack the system, looking to send it into a state of panic by filling in the wrong thing.
If a field is required, leave it blank. If the user interface implies a workflow, try to take a different route. If the input field is clearly supposed to be a number, try typing a word, or try typing a number too large for the system to handle. If you must use numbers, figure out whether the system expects a whole number (an integer), and use a decimal-point number instead. If you must use words, try using the CharMap application in Windows (Start > Run > charmap) and select some special characters that are outside the standard characters accessed directly by keys on your keyboard.
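The attacks above can be captured as a reusable input table. Below is a minimal sketch, where `validate_quantity` is a hypothetical field validator standing in for whatever the application under test does with a numeric field; the attack strings follow the list in the excerpt.

```python
def validate_quantity(raw):
    """Hypothetical validator: accepts whole numbers from 1 to 9999."""
    if raw is None or raw == "":
        raise ValueError("required field left blank")
    if not raw.isdigit():
        raise ValueError("not a whole number")
    n = int(raw)
    if not 1 <= n <= 9999:
        raise ValueError("out of range")
    return n

# Each quick attack pairs a hostile input with the reason we try it.
QUICK_ATTACKS = [
    ("", "required field left blank"),
    ("ten", "word where a number is expected"),
    ("3.5", "decimal where a whole number is expected"),
    ("99999999999999999999", "number too large for the system"),
    ("\u00e9\u2603", "special characters outside the keyboard set"),
]

def run_quick_attacks():
    """Fire every attack and record whether the system panicked,
    accepted bad input (a possible bug), or rejected it cleanly."""
    results = []
    for raw, why in QUICK_ATTACKS:
        try:
            validate_quantity(raw)
            results.append((raw, why, "ACCEPTED"))  # worth a bug report
        except ValueError:
            results.append((raw, why, "REJECTED"))  # handled cleanly
    return results
```

The value of the table format is that new attacks learned on one project can be appended and replayed on the next one with no prior knowledge of the system's requirements.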
Technique 3: Common Failure Modes
Remember when the Web was young and you tried to order a book or something from a website, but nothing seemed to happen? You clicked the order button again. If you were lucky, two books showed up in your shopping cart; if you were unlucky, they showed up on your doorstep. That failure mode was a kind of common problem—one that happened a lot, and we learned to test for it. Eventually, programmers got wise and improved their code, and this attack became less effective, but it points to something: Platforms often have the same bug coming up again and again.
For a mobile application, for example, I might experiment with losing coverage, or having too many applications open at the same time with a low-memory device. I use these techniques because I’ve seen them fail. Back in the software organization, we can mine our bug-tracking software to figure out what happens a lot, and then we test for it. Over time, we teach our programmers not to make these mistakes, or to prevent them, improving the code quality before it gets to hands-on exploration.
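The double-submit failure Heusser describes can be turned into a permanent regression test. Here is a minimal sketch: `OrderService` is a hypothetical server-side handler, and the idempotency-key design is one common fix for this failure mode, not something prescribed in the article.

```python
import uuid

class OrderService:
    """Hypothetical order handler that deduplicates repeat submissions."""
    def __init__(self):
        self.orders = {}

    def submit(self, idempotency_key, item):
        # A retry (the impatient second click) with the same key returns
        # the original order instead of creating a duplicate.
        if idempotency_key not in self.orders:
            self.orders[idempotency_key] = {"item": item}
        return self.orders[idempotency_key]

def test_double_click_creates_one_order():
    svc = OrderService()
    key = str(uuid.uuid4())   # generated once, when the order form renders
    svc.submit(key, "book")   # first click on the order button
    svc.submit(key, "book")   # nothing seemed to happen, so click again
    assert len(svc.orders) == 1  # one book in the cart, not two
```

Once a failure mode like this shows up repeatedly in the bug tracker, encoding it as a test is exactly the "mine what happens a lot, then test for it" loop described above.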
Technique 5: Use Cases and Soap Opera Tests
Use cases and scenarios focus on software in its role to enable a human being to do something. This shift has us come up with examples of what a human would actually try to accomplish, instead of thinking of the software as a collection of features, such as “open” and “save.” Alistair Cockburn’s Writing Effective Use Cases describes the method in detail, but you can think of the idea as pulling out the who, what, and why behaviors of system users into a description before the software is built. These examples drive requirements, programming, and even testing, and they can certainly hit the highlights of functional behavior, defining confirmatory tests for your application that you can write in plain English and a customer can understand.
Scenarios are similar, in that they can be used to define how someone might use a software system. Soap opera tests are crazy, wild combinations of improbable scenarios, the kind you might see on a TV soap opera.
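A soap opera test can be written as one long end-to-end script that chains improbable events. The sketch below uses a hypothetical `Account` class purely for illustration; the point is the shape of the scenario, not the domain.

```python
class Account:
    """Hypothetical bank account with a tiny bit of state."""
    def __init__(self):
        self.balance = 0
        self.closed = False

    def deposit(self, amount):
        if self.closed:
            raise RuntimeError("account closed")
        self.balance += amount

    def withdraw(self, amount):
        if self.closed:
            raise RuntimeError("account closed")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

    def close(self):
        if self.balance != 0:
            raise ValueError("cannot close with a nonzero balance")
        self.closed = True

def soap_opera_scenario():
    """A newlywed opens an account, overdraws on the honeymoon,
    inherits a fortune, drains it, and closes the account in one
    sitting. The system should survive every twist without crashing."""
    acct = Account()
    acct.deposit(100)
    try:
        acct.withdraw(5000)       # wildly improbable overdraft attempt
    except ValueError:
        pass                      # rejected gracefully, not a crash
    acct.deposit(1_000_000)       # the surprise inheritance
    acct.withdraw(1_000_100)      # drain everything, to the penny
    acct.close()                  # and walk away
    return acct.closed and acct.balance == 0
```

Unlike a confirmatory use-case test, which checks one who/what/why path, the soap opera deliberately piles unlikely events into a single session to flush out state that no single-feature test would reach.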
Please note that the excerpts above are just that: excerpts. As you'd expect from Heusser, he goes into great detail on each technique, citing strengths and weaknesses as well as tips for optimizing each approach. In other words, read the entire article.