Yesterday we posted Part 1 of our interview with James Bach, where he discussed tester certifications, faking test projects, his latest book and a wide range of other topics (including life as a freelance sentry in a parallel universe). Today, for Part 2, we discuss tips for automated checking, what makes a good tester a great tester, his flying lessons and much more. Enjoy!
uTest: Do you see the quality of resources in the testing field increasing or decreasing (tools, training, certs, etc.)? What do you think are some of the drivers of that change?
JB: There are many good resources out there, and yes, some resources are getting better. There’s testingeducation.org and the Weekend Testers project, to name two. At the same time there are terrible things out there (such as certification and all the stupidity that goes with that). You have to be a smart consumer, because it seems to me that the bad stuff has always outweighed the good stuff by an order of magnitude or so. Maybe by two orders of magnitude.
uTest: When it comes to automated checking, what are some of the key opportunities to employ it that generally generate a positive ROI? Are there any good rules of thumb that can be used, e.g. if you plan on executing the same test 7 times, then it is a candidate (understanding, of course, that some assumptions need to be made to answer this)?
JB: Here’s how I think of it:
- Is the product highly controllable and observable? A command line tool that provides its output solely to the console window is inexpensive to automate, compared to an iPod touchscreen app. I want to get under the GUI.
- How expensive is the tool I’m using? I urge you not to use expensive tools, even if they work. Never let your manager buy them. Because expensive tools become something you MUST use, even if they don’t work. A free tool may be freely abandoned. This gives you flexibility.
- How well can I automate the oracle? Will the bugs be able to elude my automation because it can’t tell if a complex graphic is rendered correctly?
- What is the learning and testing value I’m giving up by using automated checks? I find that doing a test multiple times also causes me to learn and see new things in the product. Furthermore, when I re-run tests, I often run them in a different way, and that allows me to find new bugs.
- Can the automated check be parameterized and randomized, so that I get lots of similar checks for very little additional investment? I like automation more for data intensive testing, because I get new tests just by changing the database.
- Is the technology “pyramid shaped?” In some products, a lot of underlying code boils up to one simple output; by placing checks on that output, we may be able to find lots of bugs. In other products, there are many different pathways, and you need a lot more checks to get decent coverage.
- How critical are the checks to the business? Is this critical functionality? Is it a common usage scenario? These are candidates for smoke testing.
- Is this part of the product especially prone to breaking? If so, that may be good for automation, UNLESS it breaks in a way that also breaks the automation.
- When I automate, I do it incrementally, in small bits.
I want automated checks for high-value, highly testable parts of the product, and I want to do them in such a way that they aren’t constantly breaking or giving me false readings. I want to augment those checks with periodic sapient testing as a cross-check.
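[Editor's note: the parameterized-and-randomized idea above can be sketched in a few lines of Python. This is an illustrative example, not from the interview: `insertion_sort` is a hypothetical routine standing in for whatever the product exposes below the GUI, and Python's built-in `sorted` plays the role of an automatable oracle. Randomizing the inputs yields many similar checks for very little extra investment.]

```python
import random

# Hypothetical function under test: a simple insertion sort standing in
# for a product routine reachable below the GUI.
def insertion_sort(items):
    result = []
    for item in items:
        i = len(result)
        # Walk left until the item's sorted position is found.
        while i > 0 and result[i - 1] > item:
            i -= 1
        result.insert(i, item)
    return result

def run_randomized_checks(runs=100, seed=42):
    """Generate random inputs and compare against a trusted oracle
    (the built-in sorted). Each run is a new check at almost no cost."""
    rng = random.Random(seed)  # seeded, so any failure is reproducible
    for _ in range(runs):
        size = rng.randint(0, 50)
        data = [rng.randint(-1000, 1000) for _ in range(size)]
        assert insertion_sort(data) == sorted(data), f"mismatch on {data}"
    return runs

print(run_randomized_checks())  # prints 100 when every check passes
```

Because the seed is fixed, a failing run can be replayed exactly; bumping `runs` or varying the seed buys broader coverage without writing any new checks.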
uTest: What characteristics and practices make for a good tester? How about a great tester?