Our Testing the Limits “reunion tour” rolls on this month with Michael Bolton, back for another lively session of Q&A. Michael is best known as the founder of DevelopSense, his Toronto-based testing consulting firm, and as a leading figure in Rapid Testing and the Context-Driven school of testing. In short, he’s one of the industry’s most highly regarded writers, speakers and teachers – and it’s a real pleasure to have him back. For more on Michael, be sure to check out his website and blog, or follow him on Twitter.
In Part I of our healthy two-part interview, we get his thoughts on test cases not being related to testing; the sub-par debate skills of testers; the quality chain of command; objections to Rapid Testing; and much more. Be sure to check back tomorrow for Part II. Enjoy!
uTest: It’s been almost two years since our last interview. Where does the time go? We’ve followed you pretty closely during that time (on Twitter, don’t worry), but for those who haven’t, what have they missed? New publications? New courses? New ideas on testing? What’s new with Michael Bolton?
MB: I’ve been traveling like crazy this year, and I’m booked pretty heavily through the end of the year. I’m beginning to set up my schedule for next year—so if people would like to schedule an in-house class, now is a great time to ask. As for new publications: How to Reduce the Cost of Testing, a new book edited by Matt Heusser and Govind Kulkarni, has just been released. I’m pleased to say that I’ve got a chapter in there, alongside a number of other members of our community.
I don’t specialize in new ideas in testing so much, but rather in refining and reframing ideas we’ve had for years in more specific and, I hope, more useful ways. The other thing that I love to do is to bring ideas from elsewhere into testing. Currently I’m fascinated by the work of Harry Collins, who studies the sociology of science and the ways in which people develop knowledge and skill. Tacit and Explicit Knowledge is his most recent book; The Shape of Actions is older. I’m most interested in the idea of repair, which is Collins’ notion for the ways in which people fix up information as they prepare to send it, or as they receive and interpret it.
As an example, I’m 5’ 8” tall. If I ask you how tall I am in centimeters (and provide you with the ratio of 2.54 centimeters to the inch), you’ll probably do a little math in your head to translate 5’ 8” into 68 inches. If you do that, it’s because you have tacit knowledge that a foot is 12 inches, and it’s quicker to do five times 12 in your head and add eight than to work it out on the calculator. Then you’ll report that I’m 173 centimeters (or 172), rather than what the calculator tells you: 172.72. If you round the answer up or down to a whole centimeter, it’s because you have tacit knowledge that the extra precision is useless when my height changes by more than that with every breath. The calculator doesn’t know that, but people often fix up the interaction with the tool, applying that kind of tacit knowledge without noticing that they’re doing it. Collins argues that we give calculators and computers and machines more credit than they deserve when we ascribe intelligence or knowledge to them, even when we do it casually or informally.
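To make the arithmetic concrete, here is a minimal sketch in Python (not from the interview; the function names are invented for illustration). The first function plays the calculator, producing raw, context-free precision; the second plays the person, applying tacit knowledge to “repair” the answer:

```python
# Collins' "repair" idea in miniature: the calculator produces raw,
# context-free precision; the person applies tacit knowledge and rounds.
# Function names are hypothetical, invented for this illustration.

CM_PER_INCH = 2.54
INCHES_PER_FOOT = 12  # the tacit knowledge a person brings to the task

def calculator_height_cm(feet, inches):
    """What the calculator reports: full precision, no judgment."""
    return (feet * INCHES_PER_FOOT + inches) * CM_PER_INCH

def repaired_height_cm(feet, inches):
    """What a person reports: rounded, since finer precision is useless."""
    return round(calculator_height_cm(feet, inches))

print(calculator_height_cm(5, 8))  # 172.72 -- the machine's answer
print(repaired_height_cm(5, 8))    # 173    -- the repaired, human answer
```

The interesting part is that the rounding step encodes knowledge the calculator never had; someone had to decide that whole centimeters were the useful unit.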
My latest hobby horse is definitely not new, but I’d like to have a go at it anyway. I’d like to skewer the idea of the test case having any serious relationship to testing. Test cases are typically examples of what the product should do. That’s important; we often need examples to help explicate requirements and desires. But examples are not tests, so I’d like to call those artifacts example cases or examples rather than test cases. They’re confirmatory, not exploratory; checks, not tests. Brian Marick has written a lot about examples; Matt Heusser has too; so has Gojko Adzic. James Bach has been railing about test cases for a long time. Test cases are often overly elaborate, and expensive to prepare and maintain. They’d be even more expensive if testers didn’t repair them on the fly, inserting subtle variations and making observations that the test case doesn’t specify. Just as Collins suggests about machines, test cases get more credit than they deserve. As Pradeep Soundarajan would say, the test case doesn’t find the bug. The tester finds the bug, and the test case has a role in that. Now: the development of checks and the interpretation of checks—those things require all kinds of sapience and skill.
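To illustrate the distinction with a hypothetical sketch (none of this code comes from Michael, and the names are invented): a check encodes one confirmatory example that a machine can evaluate; the testing is everything a human does around it.

```python
# A hypothetical "example case" expressed as a check. It confirms one
# specific expectation, pass or fail, and nothing more.

def feet_inches_to_cm(feet, inches):
    return (feet * 12 + inches) * 2.54

def check_height_conversion():
    # The check itself: one input, one algorithmically decidable verdict.
    assert round(feet_inches_to_cm(5, 8)) == 173

check_height_conversion()

# Deciding which example matters, varying the inputs, noticing that the
# raw 172.72 is uselessly precise, or wondering what feet=-1 should do --
# that investigation is the tester's work, not the check's.
```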
A test, to me, is an investigation, not a bit of input and output for a function. Yet people tend to think of testing in terms of test cases. Even worse, people count test cases; and even worse than that, they count passing and failing test cases to measure the completeness of project work or testing work. It’s like evaluating the quality of a newspaper by counting the number of stories in it without reference to the content, the quality of the writing, the quality of the investigation, the relevance of the report, whether a given article contains one story or a dozen, and so forth. Counting stories would be a ludicrous way of measuring either the quality of the newspaper or the state of the world. Yet, it seems to me, many development and testing organizations try to observe and evaluate testing in this completely shallow and ridiculous way. They do that because they seem to think about things in terms of units of production. Learning, discoveries, threats to value, management responses… none of these things are widgets. They’re not things, either, for that matter.
uTest: In a recent blog post, you wrote about the inability of some testers to properly frame tests, mainly because they haven’t been asked to. Generally speaking, what other qualities or skills do you find testers to be lacking in?