Our Testing the Limits “reunion tour” rolls on this month with Michael Bolton, back for another lively session of Q&A. Michael is best known as the founder of DevelopSense, his Toronto-based testing consulting firm, and as a leading figure in Rapid Testing and the Context-Driven school of testing. In short, he’s one of the industry’s most highly regarded writers, speakers and teachers – and it’s a real pleasure to have him back. For more on Michael, be sure to check out his website, blog or follow him on Twitter.
In part I of our healthy two-part interview, we get his thoughts on test cases not being related to testing; the sub-par debate skills of testers; the quality chain of command; objections to Rapid Testing and much more. Be sure to check back tomorrow for Part II. Enjoy!
uTest: It’s been almost two years since our last interview. Where does the time go? We’ve followed you pretty closely during that time (on Twitter, don’t worry), but for those who haven’t, what have they missed? New publications? New courses? New ideas on testing? What’s new with Michael Bolton?
MB: I’ve been traveling like crazy this year, and I’m booked pretty heavily through the end of the year. I’m beginning to set up my schedule for next year—so if people would like to schedule an in-house class, now is a great time to ask. As for new publications, How to Reduce the Cost of Testing, a new book edited by Matt Heusser and Govind Kulkarni, has just been released. I’m pleased to say that I’ve got a chapter in there, alongside a number of other members of our community.
I don’t specialize in new ideas in testing so much, but rather in refining and reframing ideas we’ve had for years in more specific and, I hope, more useful ways. The other thing that I love to do is to bring ideas from elsewhere into testing. Currently I’m fascinated by the work of Harry Collins, who studies the sociology of science and the ways in which people develop knowledge and skill. Tacit and Explicit Knowledge is his most recent book; The Shape of Actions is older. I’m most interested in the idea of repair, which is Collins’ notion for the ways in which people fix up information as they prepare to send it, or as they receive and interpret it.
As an example, I’m 5’ 8” tall. If I ask you how tall I am in centimeters (and provide you with the ratio of 2.54 centimeters to the inch), you’ll probably do a little math in your head to translate 5’ 8” into 68 inches. If you do that, it’s because you have tacit knowledge that a foot is 12 inches, and it’s quicker to do five times 12 in your head and add eight than to work it out on the calculator. Then you’ll report that I’m 173 centimeters (or 172), rather than what the calculator tells you: 172.72. If you round the answer up or down to a whole centimeter, it’s because you have tacit knowledge that the extra precision is useless when my height changes more than that with every breath. The calculator doesn’t know that, but people often fix up the interaction with the tool, applying that kind of tacit knowledge without noticing that they’re doing it. Collins argues that we give calculators and computers and machines more credit than they deserve when we ascribe intelligence or knowledge to them, even when we do it casually or informally.
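The arithmetic above is small enough to sketch in a few lines; the final round() call is the machine analogue of the “repair” a person performs when reporting a sensible answer (the variable names are mine, purely for illustration):

```python
# Convert 5' 8" to centimetres, as in the example above.
CM_PER_INCH = 2.54

feet, inches = 5, 8
total_inches = feet * 12 + inches    # tacit knowledge: 12 inches to the foot
raw_cm = total_inches * CM_PER_INCH  # what the calculator says: roughly 172.72
reported_cm = round(raw_cm)          # the human "repair": drop useless precision

print(reported_cm)                   # prints 173
```

The calculator happily carries the hundredths of a centimetre; it takes a person’s tacit knowledge to know they’re noise.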
My latest hobby horse is definitely not new, but I’d like to have a go at it anyway. I’d like to skewer the idea of the test case having any serious relationship to testing. Test cases are typically examples of what the product should do. That’s important; we often need examples to help explicate requirements and desires. But examples are not tests, so I’d like to call those artifacts example cases or examples rather than test cases. They’re confirmatory, not exploratory; checks, not tests. Brian Marick has written a lot about examples; Matt Heusser has too; so has Gojko Adzic. James Bach has been railing about test cases for a long time. Often test cases are overly elaborate, and expensive to prepare and maintain. They’d be even more expensive if testers didn’t repair them on the fly, inserting subtle variations and making observations that the test case doesn’t specify. Just as Collins suggests about machines, test cases get more credit than they deserve. As Pradeep Soundarajan would say, the test case doesn’t find the bug. The tester finds the bug, and the test case has a role in that. Now: the development of checks and the interpretation of checks—those things require all kinds of sapience and skill.
A test, to me, is an investigation, not a bit of input and output for a function. Yet people tend to think of testing in terms of test cases. Even worse, people count test cases; and even worse than that, they count passing and failing test cases to measure the completeness of project work or testing work. It’s like evaluating the quality of a newspaper by counting the number of stories in it without reference to the content, the quality of the writing, the quality of the investigation, the relevance of the report, whether a given article contains one story or a dozen, and so forth. Counting stories would be a ludicrous way of measuring either the quality of the newspaper or the state of the world. Yet, it seems to me, many development and testing organizations try to observe and evaluate testing in this completely shallow and ridiculous way. They do that because they seem to think about things in terms of units of production. Learning, discoveries, threats to value, management responses… none of these things are widgets. They’re not things, either, for that matter.
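To make the check/test distinction concrete, here’s a minimal sketch of a check in the sense used above: a single, machine-decidable pass/fail observation about one example (the function and names here are hypothetical, not from the interview):

```python
# A "check": confirmatory, machine-decidable, and silent about everything
# it wasn't told to observe. The function under check is hypothetical.
def add(a, b):
    return a + b

def check_add_example():
    # Confirms one expected example; it investigates nothing beyond it.
    assert add(2, 3) == 5

check_add_example()  # passing says this one example holds, no more and no less
```

Counting how many such checks pass tells you roughly as much about a product as counting stories tells you about a newspaper.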
uTest: In a recent blog post, you wrote about the inability of some testers to properly frame tests, mainly because they haven’t been asked to. Generally speaking, what other qualities or skills do you find testers to be lacking in?
MB: Oh, dear, it’s so sad because there are so many gaps. James Bach, in a recent chat with you, identified rhetoric—how to speak and write clearly, articulately, and precisely—as something that many testers are missing. Many testers aren’t so good at developing an argument (in the sense of a line of reasoning rather than a fight). Many testers see obstacles to their work as problems for testing, when in fact they’re pretty much always problems for the project. Testing helps to reveal those problems if you have an appropriate mindset. (I wrote about that recently here.) It seems that many testers, like many programmers, often lapse into the binary fallacy—something is either one thing or another, yes or no, true or false, pass or fail.
Tom Waits put it beautifully in an interview a couple of years back, when someone asked him about how he finds the truth of his character. “Truths,” he said. “Truth isn’t a word that should be used in the singular. It should always be used in the plural.” Rob Sabourin has been observing gaps in testers’ command of math, and how to apply it in testing. As a craft, we seem to be aiding and abetting management in bad measurement; we need more people to study how to do measurement well. At the Pacific Northwest Software Quality Conference, I was delighted to see Kristina Sitarski—who studied anthropology in university—deliver an excellent analysis of interactions between herself and a tester she was pairing with, focusing on learning and perceptual styles. If we want to be well regarded as a craft, we need more reports like this, based on studies of what we actually do. We don’t need any more of the inept works of fiction and fantasy that you see in neo-Platonist process manuals.
uTest: We see that you were disappointed in the New Yorker iPad app, tweeting something to the effect of “it might have been tested, but it wasn’t fixed.” In your view, is quality lacking in the mobile space? If so, why? Is mobile inherently more of a testing challenge than its web and desktop cousins?
MB: When I tweeted that, the New Yorker app was crashing for me within a few moments of starting up. I had to wait a few weeks while that got sorted out. The magazine as delivered to the iPad is typically on the order of 150MB in size, where (comparable product heuristic!) The Economist is on the order of 3MB. The product would frequently crash during a download, and wouldn’t pick up where the download had left off. Even if downloads were successful, downloading the New Yorker every week would wipe out the base amount of data consumption on my mobile billing plan. It would be cheaper to buy the dead tree version of the magazine.
Certainly there’s a great deal of extra complexity to be dealt with in the mobile space, when we look at the number of different systems and functions through which a given bit of data passes, or the enormous number of platforms on which people want to run apps. Before Windows came along to abstract the hardware, each application program came with its own special drivers to talk to each kind of hardware: each video card, each printer, each mouse, each network card, times each operating system. Developing and supporting all that stuff was completely nuts. Since there’s a perception of lots of opportunity and lots of money in the mobile space, there’s a gold rush and lots of people are heading for the Klondike. Now there are competing mobile OSs, times all those versions of those OSs, times all those handsets and tablets and mobile browser versions and interconnecting apps and services. So in a way, we’re back to the late 1980s and early 1990s, back in the DOS days, when I first got involved with programming and support and testing. Hey you kids, get out of my yard!
uTest: You’ve said before that “decisions about quality are inherently subjective” and that “testers are not responsible for making decisions about quality, but rather for informing decisions about quality.” So our subjective question for you is this: Should testers be responsible for making decisions about quality?
MB: Like everyone else, testers should be responsible for making decisions about the quality of their own work. But we already have lots of roles and titles for people who make decisions about the work of other people: we call them “product manager”, “program manager”, “project manager”, “product owner”, “director of development”, “vice president”, “CEO”. I urge testers: You want to manage a project? Become a project manager. I urge quality assurance people: You want to assure quality? Make sure you have real, final authority over the product and the people who produce it. That is, become a manager. You’re not a gatekeeper of quality; you’re a speed bump on the road to quality. (Speed bumps are also known as “sleeping policemen”. That’s apt.)
uTest: You’ve been traveling the world the last few years teaching courses in Rapid Software Testing. In your experience, are certain regions of the world more open to this testing mind-set than others? If so, why do you think this is the case?
MB: It seems that Rapid Software Testing has a lot of traction in certain circles in northwestern Europe, especially Sweden, places where there seems to be a good deal of room for intellectual rigor and independence at the same time. Rikard Edgren had an interesting explanation for the success of rapid and context-driven testing there. He said that the Swedes in particular believe really strongly in the social contract, that people are interdependent, that society should take care of everyone, that things like education and health care are rights, not privileges, and that people should reasonably expect to get them at a high level of quality. And to pay taxes for them. Although, he said, the Swedes aren’t crazy; no one likes paying taxes, but it’s part of the deal if you want all this other good stuff. So there’s this sense of mutual support and strong government, and the Swedes (broadly speaking, of course) believe in that… but they don’t like other people telling them what to do. When Rikard said that, I thought it fit really well with what we espouse: We’re all in this together. We give ourselves and each other freedom to do the right thing and to screw up. We also take responsibility for our actions, and we take responsibility for taking care of each other. We work collaboratively, but we recognize that few people like being under someone else’s thumb. That kind of freedom combined with responsibility allows people to blossom, I think. I’d argue the Baltic countries have been ahead of North America for a while in terms of politics and social issues. Rapid Testing is popular in New Zealand and Australia too, to some degree. It’s that spirit of independent interdependence, if you will. James is excited about Estonia, too, but I haven’t been there. Yet. The UK has this emerging group too. So we’re seeing shoots coming up through the snow.
uTest: In our last interview, you mentioned specifically that New Zealand and Scandinavia were producing some excellent testers with fresh insight and new ideas. Have your travels uncovered any other areas of testing innovation?
MB: When I do, they’re almost always local geographical pockets, or some skunkworks when they’re inside larger organizations. Steve Green runs this cool little testing services company in England that focuses on skilled testers and very fast turnaround. Paul Holland has been doing rapid testing for years at Alcatel-Lucent, with excellent results. Pradeep Soundarajan is running a testing services company in India that specializes in rapid testing approaches. That company became profitable in its first year. Darren McMillan has done some really great work in explaining the ways in which he’s been using mind mapping. Those are only some of the prominent people. Alas, NDAs, company confidentiality policies, modesty, and fear restrain people from saying too much about what is and isn’t working.
In addition, everyone does rapid and exploratory work to some degree, but I’ve never seen process enthusiasts who have actually observed processes closely enough to notice that. If you want to observe process, you have to observe people. Most process enthusiasts I’ve seen observe artifacts, rather than real work in action. The Social Life of Information talks about how quickly we could find blatant errors in our process models if only we took a more diversified anthropological approach. Don’t get me wrong; I’m at best a dilettante in that stuff myself. But I think we have to start getting serious about it.
Editor’s note: That’s it for now. Be sure to check back for Part II tomorrow!