We’re thrilled to have Michael Bolton as the latest victim of our Testing the Limits series. As the founder of DevelopSense, Michael has traveled the world teaching the craft of software testing to businesses and individuals alike. Since 2005, he has specialized in courses on Rapid Software Testing – which he co-authored with James Bach. Michael is also a prolific writer, and his publications include hundreds of articles, essays and columns. Aside from his blog, you can keep tabs on his latest work through Twitter.
In Part I of the “trilogy” we discuss the Weekend Testers, testing abroad, how numbers can enslave managers, and of course, his pop-star namesake.
uTest: You’ve been a thought leader in the testing space for a while now, but people still seem to get you confused with Michael Bolton (the singer) on Twitter. Ever thought about creating a tester alias? Or have you considered asking him to change his name since “he’s the one that sucks.” Assuming you (and our readers) have seen Office Space, I bet this joke never gets old.
MB: Yeah, it never gets old. Try renting a car with this name.
A couple of things on that. First, Office Space captures very well what it’s like to have my name. Second, it’s not his real name; he changed it already. Way back when, before Office Space, I was working in tech support at Quarterdeck Canada. American callers would occasionally turn north when there were long phone queues in Santa Monica. On one call, when I introduced myself to the customer, he laughed. “Really? That’s your real name?” “Yes, really,” I said, expecting one of the usual jokes. He said, “You know, it isn’t his real name. I used to be his bass player.” The singer’s real name is Bolotin, but according to the bass player, there was no hope that radio DJs would ever pronounce “Bolotin” right, so he changed it.
uTest: We recently interviewed your friend and colleague James Bach, who had high praise for a group called the Weekend Testers. Can you give our readers a quick recap of what this group does, and whether or not you’re on board with their testing philosophy?
MB: James hadn’t heard about Weekend Testers (or maybe he had heard, but it hadn’t registered) until I raved about them to him. When I was in India in November 2009, I attended a talk by Ajay Balamurugadas, who told an amazing story. They’re a bunch of fairly young testers from Bangalore. The core organizers (Ajay plus Sharath Byregowda, Manoj Nair, and Parimala Shankaraiah) are students of our colleague Pradeep Soundararajan. They gather online every week to spend an hour or so testing some freeware or some Web service, and then they spend an hour or so debriefing each other.
There’s a weekly leader who sets a mission for the session. Sometimes the focus is on finding bugs quickly; sometimes it’s on choosing a particular approach, sometimes it’s on note taking. In the talk, Ajay described testers taking responsibility for their own skills development; testers building a self-critical community; testers overcoming obstacles like power cuts and buggy messaging tools; testers ignoring meaningless certifications; testers providing service to the open source development community; testers discovering testing for themselves, owning it.
At the end of the presentation, I whooped, stood up and applauded like I was at a rock concert. At Q&A time, I raised my hand to say that I didn’t have a question, but that this was the most exciting talk on testing I had seen in years, maybe ever. At the time of the presentation, there were already chapters forming in other cities in India. At the beginning of this year, Markus Gärtner and Anna Baik announced a European chapter. It’s spreading!
Am I on board with their philosophy? Hell yeah! These people, and people like them, are the future of skilled testing.
uTest: It looks like you’ve been “testing abroad” for quite some time now. What’s the biggest thing you’ve learned about testing in locales other than the US and Canada (your two primary residences)?
MB: Scandinavia—Sweden in particular—and New Zealand seem to be percolating more excellent testing than other places. Meanwhile, I observe that many Western firms—mostly the Americans—are making life difficult for testers in India and other developing countries. These firms didn’t know much about testing to begin with. They viewed it as a rote, clerical activity, piecework, checking work, commodity work that delivered little value. They knew how to do checking, sort of, very slowly and very expensively. But they didn’t know how to do testing, or how to increase its value, so they focused on cost and outsourced it.
The developing countries have millions of intelligent people who want to develop skills, but the West is still requiring these overprescribed, expensive, low-value, confirmatory approaches in which smart human testers are being asked to behave like slow, dumb machines. Confirmatory tests do find problems, but to a great degree, programmers should be pairing and writing low-level automated tests to squash those problems before testers ever see them. Then free up the testers to look for higher-level problems and previously unanticipated risks.
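Editor's note: for readers unfamiliar with the distinction Michael draws here, a low-level confirmatory check is the kind of thing a machine does well. A minimal sketch in pytest style might look like this (the `parse_price` function is hypothetical, purely for illustration):

```python
def parse_price(text):
    """Hypothetical example function: parse a string like '$19.99' into cents."""
    cleaned = text.strip().lstrip("$")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

def test_parse_price_confirms_expected_values():
    # A confirmatory check: it asserts specific, anticipated outputs.
    # Fast, repeatable, and automatable -- exactly the work that frees
    # human testers to explore for unanticipated risks instead.
    assert parse_price("$19.99") == 1999
    assert parse_price("5") == 500
```

The check confirms what we already expect; it says nothing about inputs nobody thought of, which is where skilled exploratory testing earns its keep.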
uTest: Your testing philosophy seems to draw a lot of heat from some circles. For instance, a lot of people seem to think you’re an anti-numbers guy (as you mentioned recently on our blog). What is it these people are so opposed to? Or are they simply misinterpreting your approach to testing?
MB: Excellent testing, the way we teach it, involves thinking critically about the product and developing a story about it. In order to do that expertly, we have to think critically about how we’re getting that story and how we’re telling it in ways that are important to our clients. Testing is about trying to make sure that our clients aren’t being fooled. If we’re fooling ourselves, we’re likely to miss important problems.
When I object to people using numbers badly or using numbers excessively, some people perceive that I object to using numbers at all – not so. It’s just that I love numbers so much that I hate to see the poor things abused.
Some people enslave numbers. They make numbers work too hard, and too often. I’ve seen organizations collect piles of data about defect escape ratios and defect detection percentages. They hire market research firms and calculate the ratio of happy customers to unhappy customers. But the aggregated data doesn’t tell you anything specific about how to make things better for the unhappy customers.
When you enslave numbers, they eventually rise up, revolt, and enslave you. These organizations spend so much time collecting the data and talking about it and justifying it and trying to duck blame that they don’t seem to have time to do anything about the actual problems, which generally fall into two categories. One: the organizations are trying to do more work than they can handle with the approaches they’re using. Two: they’re not listening to people that matter—neither to their customers, nor to their own front-line staff, many of whom are closest to the customers. VPs could learn a ton of useful business information from their own customer service and technical support reps, and they could learn plenty about the project by listening to their programmers and their testers. Product and project knowledge gets mediated by middle managers and numbers; it turns from information into data. When your car is about to go off a cliff, it’s a weird time to be thinking about gas mileage and drag coefficients; better to take the right control action—look out the window and steer or use the brake until you’re back on course. Once you’re back to being productive, then you can start thinking about optimizing, if you think you need to. I wrote about that here and here.
Besides, people don’t decide things based on numbers anyway. They decide based on how they feel about the numbers. On their own, numbers don’t tell you what is or isn’t okay. Your judgment does that. Your judgment is governed by the synthesis of observations and inferences and facts and feelings, in your head and in your gut. Effective decisions require both head and gut. Each can mislead the other, which ends up with someone being fooled, or worse. Neuroscientists and psychologists and economists (people like Antonio Damasio, Dan Ariely, Daniel Kahneman and the late Amos Tversky, Daniel Gilbert, Gerd Gigerenzer, Daniel Goleman) have been looking into each other’s fields to discover fascinating stuff about the links between emotion and intelligence. (If you want to do research in that area, apparently it helps to be named Dan.)
Some people seem genuinely scared by the human issues and the uncertainty that’s inherent in software development. They want testing to be this simple technical problem, a confirmatory thing. Questions related to exploration and discovery and investigation and learning emphasize the fact that we don’t know everything from the outset when we’re dealing with complicated systems. We can’t. We prepare what we think are the requirements, but we often don’t understand what we want until we can compare it to something we’ve got. We have to live in this messy, uncertain, continuously changing world that we can’t control very well.
In that kind of world, repeatability isn’t that big a deal; computers are good at that. The big deal is adaptability; can our software help people to solve their problems, not only reliably but also flexibly? Designing software to do that with all the design work up front is difficult—I’d say impossible. But we can do a bit of focused work, gather information (that is, test), and then tune things where they need tuning, add things where they need to be added, drop things where they’re not needed any more. Then we repeat the cycle often, with lots of little variations. That’s what Agile development is about. That’s what evolution is all about, really. In that kind of world, testing is an important service, and the kind of testing that James and I teach is designed to make that service sophisticated and powerful and fast. I think the need for testing annoys people who think everything should be known in advance. Alas, I believe that there are lots of people, even many testers, who think like that.
Editor’s note: Check out Part II of the interview.