Testing the Limits With Michael Bolton – Part I

Our Testing the Limits “reunion tour” rolls on this month with Michael Bolton, back for another lively session of Q&A. Michael is best known as the founder of DevelopSense, his Toronto-based testing consulting firm, and as a leading figure in Rapid Testing and the Context-Driven school of testing. In short, he’s one of the industry’s most highly regarded writers, speakers and teachers – and it’s a real pleasure to have him back. For more on Michael, be sure to check out his website and blog, or follow him on Twitter.

In Part I of our substantial two-part interview, we get his thoughts on why test cases have little to do with testing; the sub-par debate skills of testers; the quality chain of command; objections to Rapid Testing; and much more. Be sure to check back tomorrow for Part II. Enjoy!

uTest: It’s been almost two years since our last interview. Where does the time go? We’ve followed you pretty closely during that time (on Twitter, don’t worry), but for those who haven’t, what have they missed? New publications? New courses? New ideas on testing? What’s new with Michael Bolton?

MB: I’ve been traveling like crazy this year, and I’m booked pretty heavily through the end of the year. I’m beginning to set up my schedule for next year—so if people would like to schedule an in-house class, now is a great time to ask. As for new publications, How to Reduce the Cost of Testing, a new book edited by Matt Heusser and Govind Kulkarni, has just been released. I’m pleased to say that I’ve got a chapter in there, alongside a number of other members of our community.

I don’t specialize in new ideas in testing so much, but rather in refining and reframing ideas we’ve had for years in more specific and, I hope, more useful ways. The other thing that I love to do is to bring ideas from elsewhere into testing.  Currently I’m fascinated by the work of Harry Collins, who studies the sociology of science and the ways in which people develop knowledge and skill. Tacit and Explicit Knowledge is his most recent book; The Shape of Actions is older.  I’m most interested in the idea of repair, which is Collins’ notion for the ways in which people fix up information as they prepare to send it, or as they receive and interpret it.

As an example, I’m 5’ 8” tall.  If I ask you how tall I am in centimeters (and provide you with the ratio of 2.54 centimeters to the inch), you’ll probably do a little math in your head to translate 5’ 8” into 68 inches.  If you do that, it’s because you have tacit knowledge that a foot is 12 inches, and it’s quicker to do five times 12 in your head and add eight than to work it out on the calculator.  Then you’ll report that I’m 173 centimeters (or 172), rather than what the calculator tells you:  172.72.  If you round the answer up or down to a whole centimeter, it’s because you have tacit knowledge that the extra precision is useless when my height changes more than that with every breath. The calculator doesn’t know that, but people often fix up the interaction with the tool, applying that kind of tacit knowledge without noticing that they’re doing it.  Collins argues that we give calculators and computers and machines more credit than they deserve when we ascribe intelligence or knowledge to them, even when we do it casually or informally.
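The arithmetic in Bolton’s example can be sketched in a few lines of Python. This is purely illustrative (the snippet and its variable names are mine, not Collins’ or Bolton’s); the point is that the rounding step is the human “repair” that the calculator never performs:

```python
INCHES_PER_FOOT = 12   # tacit knowledge: a foot is 12 inches
CM_PER_INCH = 2.54     # the ratio supplied in the question

feet, inches = 5, 8
total_inches = feet * INCHES_PER_FOOT + inches   # 68 inches
height_cm = total_inches * CM_PER_INCH           # 172.72 cm, per the calculator

# The "repair": a person rounds to a whole centimeter, tacitly knowing
# the extra precision is meaningless; the calculator does not know that.
print(f"{height_cm:.2f} cm")   # prints "172.72 cm"
print(round(height_cm))        # prints 173
```

The code computes exactly what the calculator computes; the call to `round` is the part people supply for free, usually without noticing they’re doing it.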

My latest hobby horse is definitely not new, but I’d like to have a go at it anyway.  I’d like to skewer the idea of the test case having any serious relationship to testing.  Test cases are typically examples of what the product should do. That’s important; we often need examples to help explicate requirements and desires. But examples are not tests, so I’d like to call those artifacts example cases or examples rather than test cases. They’re confirmatory, not exploratory; checks, not tests. Brian Marick has written a lot about examples; Matt Heusser has too; so has Gojko Adzic. James Bach has been railing about test cases for a long time.  Often test cases are overly elaborate and expensive to prepare and maintain.  They’d be even more expensive if testers didn’t repair them on the fly, inserting subtle variations and making observations that the test case doesn’t specify.  Just as Collins suggests about machines, test cases get more credit than they deserve.  As Pradeep Soundarajan would say, the test case doesn’t find the bug.  The tester finds the bug, and the test case has a role in that.  Now: the development of checks and the interpretation of checks—those things require all kinds of sapience and skill.

A test, to me, is an investigation, not a bit of input and output for a function.  Yet people tend to think of testing in terms of test cases.  Even worse, people count test cases; and even worse than that, they count passing and failing test cases to measure the completeness of project work or testing work.  It’s like evaluating the quality of a newspaper by counting the number of stories in it without reference to the content, the quality of the writing, the quality of the investigation, the relevance of the report, whether a given article contains one story or a dozen, and so forth.  Counting stories would be a ludicrous way of measuring either the quality of the newspaper or the state of the world. Yet, it seems to me, many development and testing organizations try to observe and evaluate testing in this completely shallow and ridiculous way. They do that because they seem to think about things in terms of units of production. Learning, discoveries, threats to value, management responses… none of these things are widgets. They’re not things, either, for that matter.

uTest: In a recent blog post, you wrote about the inability of some testers to properly frame tests, mainly because they haven’t been asked to. Generally speaking, what other qualities or skills do you find testers to be lacking in?


Essential Guide to Mobile App Testing

Get Ready To Taste, I Mean Test, Ice Cream Sandwich

I’m talking about the Android Ice Cream Sandwich (ICS) – the fourth major Android OS version – which is drawing closer to release! Google is urging developers and testers alike to get ready for it, so consider yourselves forewarned. For now, what’s most important is to make sure your apps work on large screens AND small screens, as this “cool” release is going to run on both tablets and smartphones.

According to CNET:

“Developers who created their apps specifically to run on Honeycomb-based tablets will need to tweak their APKs (Android packages) to either prevent or support their installation on smaller-screen devices.

The [Google Android developers] blog also offered some recommendations for tablet app developers on how to ensure that their design of the Action Bar widget works on smaller handsets.”



uTest Goes BIG at TechCrunch Disrupt

As you may have read in Monday’s blog post, uTest launched a new informational campaign to promote http://www.inthewildtesting.com. The website – and associated social media channels, including a Twitter profile – is intended to educate forward-thinking technology leaders about the necessity, benefits and real use cases of in-the-wild testing.

We decided to launch it at TechCrunch Disrupt in San Francisco because the very concept of in-the-wild software testing (versus traditional methodologies) is, well…disruptive. 

Sure enough, TechCrunch Disrupt turned out to be the perfect event!  More than 2,600 innovative, entrepreneurial-minded techies, investors and exhibitors (35% more attendees than expected) filled the halls of the Design Concourse Center from Monday to Wednesday.  In its usual fashion, the conference attracted top industry leaders such as Reid Hoffman of LinkedIn, Marissa Mayer of Google, Vinod Khosla, and even Ashton Kutcher.

uTest hosted a ton of terrific activities over the course of the event:



uTest & Veracode Join Forces To Protect Against Security Breaches

Every few weeks, it seems like there’s another major security breach to the website, gaming system or native app of a big global brand.  And that doesn’t even include the hundreds (thousands?) of hacks into the properties of smaller enterprises, SMBs and startups that consumers may (or may not) hear about.

In fact, a few months ago we wrote about The Top Security Hacks of 2011, and noted that the attacks on PlayStation were estimated to have cost Sony $24 billion – nearly 10x its revenue for the same period.

So here’s the point: Would you rather look back and say your company overshot and used too many systems for security testing?  Or get that nauseous, sinking feeling in your gut when your CIO wakes you at 2:00am to say the company has spent too little?

That’s why, as the cornerstone of uTest’s showstopping announcement yesterday, we launched uTest Security Testing, which leverages the talents of new and existing white hat security professionals within our crowdsourced community.  As the world’s first crowdsourced, real-world security testing service, it’s a new kid in town joining the collective effort to protect your company’s, and your customers’, private data.

Moreover, we’ve joined forces with industry leader Veracode to provide seamless access to their complementary, cloud-based application security verification services.  Veracode has scalable, policy-driven application risk management programs that help identify and eradicate numerous vulnerabilities by leveraging best-in-class technologies from vulnerability scanning to penetration testing and static code analysis.

As a result, companies will have access to a cost-effective, powerful combination of automated (Veracode) and real-world (uTest) testing that mitigates security risks across the entire software development lifecycle.

We’re thrilled, honored and excited to be partnering with Veracode.  And we’re certain that our joint offering, as a complement to organizations’ in-house security testing, will offer tech executives peace of mind at a price with infinitely fewer zeroes than $24,000,000,000.


The Silver Lining to Motorola’s Comments on Android

Over the past week, there’s been some hubbub over comments made by Motorola CEO Sanjay Jha.  According to IDG News Service, Jha “blamed the open Android app store for performance issues on some phones,” based on his statement: “Of all the Motorola Android devices that are returned, 70 percent come back because applications affect performance.”

Even though Motorola formally stated today (see MoCoNews article) that Jha’s comments were essentially misconstrued and didn’t accurately reflect his intentions, the issue has remained a lightning rod for debate.

But for those of us in the software testing community, there’s a truly positive message embedded in this issue: Motorola was validating the critical importance of QA testing in the app development process.

After all, consider Jha’s statement that, “one of the good and problematic things about Android is that it’s very very open. So anyone can put applications, third-party apps, on the market without any testing process….For power consumption, CPU utilization, some of those things, those applications are not tested. We’re beginning to understand the impact that has.”

For professional software testers, that confirms how important our work is, and actually suggests that the scope of mobile testing should be expanded.

Essentially, Jha wasn’t really referring to functional testing, or to testing exclusively in the “clean and ideal” conditions of a lab environment.  Instead, he was describing the need for real-world usability testing that examines how apps and devices perform in live conditions and affect the user experience.  For instance, did the app run sluggishly?  Did it seriously tax the battery life?  These are vital questions, particularly for apps heavy on audio and video.

At the end of the day, consumers are unlikely to differentiate whether their frustration over poor performance is caused by the smartphone or the app…or the interaction of both.  They just want to have a great experience with their new mobile “toy” or get their work done. 

Because if there isn’t enough testing on every device the app is developed for, then (as Jha said) the smartphone gets returned and everyone, including the app publisher, loses out.
