5 Myths of Software Testing

As I scan the software testing stories of the day, I’m amazed at the frequency of certain misconceptions. While there are too many to list, I wanted to share five of the most common testing myths (in my brief experience). The first three I find to be prevalent in mainstream news articles, while the other two are more common within the tech industry in general.

Take a look and see if you agree with me.

Myth 1. Testing is boring: It’s been said that “Testing is like sex. If it’s not fun, then you’re doing it wrong.” The myth of testing as a monotonous, boring activity is seen frequently in mainstream media articles, which regard testers as the assembly line workers of the software business. In reality, testing presents new and exciting challenges every day. Here’s a nice quote from Michael Bolton that pretty much sums it up:

“Testing is something that we do with the motivation of finding new information.  Testing is a process of exploration, discovery, investigation, and learning.  When we configure, operate, and observe a product with the intention of evaluating it, or with the intention of recognizing a problem that we hadn’t anticipated, we’re testing.  We’re testing when we’re trying to find out about the extents and limitations of the product and its design, and when we’re largely driven by questions that haven’t been answered or even asked before.”

Myth 2. Testing is easy: It’s often assumed testing cannot be that difficult, since everyday users find bugs all the time. In truth, testing is a very complex craft that’s not suited for your average Joe. Here’s Google’s Patrick Copeland on the qualities of a great tester:


Testing Roundtable: What’s the Biggest Weakness in the Way Companies Test?

This month, in place of our standard Testing the Limits interview, we decided to hit up a few of our past guests for a “testing roundtable” discussion. The topic: What is the biggest weakness in the way companies test software? Below are some extremely insightful answers from testing experts Michael Bolton, James Bach, Noah Sussman, Dan Bartow, Rex Black, Jim Sivak and Cem Kaner. Enjoy!

*********************

Michael Bolton, Principal at DevelopSense:

So far as I can tell, most companies treat software development as implementation of highly idealized business processes, and they treat testing as an exercise in showing that the software models those processes in a way that’s technically correct. At the same time, companies treat the people who use the software as an abstraction. The consequence is that we’re creating software that delays and frustrates the people who use it or are affected by it. When testing is focused almost entirely on checking the functions in the software, we miss enormous opportunities to learn about the real problems that people encounter as they go about their business. Why are testers so often isolated from actual end-users?

Today I was traveling through the airport. When I checked in using the online service, I had accidentally noted that I’d be checking two bags, but I only brought one with me. In addition, my flight was cancelled, and I had to be put on a later flight. The customer service representative could get me onto that flight, but she had serious trouble printing a boarding pass associated with only one bag; apparently there was a warning message that couldn’t be dismissed, such that her choices were to accept either three bags or none at all. It took fifteen minutes and two other representatives to figure out how to work around the problem. What’s worse is that the woman who was trying to help me apologized for not being able to figure it out, as if it were her responsibility. Software development organizations have managed to convince our customers that they, the customers, are responsible for our bugs and for unforgiving, unhelpful designs.

The success of a software product is only partly based on how it handles the happy path. That’s relatively easy to develop, and it’s relatively easy to check. Real testing, to me, should be based on investigating how the software allows people to deal with what we call “exceptions” or “corner cases”. That’s what we call them, but if we bothered to look, we’d find out that they were a lot more common than we realize; routine, even. Part of my vision of testing is to include a new discipline in which we do significant field research and participant observation. Instead of occasionally inviting customers to the lab (never mind sitting in the lab all by ourselves), we testers—and our organizations—could learn a lot through direct interaction with people who use the software every day; by close collaboration with technical support; and by testing rich and complex scenarios that are a lot closer to real life than simplified, idealized use cases.

*********************

James Bach, Author and Consultant, Satisfice:

There is a cluster of issues that each might qualify as the biggest weakness. I’ll pick one of those issues: chronic lack of skill, coupled with the chronic lack of any system for acquiring skill.

Pretty good testing is easy to do (that’s partly why some people like to say “testing is dead”: they think testing isn’t needed as a special focus, because they note that anyone can find at least some bugs some of the time).

Excellent testing is quite *hard* to do.

Yet as I travel all over the world, teaching testing and consulting in testing organizations, I see the same pattern almost *everywhere*: testing groups who have but a vague, wispy idea what they are trying to do; experienced testers who barely read about and don’t systematically practice their craft beyond the minimum needed to keep their employers from firing them; testers whose practice is dominated by irrational and ignorant demands of their management, because those testers have done nothing to develop their own credibility; programmers who think their automated checks will save them from disaster in the field.

How does one learn to test? You can’t get an undergraduate degree in testing. I know of two people who have a PhD in testing; one of them I admire (Meeta Prakash), while the other is, in my view, an active danger to himself and to the craft. I personally know, by name, about 150 testers who are systematically and diligently improving their skills. There are probably another several hundred I’ve met over the years and lost touch with. About three thousand people regularly read my blog, so maybe there are a lot of lurkers. A relative handful of the people I know are part of a program of study/mentoring that is sanctioned by their employers. I know of two large companies that are attempting to systematically implement the Rapid Testing methodology, which is organized around skill development rather than memorizing vocabulary words and templates. Most testers are doing it independently, however, or even in defiance of their employers.

Yes, there is TMap, TPI, ISTQB, ISEB, and many proprietary testing methodologies out there. I see them as crystallized blobs of uncritical folklore: confused thinking about testing frozen in place like fossilized tree sap. These models and procedures have been created by consultants and consulting companies to justify themselves. They neither promote nor require skill. They promote what I call “ceremonial software testing” rather than systematic critical thinking about complex technology.

Just about the best thing a tester can do to begin to develop testing skill in a big way is not to read or study any test methodology. Ignore vocabulary words. Toss aside templates. No, what that tester should do is read Introduction to General Systems Thinking, by Gerald M. Weinberg. Read it all the way through. Read it, young tester, and feel your mind get blown. Read it, and meditate on its messages, and do the exercises it recommends, and you will find yourself on a new path to testing excellence.

*********************

Noah Sussman, Technical Lead, Etsy:

A surprising number of organizations seem to dramatically underestimate the costs of software testing.

Testability is a feature and tests are a second feature. Having tests depends on the testability of an application. Thus, “testing” entails the implementation and maintenance of two separate but dependent application features. It makes sense then that testing should be difficult and expensive. Yet many enterprise testing efforts do not seem to take into account the fact that testing an application incurs the cost of adding two new, non-trivial features to that application.
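
To make that concrete, here is a minimal sketch (the functions and names are invented for illustration, not drawn from the roundtable). The first version resists testing because it reads a hidden dependency, the system clock; the second makes the clock injectable. That refactoring is the “testability” feature, work done to the application itself before any test exists; the test is then a second artifact that depends on the first.

```python
import datetime


def greeting_hardcoded() -> str:
    """Hard to test: the result depends on the wall clock."""
    hour = datetime.datetime.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"


def greeting_testable(now: datetime.datetime) -> str:
    """Testable: the clock is injected, so a test can control it."""
    return "Good morning" if now.hour < 12 else "Good afternoon"


def test_greeting_before_noon() -> None:
    # The test is the second feature; it exists only because the
    # application was changed to accept the clock as a parameter.
    assert greeting_testable(datetime.datetime(2024, 1, 1, 9, 0)) == "Good morning"
```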

There also seems to be a widespread misconception that testing somehow makes application development easier. In fact the opposite is true.

If I may mangle Kernighan: testing is much more difficult than writing the code in the first place. To implement testability and then write tests, one needs first to understand the architecture of the application under test. But testing also requires doing hard things — like input partitioning and path reduction — that are beyond the scope of the application. The reality is that to get good tests, you’re going to have to ask some of your best people to work on the problem (instead of having them work on user-facing application features). Yet many organizations seem not to have recognized this.
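
For readers unfamiliar with the term, input partitioning is roughly this: instead of trying every possible value, the tester divides the input space into classes the code should treat alike and picks representatives, especially at the boundaries between classes. A small sketch, with an invented function and invented tiers:

```python
def shipping_rate(weight_kg: float) -> float:
    """Hypothetical function under test: tiered shipping rates."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.00
    if weight_kg <= 10:
        return 12.00
    return 30.00


# One representative per equivalence class, plus the boundaries between
# classes. Choosing these values is analysis the application itself
# never had to do -- the extra work Sussman is pointing at.
cases = {
    "invalid": -2.0,
    "light": 0.5,
    "light/medium boundary": 1.0,
    "medium": 5.0,
    "medium/heavy boundary": 10.0,
    "heavy": 50.0,
}
```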

*********************


Testing the Limits With Michael Bolton – Part II

In Part II of our Testing the Limits interview with Michael Bolton, we get his opinions on the corporate fear of Rapid Software Testing; the challenges of coaching testers; the similarities between testing and sex; why programmers should learn to test; writing about testing; negotiating with the other Michael Bolton and more. Did you miss Part I? Then you can find it here.

uTest: It seems that your primary audience is fairly open to the principles of RST. But for those who aren’t, what are their main objections? Have the skeptics made you rethink or refine any aspect of RST?

MB: The primary objection appears to be that people are reluctant to give up their safety blankets. They’re frightened of reducing bureaucracy and paperwork, because those things appear to make testing work more legible. But that’s an illusion: bureaucracy and paperwork make illusions about testing work more legible. (Have a look at James C. Scott’s Seeing Like a State for a wonderful description of legibility and how the neo-Platonists and the high modernists get it wrong for everyone except, sometimes, for themselves.) People seem frightened to invest in skill, because even though skillful work can be very well documented, it doesn’t necessarily leave a familiar kind of paper trail.

People seem frightened of telling a different kind of testing story in a different way, because managers have become used to a certain kind of test reporting. I think most companies are frightened of finding out what’s really going on in a product or a project, because the truth would be pretty horrifying in a lot of cases. People are frightened of acknowledging the role of skill and tacit knowledge in technical work, because they’re frightened of having to replace people who leave. The assumption there is that a new hire learns only through documentation, but that’s clearly false; we learn through social interactions, through interaction with our products, and by doing meaningful work for which we’re held responsible.

One more thing:  many people whose jobs are titled “quality assurance” seem unwilling to give up their perception of their own authority. People want to be “influential”, to “own quality”, to “be the gatekeeper”, to “speak for the customer”.  I don’t have authority over the project unless I’m the project manager.  As a tester, I don’t have that authority, nor can I claim to speak for the customer more credibly than anyone else. I’ve met testers who believe that it’s their prerogative to tell programmers what to do or how to do it.  I recommend that such testers reflect on how they feel when they’re told what to do by people who’ve never done testing work.

As for what we’ve done to rethink or refine things, that’s a continuous process. James and I are currently working on a few important threads. I learned a great lesson from Jerry Weinberg in his Problem Solving Leadership workshop a few years back. Someone accounted for a less-than-successful attempt at problem solving by saying “The complexity of the problem screwed us up.” Jerry peered over the top of his glasses and replied, “Your reaction to the complexity of the problem screwed you up.” So in the delivery of the class, we’re trying to focus on helping testers to see the simplicity behind complex situations, so that they can be more confident in their ability to deal with them. On the other hand, we’re also focusing on helping testers to see the complexity behind apparently simple situations, so that they have the wariness they need to avoid being fooled too easily. We’re also working on showing how to use Rapid Testing approaches in highly formalized or highly regulated environments, pointing to our experiences in financial and medical contexts. Excellent formal testing (which is often confirmatory or demonstrative) begins with excellent informal testing. Consistent with that, we’re trying to make people aware that the bulk of their work is exploratory in nature, even though they might not have noticed it. “Formal testing”, which often takes the form of dog-and-pony shows for regulators or auditors, is one thing. Testing to make sure that people don’t die, or lose a fortune, or pile on some horrible risk, is another. Excellence in the former kind of testing depends on excellence in the latter.

uTest: If the Context-Driven School of Testing were a traditional school, there’d be a lot of students on the waiting list. In other words, the community is growing rapidly every day. What’s surprised you the most about the emergence of the Context-Driven community thus far?


Testing the Limits With Michael Bolton – Part I

Our Testing the Limits “reunion tour” rolls on this month with Michael Bolton, back for another lively session of Q&A. Michael is best known as the founder of DevelopSense, his Toronto-based testing consulting firm, and as a leading figure in Rapid Testing and the Context-Driven school of testing. In short, he’s one of the industry’s most highly regarded writers, speakers and teachers – and it’s a real pleasure to have him back. For more on Michael, be sure to check out his website, blog or follow him on Twitter.

In Part I of our healthy two-part interview, we get his thoughts on test cases not being related to testing; the sub-par debate skills of testers; the quality chain of command; objections to Rapid Testing and much more. Be sure to check back tomorrow for Part II. Enjoy!

uTest: It’s been almost two years since our last interview. Where does the time go? We’ve followed you pretty closely during that time (on Twitter, don’t worry), but for those who haven’t, what have they missed? New publications? New courses? New ideas on testing? What’s new with Michael Bolton?

MB: I’ve been traveling like crazy this year, and I’m booked pretty heavily through the end of the year. I’m beginning to set up my schedule for next year—so if people would like to schedule an in-house class, now is a great time to ask. As for new publications: How to Reduce the Cost of Testing, a new book edited by Matt Heusser and Govind Kulkarni, has just been released. I’m pleased to say that I’ve got a chapter in it, alongside a number of other members of our community.

I don’t specialize in new ideas in testing so much, but rather in refining and reframing ideas we’ve had for years in more specific and, I hope, more useful ways. The other thing that I love to do is to bring ideas from elsewhere into testing.  Currently I’m fascinated by the work of Harry Collins, who studies the sociology of science and the ways in which people develop knowledge and skill. Tacit and Explicit Knowledge is his most recent book; The Shape of Actions is older.  I’m most interested in the idea of repair, which is Collins’ notion for the ways in which people fix up information as they prepare to send it, or as they receive and interpret it.

As an example, I’m 5’ 8” tall.  If I ask you how tall I am in centimeters (and provide you with the ratio of 2.54 centimeters to the inch), you’ll probably do a little math in your head to translate 5’ 8” into 68 inches.  If you do that, it’s because you have tacit knowledge that a foot is 12 inches, and it’s quicker to do five times 12 in your head and add eight than to work it out on the calculator.  Then you’ll report that I’m 173 centimeters (or 172), rather than what the calculator tells you:  172.72.  If you round the answer up or down to a whole centimeter, it’s because you have tacit knowledge that the extra precision is useless when my height changes more than that with every breath. The calculator doesn’t know that, but people often fix up the interaction with the tool, applying that kind of tacit knowledge without noticing that they’re doing it.  Collins argues that we give calculators and computers and machines more credit than they deserve when we ascribe intelligence or knowledge to them, even when we do it casually or informally.
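
Spelled out as a trivial sketch (just to show where the repair happens): the multiplication is the calculator’s part; the rounding, and the knowledge that rounding is appropriate, are the human’s.

```python
feet, inches = 5, 8
total_inches = feet * 12 + inches   # tacit knowledge: 12 inches to the foot
cm = total_inches * 2.54            # what the calculator reports

print(cm)          # 172.72 -- the machine's answer
print(round(cm))   # 173    -- the human's "repair" of the machine's answer
```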

My latest hobby horse is definitely not new, but I’d like to have a go at it anyway. I’d like to skewer the idea of the test case having any serious relationship to testing. Test cases are typically examples of what the product should do. That’s important; we often need examples to help explicate requirements and desires. But examples are not tests, so I’d like to call those artifacts example cases or examples rather than test cases. They’re confirmatory, not exploratory; checks, not tests. Brian Marick has written a lot about examples; Matt Heusser has too; so has Gojko Adzic. James Bach has been railing about test cases for a long time. Often test cases are overly elaborate and expensive to prepare and maintain. They’d be even more expensive if testers didn’t repair them on the fly, inserting subtle variations and making observations that the test case doesn’t specify. Just as Collins suggests about machines, test cases get more credit than they deserve. As Pradeep Soundarajan would say, the test case doesn’t find the bug; the tester finds the bug, and the test case has a role in that. Now: the development of checks and the interpretation of checks—those things require all kinds of sapience and skill.
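
A small, hypothetical illustration of the distinction (the function is invented for the example): the artifact below is an example case. It can confirm the one behaviour it spells out, and nothing more; every variation, observation, and judgment beyond the scripted assertion comes from the tester.

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return price - price * percent / 100


# An example case: it demonstrates one specified behaviour...
assert apply_discount(100.0, 10.0) == 90.0

# ...and is silent about negative prices, percentages over 100,
# currency rounding, and so on. Noticing those questions, and deciding
# which ones matter, is testing; the assertion above is a check.
```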

A test, to me, is an investigation, not a bit of input and output for a function. Yet people tend to think of testing in terms of test cases. Even worse, people count test cases; and even worse than that, they count passing and failing test cases to measure the completeness of project work or testing work. It’s like evaluating the quality of a newspaper by counting the number of stories in it, without reference to the content, the quality of the writing, the quality of the investigation, the relevance of the report, whether a given article contains one story or a dozen, and so forth. Counting stories would be a ludicrous way of measuring either the quality of the newspaper or the state of the world. Yet, it seems to me, many development and testing organizations try to observe and evaluate testing in this completely shallow and ridiculous way. They do that because they seem to think about things in terms of units of production. Learning, discoveries, threats to value, management responses… none of these things are widgets. They’re not things, either, for that matter.

uTest: In a recent blog post, you wrote about the inability of some testers to properly frame tests, mainly because they haven’t been asked to. Generally speaking, what other qualities or skills do you find testers to be lacking in?
