Testing Roundtable: What’s the Biggest Weakness in the Way Companies Test?

This month, in place of our standard Testing the Limits interview, we decided to hit up a few of our past guests for a “testing roundtable” discussion. The topic: What is the biggest weakness in the way companies test software? Below are some extremely insightful answers from testing experts Michael Bolton, James Bach, Noah Sussman, Dan Bartow, Rex Black, Jim Sivak and Cem Kaner. Enjoy!

*********************

Michael Bolton, Principal at DevelopSense:

So far as I can tell, most companies treat software development as implementation of highly idealized business processes, and they treat testing as an exercise in showing that the software models those processes in a way that’s technically correct. At the same time, companies treat the people who use the software as an abstraction. The consequence is that we’re creating software that delays and frustrates the people who use it or are affected by it. When testing is focused almost entirely on checking the functions in the software, we miss enormous opportunities to learn about the real problems that people encounter as they go about their business. Why are testers so often isolated from actual end-users?

Today I was traveling through the airport. When I checked in using the online service, I had accidentally noted that I'd be checking two bags, but I only brought one with me. In addition, my flight was cancelled, and I had to be put on a later flight. The customer service representative could get me onto that flight, but she had serious trouble printing a boarding pass associated with only one bag; apparently there was a warning message that couldn't be dismissed, such that her choices were to accept either three bags or none at all. It took fifteen minutes and two other representatives to figure out how to work around the problem. What's worse is that the woman who was trying to help me apologized for not being able to figure it out, as if it were her responsibility. Software development organizations have managed to convince customers that the bugs, and the unforgiving and unhelpful designs, are somehow their fault.

The success of a software product is only partly based on how it handles the happy path. That’s relatively easy to develop, and it’s relatively easy to check. Real testing, to me, should be based on investigating how the software allows people to deal with what we call “exceptions” or “corner cases”. That’s what we call them, but if we bothered to look, we’d find out that they were a lot more common than we realize; routine, even. Part of my vision of testing is to include a new discipline in which we do significant field research and participant observation. Instead of occasionally inviting customers to the lab (never mind sitting in the lab all by ourselves), we testers—and our organizations—could learn a lot through direct interaction with people who use the software every day; by close collaboration with technical support; and by testing rich and complex scenarios that are a lot closer to real life than simplified, idealized use cases.

*********************

James Bach, Author and Consultant, Satisfice:

There is a cluster of issues that each might qualify as the biggest weakness. I’ll pick one of those issues: chronic lack of skill, coupled with the chronic lack of any system for acquiring skill.

Pretty good testing is easy to do (that's partly why some people like to say "testing is dead": they think testing isn't needed as a special focus, because anyone can find at least some bugs some of the time).

Excellent testing is quite *hard* to do.

Yet as I travel all over the world, teaching testing and consulting in testing organizations, I see the same pattern almost *everywhere*: testing groups who have but a vague, wispy idea of what they are trying to do; experienced testers who barely read about their craft and don't systematically practice it beyond the minimum needed to keep their employers from firing them; testers whose practice is dominated by the irrational and ignorant demands of their management, because those testers have done nothing to develop their own credibility; programmers who think their automated checks will save them from disaster in the field.

How does one learn to test? You can't get an undergraduate degree in testing. I know of two people who have a PhD in testing: one of whom I admire (Meeta Prakash); the other is, in my view, an active danger to himself and the craft. I personally know, by name, about 150 testers who are systematically and diligently improving their skills. There are probably another several hundred I've met over the years and lost touch with. About three thousand people regularly read my blog, so maybe there are a lot of lurkers. A relative handful of the people I know are part of a program of study/mentoring that is sanctioned by their employers. I know of two large companies that are attempting to systematically implement the Rapid Testing methodology, which is organized around skill development rather than memorizing vocabulary words and templates. Most testers are doing it independently, however, or even in defiance of their employers.

Yes, there are TMap, TPI, ISTQB, ISEB, and many proprietary testing methodologies out there. I see them as crystallized blobs of uncritical folklore; confused thinking about testing frozen in place like fossilized tree sap. These models and procedures have been created by consultants and consulting companies to justify themselves. They neither promote nor require skill. They promote what I call "ceremonial software testing" rather than systematic critical thinking about complex technology.

Just about the best thing a tester can do to begin to develop testing skill in a big way is not to read or study any test methodology. Ignore vocabulary words. Toss aside templates. No, what that tester should do is read An Introduction to General Systems Thinking, by Gerald M. Weinberg. Read it all the way through. Read it, young tester, and feel your mind get blown. Read it, and meditate on its messages, and do the exercises it recommends, and you will find yourself on a new path to testing excellence.

*********************

Noah Sussman, Technical Lead, Etsy:

A surprising number of organizations seem to dramatically underestimate the costs of software testing.

Testability is a feature and tests are a second feature. Having tests depends on the testability of an application. Thus, “testing” entails the implementation and maintenance of two separate but dependent application features. It makes sense then that testing should be difficult and expensive. Yet many enterprise testing efforts do not seem to take into account the fact that testing an application incurs the cost of adding two new, non-trivial features to that application.
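
To make that concrete, here is a minimal sketch of what building testability into an application can look like. The example is ours, not Sussman's, and the function names and URL are invented: the first version is hard to test because it hides its dependencies on the network and the system clock; the second accepts those dependencies as parameters, which is extra application code that exists purely so that tests can.

    # Hypothetical illustration (not from the article) of "testability is a
    # feature you have to build": the same behavior, written twice.
    import time
    import urllib.request

    # Version 1: hard to test. The network call and the clock are hidden inside
    # the function body, so any test has to hit a live server.
    def fetch_exchange_rate_v1():
        data = urllib.request.urlopen("https://example.com/rates/usd-eur").read()
        return float(data.decode()), time.time()

    # Version 2: testable. The dependencies are injectable parameters, which is
    # additional application code written so that tests can supply fakes.
    def fetch_exchange_rate_v2(fetch=None, clock=None):
        if fetch is None:
            fetch = lambda: urllib.request.urlopen(
                "https://example.com/rates/usd-eur").read()
        if clock is None:
            clock = time.time
        return float(fetch().decode()), clock()

    # A test can now run offline by passing in fakes:
    rate, timestamp = fetch_exchange_rate_v2(fetch=lambda: b"1.08", clock=lambda: 0)
    assert (rate, timestamp) == (1.08, 0)

The second version has to be designed, written and maintained like any other feature, which is the sense in which testability carries a real, ongoing cost.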

There also seems to be a widespread misconception that testing somehow makes application development easier. In fact the opposite is true.

If I may mangle Kernighan: testing is much more difficult than writing the code in the first place. To implement testability and then write tests, one needs first to understand the architecture of the application under test. But testing also requires doing hard things — like input partitioning and path reduction — that are beyond the scope of the application. The reality is that to get good tests, you’re going to have to ask some of your best people to work on the problem (instead of having them work on user-facing application features). Yet many organizations seem not yet to have recognized this.
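
As a rough illustration of what input partitioning involves (again our own sketch, built around an invented discount rule rather than anything from the original post), the idea is to divide the input space into equivalence classes and test one representative from each, plus the boundaries between them, instead of attempting every possible value:

    # Hypothetical illustration (not from the article) of input partitioning:
    # rather than testing every possible age, pick one representative per
    # equivalence class plus the boundary values where bugs tend to cluster.

    def discount_rate(age):
        """Invented pricing rule: children and seniors get reduced rates."""
        if age < 0:
            raise ValueError("age cannot be negative")
        if age < 13:
            return 0.50   # child partition
        if age < 65:
            return 0.00   # adult partition
        return 0.30       # senior partition

    # Representatives and boundaries for each partition, with expected results.
    cases = [
        (-1, ValueError),            # invalid-input partition
        (0, 0.50), (12, 0.50),       # child partition and its upper boundary
        (13, 0.00), (64, 0.00),      # adult partition and its boundaries
        (65, 0.30), (90, 0.30),      # senior partition
    ]

    for age, expected in cases:
        try:
            actual = discount_rate(age)
        except ValueError:
            actual = ValueError
        status = "ok" if actual == expected else f"FAIL (got {actual})"
        print(f"age={age}: {status}")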

*********************

Continue Reading

Who Will Be This Year’s Software Test Luminary?

Luminary: a person who has attained eminence in his or her field or is an inspiration to others

You’d be hard-pressed to find a profession with a wider range of ideas and personalities than that of software testing. This point is certainly not lost on our readers, as evidenced by the popularity of our Testing the Limits interview series. And it’s not lost on our good friends at Software Test Professionals, who have opened up nominations for the 2nd Annual Software Test Luminary Award.

More on the nomination process in a second, but first, a little bit about the award itself:

The Luminary award will honor any software testing and quality assurance professional who is determined, persistent, and committed to improving a process or methodology. They develop ideas which, when properly applied, have a positive impact on the end product, either by enhancing quality or performance or by improving efficiencies for a particular process, team or organization. In addition, their contributions elevate the critical role of the software test profession within the software development process.

A luminary is someone who has inspired others by their actions and the results of those actions on the profession. They inspire others to pursue a software testing career. It is about how they have given back, and shared their knowledge and experience with others in order to advance the profession and improve the career paths of all practitioners. A luminary will typically be recognized and respected long after their days of practicing have ended.

If you recall, last year's honor went to Gerald M. Weinberg, who edged out fellow nominees James Bach and Cem Kaner.

So who will be named this year's Software Test Luminary? It's your call. STP will gather nominations and submit the top 3 candidates for a final round of voting. The winner will be announced at the Software Test Professionals Fall 2011 Conference, October 24-27 in Dallas, Texas.

Here’s a quick timeline of the events:

Continue Reading

Software Testing Classics: Bug Advocacy by Cem Kaner

Last week, I decided to go back in time to revisit a classic work of software testing theory by James Bach, on the subject of risk-based software testing. What I tried to show was that despite tremendous advances in terms of tools, techniques and technology, the fundamentals of software testing essentially remain the same. I hope that was conveyed in the post.

Anyway, in that same spirit, I’m going to quickly bring back another classic work of software testing theory for debate and discussion: Bug Advocacy by Dr. Cem Kaner.

If that name sounds familiar, it’s because Cem Kaner is one of the field’s leading thinkers, teachers and practitioners. He’s also appeared as a guest on our Testing the Limits interview series, which you can find here, here and here.

So what is "Bug Advocacy" all about? The title pretty much says it all (I hope), but let's take a look at some key excerpts from this 100-page masterpiece to find out more. First, the premise:

  1. The point of testing is to find bugs.
  2. Bug reports are your primary work product. This is what people outside of the testing group will most notice and most remember of your work.
  3. The best tester isn’t the one who finds the most bugs or who embarrasses the most programmers. The best tester is the one who gets the most bugs fixed.
  4. Programmers operate under time constraints and competing priorities. For example, outside of the 8-hour workday, some programmers prefer sleeping and watching Star Wars to fixing bugs.

A bug report is a tool that you use to sell the programmer on the idea of spending her time and energy to fix a bug.

Motivating the Bug Fixer
Some things that will often make programmers want to fix the bug:

The “Jedi Knights” of Context-Driven Software Testing

The first rule of Fight Club is: you do not talk about Fight Club. Lucky for us, that rule does not apply to the Context-Driven School of Software Testing.

In case you hadn't noticed, the context-driven school has amassed a global following in just a few short years, despite some initial confusion on the part of newbies…What is a context-driven tester? What is the basic premise? How is it different from the other prominent "schools" of testing? And what does one have to do to become a member?

James Bach – the founding father of CDT – posted a great overview of the principles this past weekend in an article titled “The Dual Nature Of Context-Driven Testing.” He offers some key distinctions on what the term means, what it doesn’t mean, and how you can grow as a tester by learning more about its principles. Here are a few important excerpts (emphasis mine), beginning with an abridged definition:

The Context-Driven School of software testing is a way of thinking about testing, AND a small but world-wide community of like-minded testers. There are other, larger, schools of testing thought. But CDT represents my paradigm of testing. By paradigm, I mean an organizing worldview, an ontology, a set of fundamental beliefs.

CDT is not a style of testing. It’s not a toolbox of methods. It’s more fundamental than that. You could think of  CDT partly as an ethical position about testing. All methods or styles are available to Context-Driven people, but our selection of methods and reactions to testing situations are conditioned by our ethical position. This position is defined here.

Reading further, it occurred to me that the context-driven school is well-represented on the uTest Blog. To illustrate this alliance, I’ve included links to the names of those “Jedi Knights” who have made contributions on this site. Here’s the excerpt:

Continue Reading

Picture Quiz: Should You Become A Software Tester?

We write frequently on the subject of what it takes to become a top tester – in both the uTest community and the industry as a whole. We ask the testing giants their thoughts on the matter (see quotes below) and publish guest posts and Crash Courses in an effort to help you become a better software tester.  Please hold your applause.

But what if software testing isn’t for you? What if after all the education, training and job-searching, you discovered that you really had no knack for the craft? Wouldn’t it have been nice to know that a little sooner? Lucky for you, I’ve designed this picture quiz as a humorous supplement to the Jung Career Indicator Test. Here’s how it works: If you don’t see anything wrong with these photos, then software testing is definitely NOT for you. Far from scientific, but hey, it’s a start.

"Almost all of the best people I know in testing have significant experience in other fields. It's common for people to move from testing to programming or writing or marketing and then back, bringing what they've learned with them, to test with a richer perspective and with a much more productive vision of where testing can fit within development/marketing/support cycles." – Cem Kaner

Continue Reading

10 Things Software Testers Should Never Say

Just as you wouldn’t want your surgeon to say “Boy! I haven’t done this in a long, long time”, here are ten things you wouldn’t want to hear from your software testers:

1. “It’s bug-free, I guarantee it.”

2. “No one uses Firefox anyway.”

3. “Cem Kaner and James Bach don’t know what they’re talking about.”

4. “Can it wait? I’m playing Farmville.”

5. “It works on my computer, so it must be okay.”

6. “I just posted our security bugs on Twitter.”

Continue Reading

Trading Places: 8 Alternate Careers For Software Testers

We often ask our Testing the Limits guests what they would do in a world with no need for software testers. So far, answers have included mandolin player, pilot, stand-up comedian, sports announcer, werewolf hunter and other typical trades. This got us to thinking: "What other careers would software testers be good at?"

Not that we’re encouraging you to leave the testing profession, but if you absolutely had to, here are a few options for you to consider:

1. Software developer / engineer: Aside from werewolf hunter, this is probably the most obvious career alternative, as great testers will eventually acquire the skills and understanding needed to succeed as a developer. But as a blog reader recently brought to my attention, this works both ways. He said that at his former place of employment, developers aspire to be testers, NOT the other way around. He writes, ” the Tester/QA path is the destination/pinnacle of the career path in SW development. You start out as a Jr. Programmer then… Sr. Programmer then… eventually Architect/System Designer…then…you eventually make it to Testing. Their thinking was that you can’t adequately test until you have proper understanding of the development process. In other words, you are truly considered an expert by the time you get to that level.”

2. Detective: Much like a detective's, a tester's bug-hunting prowess will depend largely on intuition – i.e. knowing the right questions to ask and having a sixth sense for irregularities. Testers already possess the other traits found in successful detectives, including sound logic, analytical skills and patience. The only things they're missing are an assistant named Watson and a trench coat. Both are available on Craigslist.

3. Journalist: There's a very thin line between a tester and an investigative reporter. Like their journalistic counterparts, QA professionals must ask tough questions, dive deep into complex issues and report them to the layman in a clear, concise and objective manner. The hours stink and the pay is terrible, but then, every job has some downside.

Continue Reading

Testing the Limits with James Bach – Part II

In part II of our Testing the Limits interview with James Bach, we get his thoughts on the latest and greatest testing tools; why testers need to stop faking it; the misguided assumption that the web is dead; his collaboration with Cem Kaner; his upcoming speaking tour and more. If you missed the first half of the conversation, you can find it here. Enjoy!

*********

uTest: We’re always on the lookout for new testing tools and technologies. Have any particular ones caught your eye in 2010?

JB: Yes, it's called Rapid Reporter. It seems to be a wonderfully lightweight tool for keeping session-based test notes. I have also fallen in love with Dia and Inkscape, both free, portable diagramming tools. I use PDFTK for manipulating PDFs and ImageMagick for images.

I have recently benefited from the tools at Random.org and the statistical software freely available at NIST.gov.

I’m also a huge fan of Dropbox – the only tool I’ve mentioned that costs money. So far it’s been money well spent.

uTest: In the interest of raising software quality – If you could tattoo one slogan on the forehead of every test lead, what would it be? How about for tech execs (CTOs, VPs of Engineering, et al)? Asked differently, what is the one quality-related rule you wish all tech executives would follow without exception?

JB: If I could, I wouldn’t. The surest way to make test leads hate me would be to use my vast and puzzling powers to mess with their heads.

But one command I would like to give all of them is “stop faking it.”

So much of the trouble I see in projects comes from people desperately faking their work. Many testers will allow themselves to be intimidated into doing things they know are wrong. They often justify it as “pleasing the customer” or “being practical”, but I don’t buy it. It’s not practical to allow your management to blunder off a cliff without even trying to warn them. It’s not customer satisfaction to push them off the cliff, even if they ask you to. When they get to the bottom, they won’t be satisfied anymore, and they may well come gunning for you.

I recently encountered a 157-page test procedure that had maybe five pages of actual useful information in it. The tester who wrote it told me that's what the client wanted. You know what I did when I inherited it? I deleted most of it (about 145 pages) and began a slow rewrite. My client doesn't like hearing that I'm still tinkering with it, but he does like the lucid and logical text, the diagrams, the equations that are actually correct and meaningful, etc. He seems to prefer a good piece of work to a faked test procedure.

Continue Reading

Vote for This Year’s Software Testing Luminary

The good folks over at Software Test Professionals want to remind you about a very important election this Fall. No, we’re not talking about the U.S. Congress. And no, we’re not referring to American Idol either (at least not in this post).  Instead, we’re talking about something lasting and meaningful: the 1st Annual Luminary Award.

As described on their award page, this honor will “recognize a person in the software testing and quality community, who inspires others and dedicates their career to industry advancement.” The organizers were looking for someone who has dedicated their career to the betterment of software testing and quality; who has shown exceptional leadership and who has educated, promoted and published on behalf of the industry. In other words, a software testing luminary.

With those criteria in mind, we're not surprised to see Cem Kaner, James Bach and Jerry Weinberg as this year's finalists. You may know Kaner and Bach from our recent Testing the Limits interviews (Jerry, if you're reading this, we'd love to have you as a guest as well). But in case you're unfamiliar with these testing giants, here are clips from their award bios:

Continue Reading

Testing the Limits with Cem Kaner – Part III

In part III of our Testing the Limits interview with Cem Kaner, we discuss why "best practices" is merely a marketing tool; Silicon Valley of yesteryear; his upcoming CAST lecture on investment model and exploratory test automation; the blogs he reads and much more. To recap, here's part I and part II.

uTest: What’s surprised you the most about the testing industry since you’ve been in the game?

Kaner: Context-dependence was a big surprise to me. It took me about ten years before I reluctantly accepted the idea that my favorite test techniques, attitudes, and life-cycle models were appropriate in situations similar to the ones where I developed my preferences, but not so appropriate in situations that were quite different.

It took me a long time to learn that developing software under a well-specified contract is a common and respectable activity that requires different tradeoffs from mass-market software sold to people who have no say in its design and buy only after it is ready to deliver and only if they like it. It took me a long time to sort out differences between scientific programming (where I started) and consumer software development. It took me even longer to see that testers working in an independent test lab operate under fundamentally different constraints from testers working inside the development company and provide different types of information. And it took well over a decade before I accepted the idea that two different development managers, running essentially the same project, could legitimately want different information from their testers and that it could make sense and be ethical for the two different test groups to structure their work differently, finding or being blind to or ignoring different classes of bugs, in order to satisfy the information needs of their key stakeholders.

The last major event in this chain happened 14 years and two books after I came to Silicon Valley. A client paid me to tour a lot of companies in California, Oregon and Washington. I gave a talk at each place, but I also talked with them about their business models, their testing challenges and methods.

Most of these were good companies with competent testing staff but they did things very differently from each other, often very differently from what I (in my ignorance) would have recommended, but in ways that addressed the risks they were trying to manage. It was already my practice to try to understand what worked at a client site, and why it worked, rather than to evaluate what they were doing against my prejudged ideas. But the diversity in this series of clients was overwhelming. It caused me to abandon many of my favorite ideas about development and testing—not as bad ideas, but as good ones only under suitable circumstances.

uTest: You’re shaping the minds of future testers… so how do you “future-proof” your teachings?  And what major changes do you think will impact software testing by the end of this decade?

Kaner: What I DON’T do is try to slow the field down.

I don’t pretend that there are One True Definitions for any testing terms, because different groups of testers see the craft differently and they use their language accordingly. If two testers have different ideas about the purpose and goals of testing, they are likely to have different meanings, or at least different nuances, in their definitions of “test case.” I don’t go to standards boards to try to legislate my favorite meaning as the official one. This effort to lock down a field that is still in motion, still finding itself, will primarily benefit people selling certification courses. In terms of helping their students prepare for the future, I think that this gives an illusion of certainty and uniformity to people who should be training to embrace questionability and diversity.

Continue Reading