This month, in place of our standard Testing the Limits interview, we decided to hit up a few of our past guests for a “testing roundtable” discussion. The topic: What is the biggest weakness in the way companies test software? Below are some extremely insightful answers from testing experts Michael Bolton, James Bach, Noah Sussman, Dan Bartow, Rex Black, Jim Sivak and Cem Kaner. Enjoy!
Michael Bolton, Principal at DevelopSense:
So far as I can tell, most companies treat software development as implementation of highly idealized business processes, and they treat testing as an exercise in showing that the software models those processes in a way that’s technically correct. At the same time, companies treat the people who use the software as an abstraction. The consequence is that we’re creating software that delays and frustrates the people who use it or are affected by it. When testing is focused almost entirely on checking the functions in the software, we miss enormous opportunities to learn about the real problems that people encounter as they go about their business. Why are testers so often isolated from actual end-users?
Today I was traveling through the airport. When I checked in using the online service, I had accidentally noted that I’d be checking two bags, but I only brought one with me. In addition, my flight was cancelled, and I had to be put on a later flight. The customer service representative could get me onto that flight, but she had serious trouble printing a boarding pass associated with only one bag; apparently there was a warning message that couldn’t be dismissed, such that her choices were to accept either three bags or none at all. It took fifteen minutes and two other representatives to figure out how to work around the problem. What’s worse is that the woman who was trying to help me apologized for not being able to figure it out, as if it were her responsibility. Software development organizations have managed to convince our customers that they’re responsible for bugs and for unforgiving, unhelpful designs.
The success of a software product is only partly based on how it handles the happy path. That’s relatively easy to develop, and it’s relatively easy to check. Real testing, to me, should be based on investigating how the software allows people to deal with what we call “exceptions” or “corner cases”. That’s what we call them, but if we bothered to look, we’d find out that they were a lot more common than we realize; routine, even. Part of my vision of testing is to include a new discipline in which we do significant field research and participant observation. Instead of occasionally inviting customers to the lab (never mind sitting in the lab all by ourselves), we testers—and our organizations—could learn a lot through direct interaction with people who use the software every day; by close collaboration with technical support; and by testing rich and complex scenarios that are a lot closer to real life than simplified, idealized use cases.
James Bach, Author and Consultant, Satisfice:
There is a cluster of issues that each might qualify as the biggest weakness. I’ll pick one of those issues: chronic lack of skill, coupled with the chronic lack of any system for acquiring skill.
Pretty good testing is easy to do. (That’s partly why some people like to say “testing is dead”: they think testing isn’t needed as a special focus, because they note that anyone can find at least some bugs some of the time.)
Excellent testing is quite *hard* to do.
Yet as I travel all over the world, teaching testing and consulting in testing organizations, I see the same pattern almost *everywhere*: testing groups who have but a vague, wispy idea what they are trying to do; experienced testers who barely read about and don’t systematically practice their craft beyond the minimum needed to keep their employers from firing them; testers whose practice is dominated by irrational and ignorant demands of their management, because those testers have done nothing to develop their own credibility; programmers who think their automated checks will save them from disaster in the field.
How does one learn to test? You can’t get an undergraduate degree in testing. I know of two people who have a PhD in testing, one of whom I admire (Meeta Prakash); the other is, in my view, an active danger to himself and the craft. I personally know, by name, about 150 testers who are systematically and diligently improving their skills. There are probably another several hundred I’ve met over the years and lost touch with. About three thousand people regularly read my blog, so maybe there are a lot of lurkers. A relative handful of the people I know are part of a program of study/mentoring that is sanctioned by their employers. I know of two large companies that are attempting to systematically implement the Rapid Testing methodology, which is organized around skill development rather than memorizing vocabulary words and templates. Most testers are doing it independently, however, or even in defiance of their employers.
Yes, there are TMap, TPI, ISTQB, ISEB, and many proprietary testing methodologies out there. I see them as crystallized blobs of uncritical folklore; confused thinking about testing frozen in place like fossilized tree sap. These models and procedures have been created by consultants and consulting companies to justify themselves. They neither promote nor require skill. They promote what I call “ceremonial software testing” rather than systematic critical thinking about complex technology.
Just about the best thing a tester can do to begin to develop testing skill in a big way is not to read or study any test methodology. Ignore vocabulary words. Toss aside templates. No, what that tester should do is read An Introduction to General Systems Thinking, by Gerald M. Weinberg. Read it all the way through. Read it, young tester, and feel your mind get blown. Read it, and meditate on its messages, and do the exercises it recommends, and you will find yourself on a new path to testing excellence.
Noah Sussman, Technical Lead, Etsy:
A surprising number of organizations seem to dramatically underestimate the costs of software testing.
Testability is a feature, and the tests themselves are a second feature. Having tests depends on the testability of the application. Thus, “testing” entails the implementation and maintenance of two separate but dependent application features. It makes sense, then, that testing should be difficult and expensive. Yet many enterprise testing efforts do not seem to take into account the fact that testing an application incurs the cost of adding two new, non-trivial features to that application.
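The idea that testability is a feature in its own right can be sketched in a few lines of code. The `greet` functions below are hypothetical, invented purely for illustration; the point is that the second version exists only so the behavior can be pinned down in a test:

```python
import datetime

# A minimal sketch of "testability is a feature". The first version
# hard-codes a dependency on the system clock, so a test of it would
# give different results depending on when it runs. The second takes
# the hour as a parameter -- a small design change made purely to
# support testing, i.e., a feature that had to be built.

def greet() -> str:
    hour = datetime.datetime.now().hour  # untestable: varies with wall clock
    return "Good morning" if hour < 12 else "Good afternoon"

def greet_testable(hour: int) -> str:
    return "Good morning" if hour < 12 else "Good afternoon"

# With the dependency injected, checks are deterministic:
assert greet_testable(9) == "Good morning"
assert greet_testable(15) == "Good afternoon"
```

Multiplied across a real application, this kind of restructuring is exactly the non-trivial extra feature Sussman describes.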
There also seems to be a widespread misconception that testing somehow makes application development easier. In fact the opposite is true.
If I may mangle Kernighan: testing is much more difficult than writing the code in the first place. To implement testability and then write tests, one first needs to understand the architecture of the application under test. But testing also requires doing hard things — like input partitioning and path reduction — that are beyond the scope of the application itself. The reality is that to get good tests, you’re going to have to ask some of your best people to work on the problem (instead of having them work on user-facing application features). Yet many organizations do not yet seem to have recognized this.
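Input partitioning, one of the hard things mentioned above, can be made concrete with a small sketch. The `classify_age` function and its boundaries below are hypothetical, invented purely for illustration:

```python
# A minimal sketch of input partitioning: instead of testing every
# possible value, divide the input domain into equivalence classes
# and test one representative of each, plus the boundaries where
# behavior changes. The function under test is hypothetical.

def classify_age(age: int) -> str:
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

# One pair of boundary representatives per partition -- a tiny
# fraction of the full input space, chosen by analysis of the code.
partitions = {
    "minor": [0, 17],     # edges of the "minor" class
    "adult": [18, 64],    # edges of the "adult" class
    "senior": [65, 120],  # lower edge plus a large value
}

def run_partition_checks() -> bool:
    for expected, representatives in partitions.items():
        for value in representatives:
            assert classify_age(value) == expected, (value, expected)
    return True
```

Choosing those partitions well requires understanding the code's internal boundaries, which is part of why this work belongs with strong engineers.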
Dan Bartow, VP of Product Management, SOASTA:
In my humble opinion, lack of speed is the biggest weakness in the way companies are currently testing software. It’s a serious problem that has plagued testing since the beginning of software, and it’s becoming more of a pain point than ever. Most companies have slow, out-of-date processes and cumbersome testing tools that prevent testing from keeping up with, or even staying ahead of, the SDLC — which is where it needs to be. Initially I thought I might answer with “companies are not doing enough testing,” which is true. However, I think the real reason companies are not doing enough testing is that the current widely accepted methods for testing are too slow.
Not long ago, automation was the hottest advance in testing. But just as automation started to catch up with agile software development, the SDLC surged ahead faster than ever, with new advances in continuous integration, train releases, and so forth. I believe that speed must be a key focus in testing over the next few years if it is to remain viable as a critical and valuable component of software operations.
Rex Black, President, RBCS:
I’m not sure I can pick just one as the “biggest weakness.” However, from a test management perspective, a major weakness is the paucity of good metrics in common use, and the weakness of the metrics that are used. This is especially true for process metrics, where people are trying to understand the effectiveness, efficiency, and satisfaction associated with their current test process. It’s also true of project and product metrics, especially the types of test metrics that are reported in project status meetings. Some people gather and report too many metrics (often on the wrong things, given the audience). Some people misunderstand what their metrics mean. Some people jump to conclusions about metrics.
This is unfortunate, because we’ve found with our clients that it is possible to define and implement meaningful, helpful metrics. I’m currently working with a client to define service level agreement metrics for a large test outsourcing request for proposal that they are assembling, while working with another client to define the key process indicators for their integrated testing strategy. Metrics definition is challenging work, but it is worthwhile. When you manage with facts and data, you can make smarter decisions.
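One process metric of the kind Black describes is defect detection percentage: the share of all known defects that testing found before release. A minimal sketch, using invented illustration data rather than figures from any real project:

```python
# Defect detection percentage (DDP): of all defects eventually known,
# what fraction did testing catch before release? A simple but
# meaningful test-process metric. The counts below are invented.

def defect_detection_percentage(found_in_test: int, found_in_field: int) -> float:
    total = found_in_test + found_in_field
    if total == 0:
        raise ValueError("no defects recorded")
    return 100.0 * found_in_test / total

# 90 defects caught in test, 10 escaped to production -> DDP of 90%.
assert defect_detection_percentage(90, 10) == 90.0
```

The arithmetic is trivial; the challenging part Black points to is defining the counts consistently and interpreting the result for the right audience.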
Jim Sivak, Director of QA, Unidesk:
It is hard to come up with the biggest weakness across the board, as software is so varied. But perhaps there is a common theme that is apropos. I believe it lies in the fact that many, many companies treat testing as the final step in the software development process — in their eyes, a way to assure that the product is of high quality. This rests on the premise that quality can be tested into the product.
Rather than looking inward to the process of building quality software, they look to the test team to “break” the software and maximize the found bug count. Of course, the software is already broken by the time it reaches the testers.
Thus the weakness lies in treating testing as an adjunct activity, separate from development. Time after time, projects end up late because testing has found issues at the end — and of course the testers are then blamed for making the project miss its deadlines. This weakness is also independent of the development process: testing only at the end of a Scrum sprint is no different from testing at the end of a waterfall project; it varies only in scale.
Testing has to be an integral part of developing software and not a separate phase. When this approach is taken, product quality is owned by everyone on the team. It is easy to state, but hard to put into practice because of long standing preconceived notions that developers and testers are better kept apart.
What is depressing about this weakness is that it has existed for so long: prominent QA and testing luminaries spoke about it many years ago, and they continue to discuss it at yearly conferences. The problem is convincing executives that quality cannot be tested into the product. As Edsger Dijkstra said, “Program testing can be used to show the presence of bugs, but never to show their absence!”
Cem Kaner, Professor of Software Engineering, Florida Institute of Technology:
I don’t know how to estimate “biggest,” but I think an ongoing problem is reliance on relatively low-skill testing. Too many courses, conference talks and books focus on testing basics, or on process, dogma and culture wars instead of skill. We often draw a contrast between exploratory and scripted testing, emphasizing the greater OPPORTUNITY for skilled testing in exploration. I think the contrast is valid, but exploration isn’t magical. An explorer who doesn’t know much about testing probably won’t explore very well. And a skilled test designer can do a lot with manual or automated scripts. I think the contrast also hides other issues. For example, neither approach takes us to high-volume automation, to test designs driven by models, or to creative code-level regression testing.
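High-volume automation of the kind Kaner mentions can be sketched briefly: generate many random inputs and check an invariant, rather than hand-writing a handful of scripted cases. The toy run-length encoder/decoder below is hypothetical, invented purely for illustration:

```python
import random

# A minimal sketch of high-volume automated testing: thousands of
# generated inputs checked against one invariant (decode reverses
# encode). The functions under test are a toy run-length codec.

def rle_encode(s: str) -> list:
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def rle_decode(pairs: list) -> str:
    return "".join(ch * n for ch, n in pairs)

def high_volume_roundtrip(trials: int = 10_000, seed: int = 0) -> int:
    rng = random.Random(seed)  # seeded so failures are reproducible
    for _ in range(trials):
        s = "".join(rng.choice("ab ") for _ in range(rng.randint(0, 30)))
        assert rle_decode(rle_encode(s)) == s, s
    return trials
```

The skill here is in choosing the input generator and the invariant — exactly the kind of test design thinking that neither scripted nor purely exploratory framings capture on their own.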
What do YOU consider to be the biggest weakness in the way companies test software? Add your thoughts to the conversation in the comments section.