QA Manager: “Hey boss, you wanted to see me?”
CEO: “Yeah, I just wanted to invite you into my office to let you know what a great job you’re doing. You and your team add so much value to our business. I can’t begin to tell you how appreciative I am for the bugs you find, all the smooth releases and the way you help other departments. Thanks for everything that you do. How would you like a 300% raise?”
QA Manager: “Thanks boss! [pauses] Wait a minute, am I dreaming?”
CEO: “Indeed you are!” [flies away on a unicorn]
To many in the testing and QA space, the beginning of that conversation is about as realistic as the ending. In other words, many have come to view software testing as a thankless job. But is it? Certainly not here. And of course, the answer to that question depends largely on your perspective, the company you work for and a host of other factors. But for the sake of this blog post, we’re going to operate under the assumption that software testing is generally one of the more underappreciated jobs in tech.
So what are the traits of a thankless job? In my view, a thankless job is one that gets all the blame when things go wrong, and hardly any credit when things go right. If you’re a sports fan, think NFL placekicker. After attending numerous conferences, conducting dozens of interviews and reading thousands of comments on this blog, I’ve begun to notice that many testers do indeed see themselves as working in a thankless profession.
To illustrate this point, I’m going to highlight three problem areas that are almost always blamed on testers and QA departments – wrongly so, I might add.
Whenever a bug makes its way into production, the QA department is almost guaranteed to hear something along the lines of, “Why didn’t you catch that bug!?!” This blame originates (in my opinion) from a misunderstanding of what the role of testing and QA is, and what it ought to be. Here’s a great quote from Brian Marick’s Classic Testing Mistakes that perfectly summarizes this problem:
“A first major mistake people make is thinking that the testing team is responsible for assuring quality. This role, often assigned to the first testing team in an organization, makes it the last defense, the barrier between the development team (accused of producing bad quality) and the customer (who must be protected from them). It’s characterized by a testing team (often called the “Quality Assurance Group”) that has formal authority to prevent shipment of the product. That in itself is a disheartening task: the testing team can’t improve quality, only enforce a minimal level. Worse, that authority is usually more apparent than real. Discovering that, together with the perverse incentives of telling developers that quality is someone else’s job, leads to testing teams and testers who are disillusioned, cynical, and view themselves as victims. We’ve learned from Deming and others that products are better and cheaper to produce when everyone, at every stage in development, is responsible for the quality of their work.”
Granted, sometimes testers and QA do deserve the blame for a bug that makes it into production – a glaring security hole, broken core functionality and the like. But since there’s no such thing as a perfect application, a few bugs will always slip through – and that’s not the fault of the QA team.
If your testing team has never been blamed for a missed deadline, then you haven’t been in the business long enough. Unfortunately, testing is seen as the “last line of defense” in many companies. This is a dangerous mindset to adopt. While it may work when an application is delivered in good condition (and on time), it backfires when the application arrives in poor condition with a tight deadline looming. In the latter case, when bugs are discovered, testing gets blamed either for not finding them sooner or for nitpicking.
Testing expert James Sivak addressed this very point in one of our past Testing Roundtable discussions:
…Many companies treat testing as the final step in the software development process–in their eyes as a way to assure that the product is of high quality. This points to the premise that companies believe that quality can be tested into their product. Rather than looking inward to the process of building quality software, they look to the test team to “break” the software and maximize the found bug count. Of course, the software is already broken by the time it reaches the testers.
Thus the weakness lies in looking at testing as an adjunct activity, separate from development. Time after time, projects end up late because testing has found issues at the end–thus of course putting the blame on the testers for making the project miss its deadlines. It is also independent of the development process–only testing at the end of a Scrum sprint is no different than testing at the end of a waterfall project, it only varies by scale.
Without an approach that incorporates testing into all phases of the SDLC – and without an approach that holds every department accountable for quality – testers will continue to get blamed for missed deadlines.
Apart from missed bugs, test teams often get blamed for poor quality in general – whether it be poor performance, poor usability or any other common end-user complaint. It’s been said many times that quality cannot be tested into a product, yet low quality remains a major pain point for QA and test teams. As with my previous points, testers and QA teams may be partly to blame for low-quality products, but they are rarely entirely to blame.
Consider, for instance, the source of the “poor quality” complaint. Let’s say a recently launched application does not fit a business need for the end user. Here’s author Michael Krigsman with a good example of what I mean:
“The biggest complaints about operational business applications are that they just don’t do what business users wanted. Consequently, employees implement endless workarounds, managers use hidden spreadsheets, and the business fails to benefit from its application investment.”
Can the poor quality described in the quote be attributed to a failure on the part of the test team? Not by a long shot.
I hope to have highlighted a few of the major areas where testers wrongly take the blame. I’m sure there are many, many more. So as always, please share your thoughts in the comments section.
And if anyone wants to play devil’s advocate and argue that testers get too much credit, then by all means, have at it. Just don’t expect to make many friends :).
Oh, and one last thing for all the uTesters here: There’s a good discussion in the uTest Forums on how to better communicate the value of testing. It’s worth a read.
Over and out.