Is Software Testing a Thankless Job?

QA Manager: “Hey boss, you wanted to see me?”

CEO: “Yeah, I just wanted to invite you into my office to let you know what a great job you’re doing. You and your team add so much value to our business. I can’t begin to tell you how appreciative I am for the bugs you find, all the smooth releases and the way you help other departments. Thanks for everything that you do. How would you like a 300% raise?”

QA Manager: “Thanks boss! [pauses] Wait a minute, am I dreaming?”

CEO: "Indeed you are!" [flies away on a unicorn]

******************

To many in the testing and QA space, the beginning of that conversation is about as realistic as the ending. That is to say, many have come to view software testing as a thankless job. But is it? (Certainly not here, of course.) The answer to that question depends largely on your perspective, the company you work for and a host of other factors. But for the sake of this blog post, we're going to operate under the assumption that software testing is generally one of the more underappreciated jobs in tech.

So what are the traits of a thankless job? In my view, a thankless job is one that gets all the blame when things go wrong, and hardly any credit when things go right. If you’re a sports fan, think NFL placekicker. After attending numerous conferences, conducting dozens of interviews and reading thousands of comments on this blog, I’ve begun to notice that many testers do indeed see themselves as working in a thankless profession.

To help me illustrate this point, I'm going to highlight three particular problem areas that are almost always blamed on testers and QA departments (wrongly so, I might add). Here they are:

Missed Bugs
Whenever a bug makes its way into production, the QA department is almost guaranteed to hear something along the lines of, "Why didn't you catch that bug!?!" This blame originates (in my opinion) from a misunderstanding of what the role of testing and QA is, and what it ought to be. Here's a great quote from Brian Marick's Classic Testing Mistakes that perfectly summarizes this problem:

“A first major mistake people make is thinking that the testing team is responsible for assuring quality. This role, often assigned to the first testing team in an organization, makes it the last defense, the barrier between the development team (accused of producing bad quality) and the customer (who must be protected from them). It’s characterized by a testing team (often called the “Quality Assurance Group”) that has formal authority to prevent shipment of the product. That in itself is a disheartening task: the testing team can’t improve quality, only enforce a minimal level. Worse, that authority is usually more apparent than real. Discovering that, together with the perverse incentives of telling developers that quality is someone else’s job, leads to testing teams and testers who are disillusioned, cynical, and view themselves as victims. We’ve learned from Deming and others that products are better and cheaper to produce when everyone, at every stage in development, is responsible for the quality of their work.”

Granted, sometimes testers and QA do deserve the blame for a bug that makes it into production: glaring security holes, broken basic functionality in core features and the like. But since there's no such thing as a perfect application, there will always be a few bugs lying around, and that's not the fault of the QA team.

Missed Deadlines
If your testing team hasn't been blamed for a missed deadline, then you haven't been in the business long enough. Unfortunately, testing is seen as the "last line of defense" in many companies. This is a dangerous mindset to adopt. While it may work when an application is delivered in good condition (and on time), it backfires when the application is delivered in poor condition with a tight deadline approaching. In that case, when bugs are discovered, testing gets blamed either for not having found them sooner, or for nitpicking.

Testing expert James Sivak addressed this very point in one of our past Testing Roundtable discussions:

…Many companies treat testing as the final step in the software development process–in their eyes as a way to assure that the product is of high quality. This points to the premise that companies believe that quality can be tested into their product. Rather than looking inward to the process of building quality software, they look to the test team to “break” the software and maximize the found bug count. Of course, the software is already broken by the time it reaches the testers.

Thus the weakness lies in looking at testing as an adjunct activity, separate from development. Time after time, projects end up late because testing has found issues at the end–thus of course putting the blame on the testers for making the project miss its deadlines. It is also independent of the development process–only testing at the end of a Scrum sprint is no different than testing at the end of a waterfall project, it only varies by scale.

Without an approach that incorporates testing into all phases of the SDLC – and without an approach that holds every department accountable for quality – testers will continue to get blamed for missed deadlines.
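
To make that concrete, here's a minimal sketch of what "testing in all phases" looks like at the smallest scale. It's my own illustration in Python with pytest, and the module and function names are invented for the example; the point is simply that the test ships in the same change as the code, rather than waiting for a handoff to a separate test phase:

    # pricing.py -- hypothetical module, written by the developer
    def apply_discount(price, percent):
        """Return price reduced by percent; reject bad input early."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    # test_pricing.py -- written in the same sprint, not after a handoff
    import pytest
    from pricing import apply_discount

    def test_discount_applies():
        assert apply_discount(100.0, 25) == 75.0

    def test_discount_rejects_bad_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)

When a check like this runs on every commit, a regression surfaces the day it's introduced, not at the end of the schedule where it becomes "QA's fault."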

Poor Quality
Apart from missed bugs, test teams often get blamed for poor quality in general, whether it be poor performance, poor usability or any other common end-user complaint. It's been said by multiple people that quality cannot be tested into a product, yet this remains a major pain point for QA and test teams. As with some of my previous points, testers and QA teams may be partly to blame for low-quality products, but they're not entirely to blame.

Consider, for instance, the source of the "poor quality" complaint. Let's say a recently launched application does not fit a business need for the end user. Here's author Michael Krigsman with a good example of what I mean:

“The biggest complaints about operational business applications are that they just don’t do what business users wanted. Consequently, employees implement endless workarounds, managers use hidden spreadsheets, and the business fails to benefit from its application investment.”

Can the poor quality described in the quote be attributed to a failure on the part of the test team? Not by a long shot.

*********

I hope to have highlighted a few of the major areas where testers wrongly take the blame. I'm sure there are many, many more. So as always, please share your thoughts in the comments section.

And if anyone wants to play devil’s advocate, and argue that testers get too much credit, then by all means. Just don’t expect to make many friends :).

Oh, and one last thing for all the uTesters here: There’s a good discussion in the uTest Forums on how to better communicate the value of testing. It’s worth a read.

Over and out.


Comments

  1. Pault78 says

    This is one of the best articles I have read about testing. I am a Test Manager now and have been in testing for over 14 years. I was actually in a meeting today and was asked why testing takes so long!! My response was as follows:

    It's simple: you delivered a solution that wasn't fit for purpose, a solution that has barely been modified since, but 400 issues later and on Release 17, guess what, yes, the original estimate for testing has been exceeded!!

    The term 'hard rock and a bad place' comes to mind.

  2. pawan says

    Testing effort is very hard to justify. We wrote thousands of test cases and executed even more during ad-hoc testing, but one production bug wiped out all that testing effort.
    Eventually they forget about our effort and start saying, "It's a QA miss."

  3. JeffC says

    Any organization that considers the process of quality assurance to be an important part of a software development project life-cycle will usually have strong support for its practice, at both the process and product level.
    Philip Crosby, the quality expert who said "quality is free", also stated that "quality management is a systematic way of guaranteeing that organized activities happen the way they are planned. It is a management discipline concerned with preventing problems from occurring by creating the attitudes and controls that make prevention possible" (Schulmeyer, 2008).

    In my experience, quality assurance is not just a series of activities performed during a project life-cycle to evaluate the quality of a product for a particular purpose, but a practice that has to be implemented throughout the entire process. Quality management needs to be practiced in all areas, from project initiation to project closure.

    Reference:
    Schulmeyer, G.G. (2008) Handbook of Software Quality Assurance. USA: Artech House

  4. JeffC says

    The principle that quality cannot be inspected into a product comes from the Total Quality Management (TQM) philosophy (Radhakrishnan, 2008). It is a concept that has found its way from the quality control and management practices within manufacturing environments (especially for Computer Integrated Manufacturing) to general software development environments. In a manufacturing environment it essentially means that any amount of inspection after a product or component is manufactured will not help to improve its quality. The process used to develop the product needs to be evaluated to avoid production of a poor quality product.

    I believe we acknowledge that in many software development environments, product testing (i.e. inspection or evaluation of a product for fitness of use) is often performed too late in the process to deal with many defects in a timely and cost-efficient manner. If quality was designed into the development of products at virtually every stage of the process, we should expect to achieve better levels of quality (Cohen et al., 1998).

    Moreover, one of the foundational aspects in a quality program is how well quality can be built into a product, not how well one can evaluate product quality. While evaluation activities are crucial activities, they alone will not achieve the specified quality. That is, product quality cannot be evaluated (tested, audited, analyzed, measured, or inspected) into the product. Quality can only be “built in” during the development process (Schulmeyer, 2008). Quality must be proactively managed — from product conception to design, production and delivery to the customer (Dunne, 2006).

    When quality is "built in" during the development process, the probability of producing a product that adheres to the requirements for its intended use should be much higher. It should also help avoid much of the rework that is often needed to fix defects caused by a variety of reasons.

    References:
    Cohen, M.L., Rolf, J.E., & Steffey, D.L. (1998) Statistics, Testing, and Defense Acquisition: New Approaches and Methodological Improvements. USA: National Academies Press

    Dunne, K.J. (2006) Perspectives on Localization. USA: John Benjamins Publishing Company

    Radhakrishnan, P. (2008) CAD/CAM/CIM. IND: New Age International

    Schulmeyer, G.G. (2008) Handbook of Software Quality Assurance. USA: Artech House

  5. JeffC says

    Philip Crosby defined quality as simply “conformance to the requirements”. Joseph Juran defined quality as “fitness for use”. The ISO 8402 definition for quality is “the totality of features and characteristics of a product or service that bear on its ability to satisfy a given need”. The ISO 9126 framework defines six product quality characteristics which are used to judge the level of quality of a software product. These include: functionality, reliability, usability, efficiency, maintainability, and portability (Hall & Fernandez-Ramil, 2007; O’Regan, 2002).

    In many organizations the definition of quality can evidently mean different things. For example, the quality of software for an airplane flight control system must meet a different standard than the quality of software for the management of printing from a laser printer. It has been widely recognized that in the development and management of software products many issues and opinions regarding quality abound. This article will focus on just two of the central themes in managing the quality of software: process and product quality.

    Presently we can find many standards, methodologies, guidelines, and maturity models that any organization involved with the management of software development and evolution can use to improve its processes and the quality of the products and services provided. The quality of the process used to develop and maintain a product will strongly influence the quality of that product. Methods, processes, and procedures need to be established to evaluate the management of both process and product quality. The experts tell us that quality cannot be inspected into a product (Mutafelija, 2003; Schulmeyer, 2008).

    A quality management principle is that the improvement of product quality can be realized through the continuous improvement of the processes that are used to produce the product (Kenett, 1999). In any software development project it is just as important to define and implement the activities used to determine the effectiveness of the processes used to produce the product as it is to evaluate the quality of the product produced in compliance with established requirements. The effectiveness of the process used to produce the product can often be derived from the results of product evaluations.

    According to Kenett (1999) and Perrin (2008), organizations functioning at CMMI level 4 can predict process and product quality trends within certain quantitative boundaries. At this level, management has clear visibility into all the processes and can use that visibility to make sound decisions about improvements.

    Determining the quality of software can vary among products. Many products need to meet stringent standards in order to be deemed acceptable for their intended use. On the other hand, there are many other products that are developed and implemented with shortcomings and defects that do not adversely affect their performance or level of acceptance by users or customers. For example, many desktop applications are known to contain numerous defects yet most customers will continue to use the products daily since the defects may not adversely affect the vast majority of the tasks and activities that can be accomplished using the products. Nevertheless, software quality for any product is achieved through the proper relationship of product quality to process quality.

    References:

    Hall, P. & Fernandez-Ramil, J. (2007) Managing the Software Enterprise: Software Engineering and Information Systems in Context. London: Thomson Learning

    Kenett, R. (1999) Software Process Quality: Management and Control. USA: Marcel Dekker Incorporated

    Mutafelija, B. (2003) Systematic Process Improvement Using ISO 9001:2000 and the CMMI. USA: Artech House

    O’Regan, G. (2002) Practical Approach to Software Quality. USA: Springer

    Perrin, R. (2008) Real World Project Management: Beyond Conventional Wisdom, Best Practices, and Project Methodologies. USA: Wiley

    Schulmeyer, G.G. (2008) Handbook of Software Quality Assurance. USA: Artech House

  6. JeffC says

    Software Testing, and by extension Software Quality Assurance, can be a contentious topic in many organizations. The fact is that Software Testing is NOT Software Quality Assurance, Software Quality Control or Software Quality Management. It is, however, an important phase in ensuring the delivery of products or services with a certain level of quality.

    Software Quality Management is something that must be supported and implemented at all levels of an organization – both at the process and product levels in order to ensure the consistent delivery of high-quality products or services.

    Some time ago I wrote a couple of articles about SQM and quality management as part of the software quality management process. I include them here for your consideration. They do not, however, come close to covering the wide range of issues that are related to software quality management and the processes (testing included) used to deliver quality products or services.

  7. Tejasvi says

    I have a mixed opinion on this. This used to be the case a few years ago, but as the IT industry grew and project teams came to understand more about each of the functions (testing/architecture/UX), mature organizations and teams have started to treat testing as being just as important as development.

    With Scrum and Agile methodology growing popular, practices like smoke testing, CI and daily builds that find failures/defects early are picking up rapidly, and that mindset change is going to change the whole view on testing (a minimal sketch of such a check follows below). Having said that, every function of the SDLC is a thankless job, in my opinion.

    But I do agree "testers are the last line of defense", and we need to develop software with a "zero-defects" or defect-free mentality (my manager's quote). It's not just testers who own it, though: everyone owns the quality and success of a product/project.
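
    As a rough illustration of that kind of early check, here is a sketch only, assuming Python and pytest; the names and the stand-in app factory are invented so the example is self-contained:

        # test_smoke.py -- sketch: mark a handful of critical-path checks
        # as "smoke" so CI can run them on every daily build, with the
        # full regression suite left for a nightly run.
        import pytest

        def create_app():
            # Stand-in for the real application's entry point; invented
            # here purely for illustration.
            return {"status": "ok", "version": "1.0"}

        @pytest.mark.smoke  # register "smoke" in pytest.ini to avoid a warning
        def test_build_is_alive():
            app = create_app()
            assert app["status"] == "ok"

    CI would then run `pytest -m smoke` on each build and fail fast, long before a release deadline looms.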

  8. Nandun says

    This article seems like a typical reaction from QA engineers to their inability to live up to their responsibilities.

    I agree that not 100% of bugs can be identified before a product goes live. But that doesn't mean the QA team is not responsible for the quality of a product that is live.

    QA should really get out of this “we’re helpless in improving quality, we can only find bugs” mindset.

    If "Missed Bugs" and "Poor Quality" are two issues that the QA team is not willing to be responsible for, I honestly don't understand the need for QA.

  9. Sanjay says

    The wrongly perceived role of the testing team is that of a culture cop. Testers are viewed, typically by developers, as the guys who have nothing but bad things to say about OUR code. Senior developers don't want to challenge the BA's design for delivering a solution to the client; the end result is that QA takes on this implicit role.

    For heaven's sake, how many different names are there for the same shitty product fixes: patch, update, release, security fix? Just last week the "superior" OS X had three updates: an EFI firmware update, then a huge 600MB version update to v10.8.2, followed by another EFI fix. Did anyone at Apple realize that downloading 600MB from a remote part of the world over wireless broadband can be next to impossible? Just blame it on the 10 testers in Cupertino.

    Any team/person in ANY organization/society/country who shares the responsibility of identifying issues is going to be viewed negatively. Imagine the White House willingly requesting a press release to acknowledge its flaws/mistakes. The press is there to cover an event and then publish its views, and that is always going to be viewed negatively. No one wants to be judged, let alone by a bunch of testers!

    Zhou_fin, well said.
    The author is on the mark.
    Richard G got me thinking: from a company-structure point of view, IT is always a cost center, and as long as this continues, IT in general will be considered a secondary role. Can CEOs restructure their companies to say "if IT can develop systems/processes for our company that will improve our productivity/profits/customer satisfaction, stickiness and brand value, then they will be incentivized"? IT should now evolve into 'Infrastructure and Technology' and become a stakeholder.

  10. Zhou_fin says

    This is because some people are confused about Testing vs. Quality Assurance.

    Quality Assurance is the overall process for ensuring quality is designed into a system from the inception of requirements to implementation into production.

    Testing is a subset of QA.

    Testing is the structured process for validating that requirements are met. Each level of test (Component, Assembly, Product (Application and Integration), Performance, User Acceptance and Operational Readiness) validates different types of requirements.
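
    A toy sketch of the contrast between the first two of those levels (my own example, in Python with pytest; the functions are invented): a component test exercises one unit in isolation, while an assembly test exercises units wired together.

        # component under test
        def parse_amount(text):
            return round(float(text), 2)

        # assembles parse_amount into a larger behavior
        def order_total(lines):
            return round(sum(parse_amount(t) for t in lines), 2)

        # component-level test: one unit, in isolation
        def test_parse_amount_rounds():
            assert parse_amount("19.999") == 20.0

        # assembly-level test: the units working together
        def test_order_total():
            assert order_total(["1.10", "2.20"]) == 3.30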

  11. Long Time Tester says

    I left one out: when you are blamed for bugs in Production that don't occur in the Test environment. So they give YOU the responsibility of conducting UAT. I thought UAT was supposed to be done by the users? And when you point out the Production issue that isn't in Test, you are given the excuse of "we'll fix it in the next release."

    Sure you will.

  12. Long Time Tester says

    What really gets me is when I hear these classic lines:

    1) “Nobody is going to do that” — so you stop doing that in your testing, then when someone does that in Production, they get mad at you.

    2) "You don't need to do that, it's over-testing" — until someone does "that" in Production. If you are not blamed, everyone pretends the "over-testing" conversation never happened.

    3) You state in your interview that you do NOT know how to do X type of testing, be it security, performance or coding like someone who has been a developer for 15 years — then a year later, they are mad that you do not know how to do what you clearly stated you could not do in the first place. Because after all, EVERY tester can do EVERY thing.

    Right?

  13. C "Tornado" S says

    I have always felt that my job in testing was two-fold: 1) find defects in the product where it does not match the expected/spec'd behavior, and 2) try to represent the customer and highlight problems that they are likely to encounter, as early as possible in the process. In other words, as a tester my job is to find the problems. As it turns out I'm good at it, and because of the constant communication I feel like my job is very much thanked by my developers.

    Feeling the thanks of the developers, however, does not stave off the sense that testing is a thankless job. What keeps it coming back is constantly telling others the problems you see and being ignored. Then when the problem occurs, you wonder why others are shocked. After the 5th or 6th time of being correct in a row, you begin to feel like nobody is listening and nobody cares until something goes wrong. An example: you state that to get a project done by a deadline you need 5 testers on it full time, and you get 3 testers, 2 admin assistants and a partridge in a pear tree. Then when you slip, they ask why. After all, they gave you 7 living things to help with testing. That mentality is what builds the sense of testing being a thankless job.

  14. Tester says

    Agreed with Rajeev Anand. This is what we do in our company: keep all the stakeholders on their toes to maintain quality, so that quality is on their minds all the time, and we tell them up front that if they release the app with known issues, the responsibility will be theirs. It happens at times that a customer reports the same issue we tried to stop from going to production, but since we had warned the stakeholders, they just can't blame us.

  15. Rajeev Anand says

    All said and done, there is a responsibility on QA's shoulders: to contribute to establishing at least a minimal process, stick to it and highlight risks.

    Communication with all stakeholders is very important, and the QA manager especially should own this responsibility.

    Upfront communication with development teams and product managers about lack of information, late delivery of code, lack of unit testing, fixes being pushed out or overlooked, lack of time to test, lack of resources/skills to test, etc., goes a long way in gaining respect. Write emails copied to all stakeholders without any hesitation. The resulting situation often proves that you are right.

    If you communicate in the right way and have done your best to ensure quality, nobody can blame you for those production bugs. You can say, "See, we warned you up front, but steps were not taken to correct it; hence the problem."

    After this, people will actually take you seriously.

  16. CT says

    ‘–only testing at the end of a Scrum sprint is no different than testing at the end of a waterfall project, it only varies by scale.’

    That's why someone invented the V-model: to get testers involved in requirements at the same time as the developers.
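
    Here's the V-model idea in miniature, as a sketch only (Python/pytest, with a made-up requirement and a stub implementation included just so the example runs): the acceptance check is written from the requirement, so testers are engaged at requirement time rather than after the code lands.

        # Requirement R-12 (hypothetical): "A password must be at least
        # 8 characters and contain a digit."

        def password_is_valid(pw):
            # stub implementation, included so the sketch is runnable
            return len(pw) >= 8 and any(c.isdigit() for c in pw)

        # acceptance test written directly from the requirement text
        def test_requirement_r12():
            assert password_is_valid("s3cretpw")         # meets both rules
            assert not password_is_valid("short1")       # too short
            assert not password_is_valid("longenough")   # no digit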

  17. Richard G says

    Everything is a "thankless" job if you put it into that perspective. I say this as someone with 40+ years of IT development across all levels, from operator (yes, they get blamed too) through coding, analysis, DBA, architect, manager and QA testing. As far as I am concerned, it is all just a matter of professionalism at each and every level: understanding your specific role and, more importantly, knowing who you are working for. Keeping your immediate superior happy is NOT what IT is all about; anyone who thinks that should not be in IT at all. IT is an overhead to the business and needs to appreciate that unless it provides added value to the LOB, it is more of a hindrance than a help. If everyone in IT understands this and really works to provide the LOB with the best possible IT products, then the LOB will appreciate that IT has the right attitude and will ensure that the bean counters don't outsource their jobs.

    Speaking for myself, I insist upon testing at every level with full LOB participation. In practice this means that developers must be prepared to do walkthroughs with the LOB, and the LOB must be prepared to answer questions and give detailed feedback during these sessions. The QA team is present at these meetings and can ask its own questions of the LOB. In this controlled setting, "bugs" are found and eliminated before they ever get as far as UAT. Yes, the LOB is making a time commitment to the Dev and QA teams, but the end results speak for themselves: the LOB gets exactly what it wants and needs, and the Dev and QA teams are able to deliver IT products that are professional and of a consistent quality. In essence, a win-win-win. Any questions? :-)

  18. Pat says

    These are all very old and long-lamented issues. The very same things were being said 30 years ago, and 20 years ago, and 10 years ago. So it seems what's old is new again, again and again. Various approaches have been used over the years, to varying degrees of success, to mitigate these perceptions: closer teamwork, more design review at every stage of the product life cycle, more customer and stakeholder involvement, more automation, better automation, more unit testing, better design methods, faster rework capabilities, black box versus white box versus gray box, system testing, integration testing, rapid testing, acceptance testing, IEEE standards, Six Sigma, and on and on. Every approach had good reasoning, or at least some incremental rationale, and had at least some impact toward improving the most recently measured "failure". Each also had its own cost, and once its improvement was achieved, reducing that cost was usually its eventual doom. It seems that as a business management technique, "just keep changing the approach" is a key to continuation. It is a bit oversimplified, but it meets a key customer satisfaction criterion: if the last release, method, design or process was not good enough, perfect or right, some incremental change proves the next one may be better.

    It is definitely the case, at every level of job and management, that credit is proudly reserved for the highest level that can boast it, and the buck is passed down for blame to the lowest level at which it can be "gotten away with". That's always, and in every business. Quality assurance is not special here, but it is often set up to be the low man on the totem pole, ripe to be used for this purpose. So it seems important to remember that eventually blame needs to take turns: the fully fired QA staff cannot be at fault after they are all gone, and every business situation is the result of business analysis of the costs, time to market and demand. In the end it really takes outstanding management and leadership, and the luck of being in the right marketplace at the right time, to have product success, more than any other component of a product. Hence, changing to try to hit the right demand at the right time strangely makes more sense than it initially seemed.

  19. Jon says

    The other thing is that it's difficult to really prove the value of QA. Other than the number of bugs found, there aren't too many metrics that you can use to show value.
    QA can save a company a significant amount of money by averting outages, savings well beyond the cost of testing. Unfortunately this is tough to show clearly to upper management, which is frustrating.
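
    One rough way to make that case is a back-of-the-envelope cost-avoidance estimate. This is a sketch in Python, and all the figures are invented for illustration; the point is the shape of the argument, not the numbers:

        def testing_cost_avoidance(bugs_caught, incident_rate,
                                   incident_cost, testing_cost):
            """Net savings: what the caught defects would likely have cost
            in production, minus what the testing effort cost.

            bugs_caught   -- defects found before release
            incident_rate -- fraction that would plausibly have caused an outage
            incident_cost -- average cost of one production incident
            testing_cost  -- total cost of the testing effort
            """
            avoided = bugs_caught * incident_rate * incident_cost
            return avoided - testing_cost

        # e.g. 120 bugs caught, 5% outage-causing, $40k per incident,
        # $150k spent on testing -> prints 90000.0 (net avoided cost)
        print(testing_cost_avoidance(120, 0.05, 40_000, 150_000))

    Even as an estimate, a figure like this gives upper management something concrete to weigh the testing budget against.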

  20. Allen says

    It seems to me another way this under-valuing of testing can be seen is in how quickly testing budgets and schedules are cut. It is easy to form the impression, when schedule and budget are cut, that testing is not seen as being worth much. Yet when any problem appears in production, it seems the answer is always that testing should have found it. In other words, the only time testing gets valued is right after code moves to production, and then suddenly testing is depicted as all-knowing and thus responsible for any bugs.
