Testing Roundtable: What’s the Biggest Weakness in the Way Companies Test?

This month, in place of our standard Testing the Limits interview, we decided to hit up a few of our past guests for a “testing roundtable” discussion. The topic: What is the biggest weakness in the way companies test software? Below are some extremely insightful answers from testing experts Michael Bolton, James Bach, Noah Sussman, Dan Bartow, Rex Black, James Sivak and Cem Kaner. Enjoy!


Michael Bolton, Principal at DevelopSense:

So far as I can tell, most companies treat software development as implementation of highly idealized business processes, and they treat testing as an exercise in showing that the software models those processes in a way that’s technically correct. At the same time, companies treat the people who use the software as an abstraction. The consequence is that we’re creating software that delays and frustrates the people who use it or are affected by it. When testing is focused almost entirely on checking the functions in the software, we miss enormous opportunities to learn about the real problems that people encounter as they go about their business. Why are testers so often isolated from actual end-users?

Today I was traveling through the airport. When I checked in using the online service, I had accidentally noted that I’d be checking two bags, but I only brought one with me. In addition, my flight was cancelled, and I had to be put on a later flight. The customer service representative could get me onto that flight, but she had serious trouble in printing a boarding pass associated with only one bag; apparently there was a warning message that couldn’t be dismissed, such that her choices were to accept either three bags or none at all. It took fifteen minutes and two other representatives to figure out how to work around the problem. What’s worse is that the woman who was trying to help me apologized for not being able to figure it out, as if it were her responsibility. Software development organizations have managed to convince our customers that they’re responsible for bugs and unforgiving and unhelpful designs.

The success of a software product is only partly based on how it handles the happy path. That’s relatively easy to develop, and it’s relatively easy to check. Real testing, to me, should be based on investigating how the software allows people to deal with what we call “exceptions” or “corner cases”. That’s what we call them, but if we bothered to look, we’d find out that they were a lot more common than we realize; routine, even. Part of my vision of testing is to include a new discipline in which we do significant field research and participant observation. Instead of occasionally inviting customers to the lab (never mind sitting in the lab all by ourselves), we testers—and our organizations—could learn a lot through direct interaction with people who use the software every day; by close collaboration with technical support; and by testing rich and complex scenarios that are a lot closer to real life than simplified, idealized use cases.


James Bach, Author and Consultant, Satisfice:

There is a cluster of issues that each might qualify as the biggest weakness. I’ll pick one of those issues: chronic lack of skill, coupled with the chronic lack of any system for acquiring skill.

Pretty good testing is easy to do (that’s partly why some people like to say “testing is dead” – they think testing isn’t needed as a special focus because they note that anyone can find at least some bugs some of the time).

Excellent testing is quite *hard* to do.

Yet as I travel all over the world, teaching testing and consulting in testing organizations, I see the same pattern almost *everywhere*: testing groups who have but a vague, wispy idea what they are trying to do; experienced testers who barely read about and don’t systematically practice their craft beyond the minimum needed to keep their employers from firing them; testers whose practice is dominated by irrational and ignorant demands of their management, because those testers have done nothing to develop their own credibility; programmers who think their automated checks will save them from disaster in the field.

How does one learn to test? You can’t get an undergraduate degree in testing. I know of two people who have a PhD in testing, one of whom I admire (Meeta Prakash), the other one is, in my view, an active danger to himself and the craft. I personally know, by name, about 150 testers who are systematically and diligently improving their skills. There are probably another several hundred I’ve met over the years and lost touch with. About three thousand people regularly read my blog, so maybe there are a lot of lurkers. A relative handful of the people I know are part of a program of study/mentoring that is sanctioned by their employers. I know of two large companies that are attempting to systematically implement the Rapid Testing methodology, which is organized around skill development, rather than memorizing vocabulary words and templates. Most testers are doing it independently, however, or even in defiance of their employers.

Yes, there is TMap, TPI, ISTQB, ISEB, and many proprietary testing methodologies out there. I see them as crystallized blobs of uncritical folklore; confused thinking about testing frozen in place like fossilized tree sap. These models and procedures have been created by consultants and consulting companies to justify themselves. They neither promote nor require skill. They promote what I call “ceremonial software testing” rather than systematic critical thinking about complex technology.

Just about the best thing a tester can do to begin to develop testing skill in a big way is not to read or study any test methodology. Ignore vocabulary words. Toss aside templates. No, what that tester should do is read Introduction to General Systems Thinking, by Gerald M. Weinberg. Read it all the way through. Read it, young tester, and feel your mind get blown. Read it, and meditate on its messages, and do the exercises it recommends, and you will find yourself on a new path to testing excellence.


Noah Sussman, Technical Lead, Etsy:

A surprising number of organizations seem to dramatically underestimate the costs of software testing.

Testability is a feature and tests are a second feature. Having tests depends on the testability of an application. Thus, “testing” entails the implementation and maintenance of two separate but dependent application features. It makes sense then that testing should be difficult and expensive. Yet many enterprise testing efforts do not seem to take into account the fact that testing an application incurs the cost of adding two new, non-trivial features to that application.
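Sussman’s point that testability is a feature in its own right can be made concrete with a small sketch (a hypothetical example, not from the roundtable): a function that reads the system clock directly cannot be checked deterministically, so the test-enabling seam has to be built first, and the tests are then a second feature that depends on it.

```python
from datetime import datetime, timezone

# Hypothetical sketch (the function and its spec are invented for
# illustration): reading the clock inside the function would make it
# untestable, so the clock is injected -- that injection point is the
# "testability" feature, and the assertions below are the second
# feature that depends on it.

def greeting(now=None):
    """Return a greeting based on the (injectable) current UTC time."""
    now = now or datetime.now(timezone.utc)
    return "Good morning" if now.hour < 12 else "Good afternoon"

# Tests that only exist because the seam above does:
assert greeting(datetime(2012, 3, 1, 9, tzinfo=timezone.utc)) == "Good morning"
assert greeting(datetime(2012, 3, 1, 15, tzinfo=timezone.utc)) == "Good afternoon"
```

Neither the seam nor the assertions are free: both are code that must be designed, reviewed and maintained, which is exactly the double cost Sussman describes.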

There also seems to be a widespread misconception that testing somehow makes application development easier. In fact the opposite is true.

If I may mangle Kernighan: testing is much more difficult than writing the code in the first place. To implement testability and then write tests, one needs first to understand the architecture of the application under test. But testing also requires doing hard things — like input partitioning and path reduction — that are beyond the scope of the application. The reality is that to get good tests, you’re going to have to ask some of your best people to work on the problem (instead of having them work on user-facing application features). Yet many organizations seem not yet to have recognized this.
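To make “input partitioning” concrete, here is a minimal hypothetical sketch (the function and its tiers are invented for illustration): instead of testing every possible value, the tester divides the input domain into equivalence classes and checks one representative from each class plus the boundaries between them.

```python
# Hypothetical function under test: a tiered shipping rate (assumed spec).
def shipping_rate(weight_kg):
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5
    if weight_kg <= 10:
        return 9
    return 20

# Partitions: invalid (<= 0), light (0, 1], medium (1, 10], heavy (> 10).
# One representative per partition, plus the boundary of each:
assert shipping_rate(0.5) == 5
assert shipping_rate(1) == 5       # boundary of the light tier
assert shipping_rate(5) == 9
assert shipping_rate(10) == 9      # boundary of the medium tier
assert shipping_rate(11) == 20
try:
    shipping_rate(0)               # representative of the invalid partition
except ValueError:
    pass
```

The analysis that produced those partitions is the hard part Sussman is pointing at: it lives outside the application code and requires understanding both the spec and the input space.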


Dan Bartow, VP of Product Management, SOASTA:

In my humble opinion, lack of speed is the biggest weakness in the way companies are currently testing software. It’s a serious problem that has plagued testing since the beginning of software, and it’s becoming more of a pain point than ever. Most companies have slow workflows, cumbersome testing tools, and out-of-date processes that prevent testing from keeping up with, or even staying ahead of, the SDLC… which is where it needs to be. Initially I thought that I might answer with ‘companies are not doing enough testing’, which is true. However, I think the real reason companies are not doing enough testing is that the current widely accepted methods for testing are too slow.

Not long ago, automation was the hottest advance in testing. But just as automation started to catch up with agile software development, the SDLC surged forward with more speed than ever with new advances in continuous integration, train releases, and so forth. I believe that speed must be a key focus in testing in the next few years for it to remain viable as a critical and valuable component of software operations.


Rex Black, President, RBCS:

I’m not sure I can pick just one as the “biggest weakness.” However, from a test management perspective, a major weakness is the paucity of good metrics in common use, and the weakness of the metrics that are used. This is especially true for process metrics, where people are trying to understand the effectiveness, efficiency, and satisfaction associated with their current test process. It’s also true of project and product metrics, especially the types of test metrics that are reported in project status meetings. Some people gather and report too many metrics (often on the wrong things, given the audience). Some people misunderstand what their metrics mean. Some people jump to conclusions about metrics.

This is unfortunate, because we’ve found with our clients that it is possible to define and implement meaningful, helpful metrics. I’m currently working with a client to define service level agreement metrics for a large test outsourcing request for proposal that they are assembling, while working with another client to define the key process indicators for their integrated testing strategy. Metrics definition is challenging work, but it is worthwhile. When you manage with facts and data, you can make smarter decisions.


James Sivak, Director of QA, Unidesk:

It is hard to come up with the biggest weakness across the board as software is so varied. But perhaps there is a common theme that is apropos. I believe that it lies in the fact that many, many companies treat testing as the final step in the software development process–in their eyes as a way to assure that the product is of high quality. This points to the premise that companies believe that quality can be tested into their product.

Rather than looking inward to the process of building quality software, they look to the test team to “break” the software and maximize the found bug count. Of course, the software is already broken by the time it reaches the testers.

Thus the weakness lies in looking at testing as an adjunct activity, separate from development. Time after time, projects end up late because testing has found issues at the end–thus of course putting the blame on the testers for making the project miss its deadlines. This weakness is also independent of the development process: testing only at the end of a Scrum sprint is no different from testing at the end of a waterfall project; it varies only in scale.

Testing has to be an integral part of developing software and not a separate phase. When this approach is taken, product quality is owned by everyone on the team. It is easy to state, but hard to put into practice because of long standing preconceived notions that developers and testers are better kept apart.

What is depressing about this weakness is that it has existed for so long–prominent QA and testing luminaries spoke about this many years ago, and continue to discuss this issue at yearly conferences. The problem is convincing the executives that quality cannot be tested into the product. As Edsger Dijkstra said, “Program testing can be used to show the presence of bugs, but never to show their absence!”


Cem Kaner, Professor of Software Engineering, Florida Institute of Technology:

I don’t know how to estimate “biggest,” but I think an ongoing problem is reliance on relatively low-skill testing. Too many courses, conference talks and books focus on testing basics, or on process, dogma and culture wars instead of skill. We often draw a contrast between exploratory and scripted testing, emphasizing the greater OPPORTUNITY for skilled testing in exploration. I think the contrast is valid, but exploration isn’t magical. An explorer who doesn’t know much about testing probably won’t explore very well. And a skilled test designer can do a lot with manual or automated scripts. I think the contrast also hides other issues. For example, neither approach takes us to high-volume automation, to test designs driven by models, or to creative code-level regression.
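As a rough illustration of the high-volume automation Kaner mentions (a hypothetical sketch, not a description of his technique): generate thousands of random inputs and compare the implementation under test against a trusted oracle, which exercises far more of the input space than any hand-written script.

```python
import random

# Hypothetical function under test: a simple insertion sort.
def my_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

# High-volume check: 10,000 random lists, with Python's built-in
# sorted() serving as the oracle.
random.seed(0)
for _ in range(10_000):
    data = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
    assert my_sort(data) == sorted(data)
```

Designing the generator and finding a usable oracle are themselves skilled test-design work, which is Kaner’s larger point about where the craft needs to grow.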


What do YOU consider to be the biggest weakness in the way companies test software? Add your thoughts to the conversation in the comments section.




  2. Mark says

    The ideas in the main article all seem very valid to me.

    Sadly, many of the comments made in response to the main article seem to have missed the point completely.

    As has been said….testing well is hard, much harder than many of you seem to understand.

    Even though I reject the term ‘manual tester’, I will use it here for clarity.

    Manual testing is already a highly technical job which requires technical skills and job focus which both overlap and are distinct from development skill and focus.

    To equate manual testing to monkey testing is an insult to your own humanity and totally misunderstands manual testing.

    Good manual testing is a grey box method; it requires intelligence, domain knowledge, test knowledge and technical knowledge to do it right.

    Just because a manual tester is not writing code does not mean they are not technical. A good manual tester is an expert in a difficult technical field….it is just a different field from development.

    Having worked with many ‘Developers in Test’ who fail to deliver on any level, I would hire a seasoned manual tester for a test role over a development person any day.

    Finally, all contexts are different. Just because Google does things one way does not mean it will work for you.

    We can ‘crowd test’ a social media app; we cannot ‘crowd test’ a payroll or a train control system. This is just one example of how you cannot apply practices out of context and expect them to work.

  3. Adrian Hague says

    I’ve spent just over 25 years in various QA / QC positions, testing everything from petrochemical production processes to pharmaceuticals and games.

    Certainly with respect to the (UK) console games industry, the biggest weakness in the way companies test is not the testing itself, but the personnel. In game development, the lowest prestige role is that of the tester, probably due to the misconception that ‘anyone’ can do testing (which is true, anyone can do testing – badly).

    The strategy of most publishers is to hire a group of (typically young) people, with little-to-no experience. When the project is complete, the QA staff are laid off. This is because it is expensive to keep QA staff around during the initial phases of design, when there is nothing for testers to do.

    Furthermore, unlike pretty much all other job roles within the industry, there is no clear career path for testers. Most people get into testing as a springboard to another position, because testing itself is seen as a dead-end. One *very* large publisher that has been in the business for over 10 years, only last year initiated a career progression structure for QA staff. In many ways, the biggest impediment to QA career progression in the games industry is the industry itself.

    @ARC – Must admit, I’m a Black Box testing advocate. If testers are as conversant with programming as the coders, what’s to stop them from making the same mistakes? The strength of BB testing is in its ability to test in unexpected ways, ways which may be curtailed by over-familiarity with the subject. YMMV, natch :)

  4. Shahied Luddy says


    Also see Tim Coulson’s comment, which I think is very true.

    With regards to ‘technical skills’: no doubt it can mean gaining technical skills in your area of testing, but in my view it can also mean skills of the kind Tim mentioned. It can also mean business analysis skills + pure testing skills.

    Note that not all testers are destined to become developers; some choose testing as a career. I moved from being a developer to being a tester close to 13 years ago.

    Sure, in black box testing (where most of my views come from), some basic SQL scripting skills, web.config knowledge, IIS knowledge or Unix skills can be helpful depending on your area of work, but in my view this is mostly on-the-job training and getting to know your environment at work.

    Nothing stopping you from broadening your own skills though, but I do not think learning a developer language will make you a better tester ;-)

    So, as I stated before, from my side, test estimation is probably one of our biggest weaknesses + what Tim said + the inferiority complex on our side as testers versus the domineering complex on the developer side.


  5. says

    I agree with much of what has been said, but I actually think those are nth-order effects of the “biggest” problem: the overwhelming number of testers have no idea what is leading “the bosses” to sign their paychecks, and “the bosses” have no idea how to ask for (let alone get) the value they’re paying for.

    Yes, low skill (and the others from above) could at one point have been how we got here, but that’s all “chicken and egg” now.

    For more, see my recent blog post: A Context-Driven Approach to Delivering Business Value (http://scott-barber.blogspot.com/2012/03/context-driven-approach-to-delivering.html)

  6. ARC says

    I guess that is what I am proposing when I said that the best testers I’ve met were the best developers in the group. And if we reread the experts’ statements, they mostly refer to lack of technical skills.

    Let’s see what these folks are really trying to say.

    Michael Bolton said:
    - Why are testers so often isolated from actual end-users?
    - Real testing, to me, should be based on investigating how the software allows people to deal with what we call “exceptions” or “corner cases”.
    - we testers—and our organizations—could learn a lot through direct interaction with people who use the software every day; by close collaboration with technical support; and by testing rich and complex scenarios that are a lot closer to real life than simplified, idealized use cases.

    Well guess what? Knowing how the end user will use the product is what developers are supposed to do. Exceptions and corner cases are hard to find at the highest level, but can be found at the lower unit and subsystem level. Take the performance and data validity of database entries, for example. It is not feasible to test corner cases at the user/GUI level to see whether database ‘sharding’ is happening correctly, whether load balancing is triggered, or whether data integrity is sound.

    James Bach said:

    - chronic lack of skill
    - coupled with the chronic lack of any system for acquiring skill.
    - Excellent testing is quite *hard* to do.

    Bingo. LACK OF SKILL. The lack of system for acquiring skill is a management issue as well as an HR issue. Perhaps a revolving assignment of developer-test might fix this lack of system? I don’t know. But obviously, LACK OF SKILL is a real issue.

    Bach talks a lot about skill development and not certification (‘ceremonial testing’).

    Noah Sussman said:

    - testing is much more difficult than writing the code in the first place. To implement testability and then write tests, one needs first to understand the architecture of the application under test.
    - testing also requires doing hard things — like input partitioning and path reduction — that are beyond the scope of the application. The reality is that to get good tests, you’re going to have to ask some of your best people to work on the problem (instead of having them work on user-facing application features).

    Doing the hard things beyond the scope of the application requires technical skills, which developers have and which architects can perform superbly. Can a QA engineer with zero development skill tackle this?

    Dan Bartow said:
    - lack of speed is the biggest weakness in the way companies are currently testing software.
    - Most companies have slow processes, cumbersome testing tools, and out-of-date processes that prevent testing from keeping up with

    Agree. A computer’s computational speed is much faster than a human’s. How much can you have the computer test versus people doing keyboard-and-mouse testing? Cumbersome testing tools and out-of-date processes can be alleviated if you have people in house to create in-house tools, so one is not at the mercy of these 3rd-party software tools. Let the computer do a lot of the testing.

    Rex Black :
    He only wrote about metrics, and metrics are like statistics: one can use them for the wrong reasons.

    James Sivak, Director of QA, Unidesk:

    - Testing has to be an integral part of developing software and not a separate phase. When this approach is taken, product quality is owned by everyone on the team. It is easy to state, but hard to put into practice because of long standing preconceived notions that developers and testers are better kept apart.

    And that is my point. SQA engineers have to move -> development skills, and developers -> test design skills. There should be NO DELINEATION between the two (which is why Google advertises QA roles as Software Engineers ‘In Test’. They are still developers).
    Cem Kaner:
    I already mentioned his hint at code-level regression. One needs coding skills for that.

    So you see, all these experts in the field see the problem, and big players are doing something about that problem. The responses here show that the mindset is that of holding on to the past and keeping the delineation bold and black. Well guess what? Time will pass you by (if not already). My sister applied for a QA engineer role at Apple last year. She was asked about kernel coding and testing. Yes, technical acuity as sharp as the developers’ is being sought. Eventually, we all own the quality of the code and the development of the code.

  7. H. Hamid says

    @ARC & Shahied Luddy,
    The arguments presented by both of you make much sense in their given context. However, one thing which really bothers me (this should be discussed in a separate thread too) is: are we providing enough space, acceptability, guidance and opportunities for both roles (testers & developers) to merge and evolve together (conveniently) to face the new challenges? This should be a level playing field for both sides, with neither role dominating the other (just because they know how to code).

  8. ARC says

    @Shahied Luddy,
    I guess this really depends on where you worked and the developers you’ve met. In my 20+ year career, the best developers were the best testers (paranoid about their check-ins, employed lots of unit and subsystem testing, very up-to-date on test technology). Again, my point of view is from the big internet web services and games (Google, Salesforce, Zynga, etc.). The type you mentioned (no dev experience) are from old development processes (like in HP printer groups, where time to market spans six months to years and the development process is a combination of iterative and waterfall). The QA engineer career has to evolve and be at the same level as the developer (and that is happening, as in Salesforce and Google), both in terms of reputation and technical acuity and, with that, salary. The monkey testing will keep the role playing second banana (and as I said, this is an HR recruitment problem, because you get kids out of college who will conditionally do QA, but only as a stepping stone to development). Kaner hints at this need: “to test designs driven by models, or to creative code-level regression.” Code-level regression or TDD by models. Without dev experience, how will you speak that language? How will you stress the system without architecting harnesses and stubs and mocks in lieu of real systems with a time to market in days or weeks? The USER ACCEPTANCE testing is really in the realm of the product engineer and stakeholder, and of everyone in the company. The QA engineering field is evolving. The bigger players are seeing to it that it does.

  9. Shahied Luddy says

    I tend to agree with: Cem Kaner (on estimation and lack of ‘skills’ training), James Bach (on testers not having any idea on what they are trying to do) & Michael Bolton (on convincing customers that they are creating the bugs).

    I also need to point out the difference here between manual & automation, as lots of the responses to this blog, I can see, are from automation testers who state that testers need to know design patterns, development… the works! Maybe for automation testers, I do not know, but in my experience definitely not always for a manual tester – sometimes the best manual testers are those who do not know anything about development but more about how the end user businesses work, as they will simply ask more ‘silly’ questions, and this is what I want. I do not want a tester who knows the entire design pattern, etcetera, only to end up testing according to what was developed and not according to what the business actually wants.

    In my view probably our (test teams) biggest weaknesses are:
    - Test estimation – for far too long we have been letting developers or developer-type project managers decide on how much test effort is involved. We have even accepted their test percentage given in accordance with the amount of development that went into builds… this used to be around 25% test effort relative to development time; for example, they used to state that if the DEV took 100 days, then 25 days is fine for testing! At the last company I worked for, I actually changed this to more like 40% of DEV time, and in some cases close to 50% of DEV time, especially when new technologies were involved that our developers had never been exposed to. Testers also do not have any specific formulas to work from; sure, the ISEB preaches the 1-2-3 rule in test estimation, but like Cem Kaner stated, this is very basic, which basically resulted in me just creating my own little formula [which I will not explain here now, as it would make this blog way too long :-) ];
    - Testers not being able to read and understand a functional specification and compiling test cases thereof – even the quality of the test cases;
    - Testers that really just do not know what they are testing and are merely following test cases with test steps created by an analyst;
    - Testers that think they know it all and that are not willing to learn or adapt to change – never ever state that you know it all!;
    - ‘Developer’ type testers that test according to how the application was developed and not according to what the end user business actually wants;
    - Picking the correct resources at the interview process (I do admit this is very difficult, as you do get some good ‘talkers’ out there), so it is always good to give them an exercise to do to back up what they state in interviews – this can be anything from a 1-hour test of a ‘Login’ screen, where they need to summarize what test cases & steps they followed, to a full ‘take home’ exercise, giving them a week ‘part time’ to complete it (and then asking them questions afterwards on this exercise to make sure they understood what they were writing and that one of their ‘buddies’ did not just do it for them).

    Recently when I left my previous company I had to give them some pointers on what to look for in a tester for future interviews and here are just some of what I stated (that might also help in getting rid of some of the weaknesses – linked to the last weakness I stated mostly I guess):
    1. Give them a test to do of any test application together with a requirements specification. This test will allow you to see if the individual can read specs properly + how their writing skills are, whether the next person can understand them, and whether they test for the correct things;
    2. What I normally would look for is as stated in point above + how they would be able to mentor, guide and communicate project information verbally as well as in writing.
    3. ONE OF THE MAIN POINTS: remember that the individual does not necessarily have to come from a DEV background; in fact in some cases I prefer them not to, and I had my reasons, the main one being that normally a DEV tester would test according to how the application was coded and not what the actual spec stated. A DEV tester also does not always think of the usability of the application, user friendliness, etc. My testers need to think like a tester and not a DEV!
    4. Another point: I’d prefer a tester to disagree with a DEV rather than just agreeing to everything a DEV states (you could somehow add that to your interview questions – whoever does the interviewing);
    5. Very important to always have an existing tester in the interviews as well and not just DEVS.

    I hope all the above helps :-)

  10. H. Hamid says

    I am not sure about the biggest weakness; however, the biggest strength would be empowering a tester to raise concerns and questions about anything and everything designed and developed during a software development life cycle. An empowered tester will be able to add more value to a project than a dummy tester. So, in my view, having less empowered dummy testers on your team would be the biggest weakness.

    @Mihai – If we all agree that testers should be writing code just like developers and there should be no boundaries, then why limit the testers to the skill of coding only? I believe that testers should do business analysis, technical documentation, requirement analysis and other such roles/activities involved in a software development life cycle, and there should be no boundaries at all! (uTest, this is a topic worth discussing too)

  11. Mihai says

    “The Skill” is the cornerstone of performing the testing.
    The common practice is this: “You got a skilled tester? Baptize her / him as Developer.”
    Today, a tester needs to have skills which are developer-specific. (And the other way around too!) Not only to be able to talk the same language, but to execute the same activities.
    The tester needs these skills to be able to build her / his own tools, to be able to debug, to be able to develop the test scripts before the developer implements the functionality.
    In most companies the testers are second-class members of the development team (in Agile) and they do not participate in the design and implementation process.
    This is the reason the testing happens at the end of the Scrum sprint! Not the process, but the testers’ capability to be involved in the design and implementation.
    The skills difference between testers and developers should diminish. All of them need to perform all the activities related to software development: design, implementation, testing, etc.
    Am I too extreme?
    The practice will decide.

  12. Dayle Fish says

    We need to get these thoughts out to the entire community. Testing has always seemed to be in reaction mode, since design is always first to acquire new tools/ways to build. Production managers need to sense this and prep testing very early in the development cycle so they can broadcast the alert for new training requirements/concepts. Methodology needs to be able to flex, and of course there is always $$$$. What is available to support this requirement? How is it to be done? There are only so many hours in a day, which also adds to the problem. These problems have to be fixed from within. You have made a commitment to an employee; they are now in your culture. Challenge them to ramp up to new ideas and provide them opportunities. It is a win-win situation.

  13. Mitch says

    Terrific article. I used it at the bi-weekly Test Meeting for all the testers in my company. First we discussed each of the guest answers and whether they were a problem for us. It sparked a lot of ideas.

    Then we went around the room and everyone raised their own “Biggest Weakness in the way *our* company tests.” It brought to light many issues… “testers not cross-training enough, getting too specialised,” “too much multi-tasking, need to schedule time better for testers not to be interrupted so much” and so forth.

    Finally we came up with actionable tasks to try and address each point, and we’ll follow up on our progress in the next bi-weekly meeting. As the wise man says, “I don’t want to make the same mistake twice… I want to make *all new* mistakes.”

    Thanks for sparking these ideas and helping to make my testing world a better place.

  14. Javed Kutty says

    Many of the weaknesses have been very well articulated, and I agree with them.
    One of the biggest weaknesses I have seen is the testing group working in isolation from other groups. There is an ecosystem that testing teams need to work with and leverage, and many a time this dependency is not well managed. We need to take a life-cycle view of testing rather than a “siloed” view.

  15. says

    In my humble opinion, the biggest weakness in the way companies test is that “testing” is always considered a support unit rather than a delivery unit by management. As a result, none of the test artifacts are ever published or delivered to the customer. Due to this:

    1. Testing becomes a lower-priority job when there is deadline pressure.
    2. Management is not worried about the quality of test artifacts, and hence in turn hires less-skilled workers or doesn’t care much about applying relevant metrics.

  16. says

    In my 1.5 years of experience (it should really be 1.5 years of learning), I feel that “passion” for software testing is the biggest strength, and it is the biggest weakness as well, depending on the context. Most testers are very passionate about it, while some are there just to earn their living.

  17. says

    edit: (and the lack of skill in either the testers or the companies and how they approach this if it’s needed; which is how i read this article)

  18. says

    @ARC, ah, that makes more sense. My mental model for testing is geared towards gaming in particular, but certainly internet and mobile projects are very different from projects with 2+ year lead times from design to the consumer. Automation testing is fantastic, but non-automated testing (and the lack of skill in either the testers or the companies, and how they approach this if it’s needed) is, and probably always will be, necessary for many projects. Anyway, back to work!

  19. ARC says

    Thank you for your opinion, but my assessment has more to do with internet and mobile application development, where Scrum and other rapid development processes are employed (one- or two-week sprints). The biggest players like Google, Facebook, Zynga and Salesforce have very highly technical ‘test’ engineers (who can often run circles around hotshot new developers in terms of coding efficiency, coding standards and algorithms). User acceptance testing is performed by all (‘eat your own dog food’). It is a changed world, I am sorry to say, and those who hang on to the past are quickly left behind. Why do you think a ‘lack of skill’ is mentioned here at all?

  20. says

    @ARC, I have to disagree with your assessment. I think a tester should be familiar with things outside their scope and in scope for engineers or programmers, but they shouldn’t have to know how to code or otherwise hold a fluent conversation about C++, bash, or whatever else. Test engineers (those who can code, etc.) are great and needed, but if they are all that makes up your testing group, you run the risk of having them not think like an end user, and in the end it is the end-user case that is most important (re: Richard’s point). I will take five great testers and one truly great test engineer every time over five “great” testers who are great because they can also use a bash shell or some other such technical prowess.

  21. Deniese Chinnappen says

    I tend to agree much more with Bach and Sivak, but I want to add lack of passion and creativity to that.

    In my 13 years working in the testing field, I’ve had the opportunity to make mistakes and learn from them, learn from others’ mistakes, and, more importantly, channel this towards further developing my technical testing skills.
    In the early days, when testing for us meant understanding the inner workings of the software, ensuring we were prepared to handle what was coming, and really getting into it, it made all the difference. Our software was never of poor quality (it wasn’t the best either), but highly skilled testing coupled with a creatively passionate mind is what added the cherry on top.

    Without the passion to keep pushing the limits and the creativity to come up with innovative ways to break software (and the clichéd “think outside the box”), it’s the same old, same old.

    Add to the mix that quality is expected to be tested in, and you’ve got a ticking time bomb waiting to go off at the most inopportune time… and I really do mean “inopportune” ;)

    We often hear from management that quality is everyone’s problem and we all need to own it, but come on, let’s be realistic: it is seldom anyone but the testing team left holding the hot potato. Sivak is right; it’s easy to state, but that much harder to put into practice, as clearly we are a necessary evil. Like he so boldly points out about defects: ‘their presence, never their absence’…

  22. says

    When doing manual testing, the role of bean counters overrides intelligent testing. Since hitting the “numbers” goal is paramount, simple test cases become the design rule.
    Really productive testing takes time and immersion in the software under test. Taking the time to do testing right is not popular with management that sees testing as an obstacle to releasing the product.
    Hidden in the background is the tester’s ability to represent different user types in addition to the “standard” scenarios. User-type testing, including naive, power and malicious users, exposes different issues. Simple by-the-numbers test cases will not expose many of these issues.

  23. ARC says

    Lack of skill. I’ve interviewed lots of QA engineers, and many have different skill sets, some surprisingly with very little technical programming experience. I hate to say it, but I think the test engineering field has to evolve into one where a tester cannot be in the test role unless he/she has development experience in the language of the group he/she is supporting (one who can write unit tests, implement a continuous-integration build and test process, find holes in the code, etc.). That, plus experience in the market the product is targeting, so one can think outside the box and form test cases around the questions the product is trying to solve. Google does this, hiring Software Engineers ‘in Test’ (they code heavily to test). So this ‘lack of skill’ is really a recruitment/hiring issue. Nowadays, you almost have to hire someone who is a developer with a passion to test. How easy are those to find?

  24. Craig Otto says

    I have a question for Tim Coulson. In your reply you mention a course; I am wondering which course you are talking about. I am that guy you mention in your reply: I was stuck in a testing position with NO testing experience whatsoever. “Here is a computer, now figure it out.” I like doing it, but I need more direction and skill sets. What would you suggest?

  25. says

    I think Dan Bartow hinted at what I consider one of the biggest weaknesses in testing. Many companies waste large amounts of time building test tools and testing environments, time that should be spent creating more tests. There are a plethora of SaaS and PaaS products out there today that are relatively cheap (a fraction of an A+ test engineer’s salary) and, more importantly, maintained by entire companies.

    For example, I see absolutely no reason why a company needs to build out its own Selenium server farm when there’s Sauce Labs, or host its own bug database when there’s Atlassian OnDemand. There’s even a service out there called CloudBees that handles deployments using Jenkins.

  26. davep says

    If designers, developers and testers realise that they are all trying to achieve the same thing, and get together at an early stage to discuss how each sees the finished project, they can get rid of a lot of defects and help create better tests.

  27. says

    My 25 years of experience have taught me that the main problem is twofold:
    1) A lack of fundamental skills.
    2) No aptitude for software testing.

    Like a great athlete, it takes both attitude and aptitude.

    I agree that lots of folks can do the job in “OK” fashion. But the great ones are few and far between. The great ones become great because of their attitude and aptitude.

    There are very few, if any, degree programs focused on software testing. I see companies take virtually anyone, stuff them into the role and expect them to perform. Then the only choice those people have is to figure it out as they go.

    I find it very troubling when I see seasoned testing professionals who really don’t have a clue about testing. They have 10 years of experience, but it is 10 years of experience in doing it poorly.

    Once we stopped and looked deeply at the issues and real root cause, the course became crystal clear. We seek people (experienced or not) that have the right attitude and aptitude and we train them in rock solid foundational skills and concepts. These are a combination of text book and real world concepts on how to get it done correctly regardless of the culture and constraints.

    This model has confirmed that analysis through the exceptional throughput and quality of deliverables to our clients. These folks may not be seasoned professionals, but they are great at the work of software testing.

  28. says

    Excellent article and insights throughout. I can think of one thing that is associated with a few of the insights above but not directly discussed. The pay for testing is often quite low, and I think this is because of the point of view held by developers, executives, etc. However, and I’ve seen this time and again, you get what you pay for. There are talented testers out there who can be a godsend to a project and company long term, but more often than not you won’t get these testers, or, if you do, they won’t stick around long enough to become a valuable member of your company.

  29. Manoj says

    Great article! I agree with most of them. In my view, a tester should be able to speak the language of a developer in terms of the architecture of the system, design patterns, component designs, reusability, class structure, algorithms, etc. If not, the development team will always see testers as a separate entity. To gel well with the development team and become part of the SDLC from the start, learn to develop before you test.

  30. Clarissa says

    Great insight! I’m trying to work on improving my skillset/knowledge. If there are any other books, articles, conferences, etc., you can suggest, PLEASE let me know.

  31. says

    I’d have to agree the biggest weakness is the lack of a system for acquiring skill. To me that includes the focus on low-skill testing and the lack of resources for improving those skills, including books, conferences, and courses.

