Back for another round of Testing the Limits is Scott Barber, the Chief Technologist of PerfTestPlus. A speaker, author, teacher and entrepreneur, Scott has one of the most impressive resumes in the business, particularly in the realm of customized testing methodologies, embedded systems testing and personal security systems, among other topics. To learn more about Scott, you can check out his background, or better yet, read his blog.
In our latest interview, we get his thoughts on the unhappy software tester; things about testing that need to die; the difficulty of the agile transition and other topics. Enjoy!
uTest: You sparked a lively debate in response to an article that claimed the happiest job in the world is that of a Software Quality Assurance Engineer. For those who haven’t read it, what was your main objection? And what, if anything, can testers do to be happier at their jobs?
SB: Yes, yes I did. I mean, seriously, doesn’t a bunch of testers taking an anonymous survey in which they basically all agree that something is *good* sound a bit suspicious to you? After all, we are all professional critics! Besides, at least in my experience, there are a whole lot of testers who feel under-appreciated, under-paid and over-worked. But those objections are superficial at best since I don’t have anywhere near enough data about any (let alone *every*) other job. I will say that I *hope* there are at least some jobs where a higher percentage of folks *seem* happier.
My *real* objections, however, go like this:
- The claim was based on a survey conducted by CareerBliss.com… basically a job board with the tag line “Find a Happier Job” — as a tester, this alone causes me to question the objectivity of a happiness survey conducted by “Happy People” targeting people looking for “Happier Jobs”.
- The claim was based on a survey of a little over 100,000 total responses — which begs the question, how many “Software Quality Assurance Engineers” are there in the sample? I’m thinking 1% would be on the high side… like *way* high. That doesn’t seem like a statistically significant sample to me.
- To take the survey someone had to be on the site in the first place, indicating a strong likelihood that the person was looking for a job. I don’t know about you, but does it strike you as odd to claim that *any* job is the “happiest” based on a survey of people who are looking for new jobs? I mean, why would you be on a job board if you were happy with your job?
- The description of what a “Software Quality Assurance Engineer” does is provided by some “expert” whom neither I, nor anyone I’ve reached out to about this piece, has ever heard of. The description is both all over the map and mostly not representative of what the Software Testers or QA Teams that I know do. After taking the survey, I find several more, uh, bugs with making such a claim from this survey, such as:
- The job title field was a free text field, so who decided which self-entered titles got rolled up into “Software Quality Assurance Engineer”?
- Since the “expert” who provided the job description for the article was contacted *after* the survey results were compiled, is there *actually* any correlation between what the survey takers whose responses got bundled under “Software Quality Assurance Engineer” do and what the article says they do?
I actually took the survey 3 times, once thinking about my latest long-term client, once imagining my “dream” testing job and once imagining my “nightmare” testing job… and realized that almost 75% of my answers were *exactly* the same and my “score” ranged from about 3.75 to 4.25 on the “happiness” scale. This, of course, indicated to me that very few of the questions had any direct relevance to what makes a testing job “happy” or “miserable” — at least by my standards. Basically, what I object to is questionable objectivity, statistical insignificance, unspecified methods of identifying who/what a “Software Quality Assurance Engineer” is/does and self-selection bias. I mean, I am a tester, right?
Now, lots of folks have tried to make me out to be a “bad guy” by claiming that my point is that testers don’t love what they do. Which is unfortunate because I do believe that folks who are seeking *another* testing job, love testing. This leads to yet another ambiguity in the whole survey & conclusions. Is the implication that we’re very happy doing testing, or that we’re happy employees who happen to be testers? ’cause I know a *lot* of folks who love testing, but who are also really unhappy employees in general.
And as to your second question, I think the key to being happy in a testing job is to be happy as a tester and to stop worrying so much about whether or not you like the decisions the business is making with the information you provide.
uTest: On a similar note, you recently participated in our Testing Roundtable discussion where we asked the question: What do you like most about testing? I suppose we should now ask you the opposite: what do you like least about testing?
SB: Cool, an easy question! What I like least about testing is the fact that so few people really embrace what testing is and what makes it valuable to whom. I admit that there are days when I find myself wishing I’d stuck with Civil Engineering so that when people ask me about what I do, I could say something like “I design parking garages.”
uTest: One of our favorite posts from your blog was the piece from 2011 you did on 10 Things About Testing that Should Die. One of those things was egocentricity among testers (e.g. testers who believe that product revolves around them). Would you agree that most times testers don’t mean to overstep their bounds? If so, what are some ways testers can prevent themselves from encroaching on the domain of others?
SB: Testers want the products they are testing to be very good. This is a positive trait for a tester. I mean, really, what kind of information would you get from a tester who wanted the product they are testing to be mediocre? The companies who employ testers also want the products being tested to be very good. The problem is that the companies and the testers tend to have *very* different definitions of “very good”. Testers want products to be free from bugs. Companies want products to be as profitable as possible as quickly and cheaply as possible for as long as possible.
The egocentricity problem comes in when testers start thinking that their definition of “very good” is right, the corporate definition is wrong, and that it’s their duty to “set the company straight”. In reality, this is not a right/wrong problem. The problem is that the employee and the employer either do not understand one another, or simply do not have compatible value systems.
Testers, like all employees, have the duty to take appropriate action against illegal or unethical behavior by their employers. Any decisions a business wants to make that aren’t illegal or unethical are entirely, well, their business. If you are a tester and find yourself tempted to try to “set the company straight,” I recommend redirecting that energy into finding a job with a company that embraces values more in line with your own. The way I see it, either way you’re going to end up with a new employer. The only difference is whether you’ll be leaving with a crappy rubber chicken luncheon or a crappy severance package.
uTest: Back to that list for a second: It’s been over a year since that was written. What else about testing needs to die?
SB: I’d love to say something like “greedy vendors charging insane prices for products that don’t function as advertised”, but I guess that’s not very specific to testing, is it?
What really needs to die about testing is all of the proclamations, unsubstantiated generalizations, closed-mindedness, argumentativeness, debating, personal venom and rubbernecking surrounding the “right” or “best” way to test. I just keep thinking that if all the people touting “right” or “best”, all the people countering them and all the people who can’t seem to look away (probably due to the entertainment factor) would just forget about “right” and “best” in favor of collaboratively figuring out how to produce higher quality software faster and cheaper, maybe, just maybe, we could start making some real progress.
I’ll let you in on a secret. There is no one “right” and there is no single “best” and there never will be. Just like there is no one right or best way to mow the lawn. Some folks do it themselves, others pay someone to do it for them, yet others assign it as a chore for their children. Some use push mowers, some use ride-ons, some even use rotaries. Some people like the concentric square pattern, others like the cross-cut pattern. Who is right? Which is best? Honestly, who cares?!? As far as I’m concerned, there is a very, very wide range of “non-offensive” that the lawn owner can choose before I can even imagine making myself pretend to care. I have my own preferences, sure. But “right” or “best”?!? Can we all just stop it with that now? Please?
uTest: True or false: Transitioning is the most challenging aspect of the Agile methodology.
SB: Assuming that you’re starting from something else? True.
Honestly, I’m not entirely convinced an established organization *can* transition from a functional, but fundamentally “un-Agile” methodology to a functional and fundamentally Agile methodology without experiencing so much cultural change and personnel turnover as to make it virtually unrecognizable as the pre-transition organization. I think that equates to “hard”.
uTest: You wrote a chapter in the popular book How to Reduce the Cost of Software Testing where your audience was executives. We’re going to ask you to generalize for a second: What is the #1 thing that test executives don’t understand about testing?
SB: Executives do not understand that software testing is no different from testing in school. Educators assess students’ knowledge, retention, and synthesis of the material via tests. The results of the tests provide information to the teachers/educational institutions to help them make decisions such as “Is the student prepared for the next phase of learning?”, “Should more time be spent on this topic?”, or “Has the student achieved sufficient learning for us to feel proud of them carrying our institution’s endorsement as having completed program X?”
Testing software is the same. I could take the analogy much, much further, but I’ll spare you. However, I’m going to challenge you to think about that further and see if that leads you to any interesting insights while you patiently wait for me to put the finishing touches on this thought process and post it on my blog. (I know that’s teasing, but that’s why they call statements like this “teasers” <grin>).
Editor’s Note: We hope you enjoyed our latest Testing the Limits interview. If you have someone in mind for our next guest, email us at firstname.lastname@example.org.