Testing the Limits With Matt Evans from @Mozilla – Part I

What better way to end a great year of Testing the Limits interviews than to pick the brain of Matt Evans, QA Director of Mozilla. His 20+ years of software testing experience include stints at Palm, where he managed the quality program for the WebOS Applications and Services of the Palm Pre smartphone, as well as Agitar Software, where he helped pioneer automated test generation from Java source code. Today, Matt is recognized as one of the foremost experts on open-source development and crowdsourced testing.

In Part I of our must-read interview, we get his thoughts on the diversity of the testing profession; the importance of developer-written unit tests; the evolution of Mozilla’s testing community; the biggest myths of crowdsourcing; the unique challenges of mobile testing and more. Be sure to check back in tomorrow morning for Part II of the interview.


uTest: You’ve been all over the tech spectrum throughout your career: You’ve worked at web companies and mobile companies, startups and enterprises, open source and closed source. But during that whole time, you’ve always been involved in testing and QA. What’s kept you in the testing space for the last 20 years?

ME: There are several reasons. First of all, software testing is a huge challenge and it takes a lot of different intellectual skills to do it right. It really boils down to asking the right questions about the application under test and continuing to do so. Testing an application well requires you to look at functionality from a lot of different perspectives: What are the different types of users? How will they go about using your product? In what different environments and conditions will users expect your application or product to work? Drilling down on these questions and ultimately coming up with the test cases and test data to ensure you are adequately covering these conditions has always been very stimulating and rewarding to me throughout my career in the testing field.

Secondly, the exposure you get in the testing field is incredible. You can typically explore the various technologies incorporated in an application and get well-versed in each. In fact, to do the job of a tester well, you are required to get a solid understanding of the technologies used in creating the application and the influences of the environment where the application is intended to operate. The more you understand these technical aspects of the application under test, the better you will know which functional dependencies and environmental conditions you must test the application under. In addition, you also must interact with the many players and stakeholders of a project. Obviously, at the top of the list are the users and customers of the application. You will need to understand their expectations and usage of the product, and translate those into testable use cases. Your relationship with development is also key. Providing the developers timely, contextual, and actionable feedback on the health of the application is critical to any software release. The exposure to technology and to the various players on software projects has been key to my continued passion for software testing.

Lastly, there have always seemed to be good opportunities in the software testing area, whether it be traditional black box testing, test automation, or testing tools development. In my experience, the need for good qualified test professionals at every level has always been pretty consistent in good or bad economies.

uTest: For a mainstream web app in 2010, what’s the appropriate mix/interplay between automated functional testing and manual testing (both test case execution and/or exploratory testing)?

ME: It really depends on the state of the software project. Hopefully, testing and test development are done at an early stage of the product life cycle and you have the time to develop test cases and write them as automated tests. With respect to automated tests, in my experience most of the new bugs are found at the point of developing the test cases and during the initial runs of the automated scripts. Once these tests are running correctly, their future value is directly related to how often they are run against the updated code base. Ideally, they are run on the developer’s desktop before check-in as well as part of continuous integration. Actually, these days I think you are at a great competitive disadvantage if you don’t have a robust practice of developer-written unit tests and functional automated tests, all under the control of a continuous integration system. If you don’t have that in place, you need to invest in that now.

Essential Guide to Mobile App Testing

Testing the Limits With Jeff Papows – Part I

What an honor it is to have tech giant Jeff Papows as this month’s guest for Testing the Limits. As the former President and CEO of Lotus Development Corporation, Jeff is widely credited with having taken Lotus Notes from its initial release to sales of over 70 million worldwide. Currently the CEO of WebLayers, Jeff’s career has also included stints as CEO of both Cognos Corporation and Maptuit. You can read more about his background here.

A frequent guest of CNN, Fox and other television networks, Jeff is also a successful author – having sold more than 80,000 copies of his first book “Enterprise.com: Information Leadership in the Digital Age.” In this interview, we ask Jeff about his latest book Glitch: The Hidden Impact of Faulty Software in addition to other hot topics in the world of software quality. Check back tomorrow for Part II.

uTest: Let’s start from the beginning: What prompted you to write this book? Was it a bug that just made you snap one day, or did you reach a tipping point after years of observation?

JP: Well in the end, busy CEOs write books when circumstances and industry trends pressure them to make a “complete” intellectual contribution to a big problem or trend they feel compelled to respond to.  There are three issues at the root of a meta-level industry crisis that I feel is mounting at present.

  • Technology saturation or ubiquity – As of the first of this year we have a trillion devices connected to the Internet, a billion transistors and/or microprocessors at work for literally every human being on the planet, and thirty billion RFID tags in motion communicating with our computing topologies.  Technology is not just a business-to-business staple anymore – it is truly part of the social fabric of the way we work and live.  With this kind of complexity curve and economic contribution, any large-scale disruption is monumental.
  • Loss of intellectual capital – About 70% of the world’s application inventory and the platform for the majority of our transaction processing is written in COBOL and runs on IBM mainframes.  The other side of the dot-com bubble bursting is that graduating computer science and/or math majors are down by about 37%, and those who are graduating are interested and versed in Java, C++, etc., not COBOL.  Also, for the first time in our careers/lifetimes, C.S. engineers are retiring, aging and dying.  So how do we keep that codified knowledge from walking out our doors?
  • Mergers & acquisitions – In the period following the financial downturn of 2008, the financial services sector has gone through a lot of consolidation.  The result is, in part, the added complexity of slamming together the complex back-office systems of our major banks and financial institutions.

When you combine these factors, the recipe is complete for the digital equivalent of the perfect storm.  To answer the question, that is why I wrote “Glitch”.

uTest: Your book deals with glitches and bugs from both ends of the spectrum – some serious, some funny and some that are almost unbelievable. What was the worst (as in most damaging) glitch that you came across while researching?

JP: That’s easy.  The human suffering and deaths caused by software glitches in Varian cancer radiation medical equipment is the worst!

uTest: What was the worst glitch that didn’t make it into the book?


Testing the Limits With Ben Simo – Part III

In the third and final installment of our Testing the Limits interview with Ben Simo, we go back in time to the early 90s to find out how and why he entered the testing profession. We also rapid fire some questions on his browser of choice, his hardware preferences, hobbies and more. In case you missed them, here’s part I and part II.


uTest: Let’s go back in time for a second: How did you get into the craft? What was the first application you tested? What was testing like back in the early 90s?

Simo: Providence. It was providence that got me into testing.

I was young, in love, and planning to get married.  I had been doing some part time database development work, but needed a full time job before the wedding.  I submitted letters and resumes to dozens of companies. I was willing to do almost anything that would pay the rent.  I lived in a city where the local job market was dominated by defense contractors. I quickly learned that many of them called nearly everyone who applied for anything in for an interview; so they could learn about people and add them to databases of potential hires for matching to work they did not yet have.  These companies would then present these people to the government as their available workforce when bidding on contracts. This made it frustrating for those of us looking for work. It often wasn’t clear, when going in for an interview, if it was for a real job or for a potential position that might come at some time in the future if that company were to be awarded a government contract.

I interviewed with the company for which my fiancée (now my wife of 19 years) Sophie worked. It appeared to be one of those information-gathering interviews without an actual position to fill. I was asked a lot of questions but none seemed related to a specific opening.  At the end of the interview, the interviewer said he’d be calling me.  Time went by without any more contact. Nearly a month later, I got a call asking if I could start the next morning.


Testing the Limits With Ben Simo – Part II

In part II of our Testing the Limits interview with Ben Simo, we’ll discuss whether you should trust automated testing tools; the proliferation of testers on Twitter; the true meaning of “QA”; how testing evolves differently in each company; the long lost Bach brothers and much more. You can catch up on the conversation by reading part I. We’ll wrap things up tomorrow with part III.


uTest: Jon Bach mentioned that changing the meaning of “QA” to Quality Assistance would help outsiders (engineers, executives, et al) better understand the role of this discipline.  Agree or disagree?

Simo: I believe I first heard “Quality Assistance” from Cem Kaner.  I agree with Jon. When testers bear the title Quality Assurance, it often implies that they actually assure the quality of other people’s work. Testers are in a position to assist quality, not assure it. Let’s not assist the setting of unrealistic expectations with inappropriate titles.

uTest: While we’re on the subject, are you in any way related to James and Jon Bach? The resemblance is uncanny.
Simo: I don’t think so. I’m available for adoption if the Bach family is interested. ;)

uTest: You’ve said that you frequently use automated tools, but that you don’t trust them entirely (back to that whole defensive pessimist thing again). What advice do you have for testers and managers wanting to strike a healthy balance? And what’s currently in your arsenal of automated tools?

Simo: My mistrust in tools is based on the fact that tools can’t think for me. Automated checking can only process whatever decision rules someone thought to program when the checks were created. Automation will consistently do what it is programmed to do and consistently not do what it is not explicitly programmed to do. I find test automation to be useful. In fact, there are some things I’d not want to even try to do manually. I do, however, distrust the green bar. When automated checking passes, I ask myself what the automation does not tell me. I also try to keep aware that people who don’t understand what the automation does are likely to assume that it does more than it does.
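Ben’s point about distrusting the green bar can be illustrated with a small hypothetical sketch (the function and checks are invented for illustration, not taken from the interview): an automated check passes only the rules someone thought to encode, so a passing run says nothing about the questions it never asks.

```python
# Illustrative sketch: an automated check that shows a "green bar"
# while telling us less than it appears to. The check only verifies
# the decision rules someone thought to program.

def format_currency(amount):
    # Latent bug: negative amounts render as "$-5.00" rather than
    # "-$5.00" -- but no check below ever exercises a negative amount.
    return "$%.2f" % amount

def check_format_currency():
    # These checks pass consistently, run after run. The "green bar"
    # reflects only the inputs the automation was programmed to try.
    assert format_currency(5) == "$5.00"
    assert format_currency(12.5) == "$12.50"
    return "PASS"
```

Running `check_format_currency()` returns `"PASS"` every time, yet `format_currency(-5)` still produces the awkward `"$-5.00"` — exactly the kind of gap a tester finds by asking what the automation does not tell them.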

Tools are much more than test automation. Tools are essential for testing. I don’t want to test without tools. I have some old programming books that promote testing in which a programmer manually executes code, step-by-step, with pencil and paper in order to verify that the code works as expected. This is manual testing. This is a testing practice that came from a time when computer time was rare and cost more than people. We’d now laugh at someone proposing testing in this manner.


Testing the Limits With Ben Simo – Part I

Our Testing the Limits guest this month is Ben Simo. Known as the “Quality Frog” on Twitter, Ben is one of the most insightful and entertaining testers in the business. A proponent of the context-driven school, Ben has more than 19 years of experience testing software and developing testing tools. He currently lives in Colorado with his wife, two children, two dogs, five cats and fourteen – count ‘em – fourteen goldfish. For the full Ben Simo experience, go to his blog.

In part I of our interview, we get his thoughts on the Worst Bug Ever; his testing philosophy; what it means to be a defensive pessimist; testing certifications, the state of the industry and more. Be sure to check tomorrow for part II.


uTest: Your “Is There a Problem Here?” series has been a big hit in the testing community. What’s the absolute worst bug that’s ever been submitted? And what can testers and developers learn from these types of mistakes?

Simo: Many of the bugs on IsThereAProblemHere.com could be argued to not be bugs. The software works, or catches and reports an error condition, but in a way that unnecessarily frustrates users. My hope is that people involved in creating and testing software can learn from these examples. Rather than only look for the obvious technical bugs, we need to be asking ourselves “Is there a problem here?”

We build software for the benefit of people. Software fails when it does something other than solve human problems.  Although not the worst items submitted, two items come to mind.

The first occurred on Christmas Day last year.  Twitter was full of complaints from people who received Sony’s new electronic book Reader device as a Christmas gift. The device worked, except that Sony was not prepared for the Christmas Day rush on their servers as people attempted to install software and purchase books.  That lack of preparation turned joy into frustration for many new customers. As a performance tester, I take this as a warning to seriously consider what events may cause a surge of demand for the systems I test.

The second problem that comes to mind is one I’ve repeatedly encountered with Blogger’s auto-save feature. I like features that help prevent users from losing their data.  While auto-save features usually indicate that software designers value their customers’ data, Blogger provides a great example of how auto-save can make things worse.  The Ctrl-Z undo option in users’ web browsers goes away after an auto-save occurs.  If a user fat-fingers text in a way that deletes content just before an auto-save occurs, there is no going back. An accidental Ctrl-A instead of a Ctrl-Z or Ctrl-X followed by another keystroke can permanently delete a document in an instant.

uTest: Gotta ask about the “Quality Frog” handle on Twitter. What’s the origin of this moniker?

Simo: A few people have told me “Quality Frog” looks like two random words from a Facebook captcha.
