What better way to end a great year of Testing the Limits interviews than to pick the brain of Matt Evans, QA Director of Mozilla. His 20+ years of software testing experience include stints at Palm, where he managed the quality program for the WebOS Applications and Services of the Palm Pre smartphone, as well as Agitar Software, where he helped pioneer automated test generation from Java source code. Today, Matt is recognized as one of the foremost experts on open-source development and crowdsourced testing.
In Part I of our must-read interview, we get his thoughts on the diversity of the testing profession; the importance of developer-written unit tests; the evolution of Mozilla’s testing community; the biggest myths of crowdsourcing; the unique challenges of mobile testing; and more. Be sure to check back in tomorrow morning for Part II of the interview.
uTest: You’ve been all over the tech spectrum throughout your career: You’ve worked at web companies and mobile companies, startups and enterprises, open source and closed source. But during that whole time, you’ve always been involved in testing and QA. What’s kept you in the testing space for the last 20 years?
ME: There are several reasons. First of all, software testing is a huge challenge, and it takes a lot of different intellectual skills to do it right. It really boils down to asking the right questions about the application under test, and continuing to do so. Testing an application well requires you to look at functionality from a lot of different perspectives: What are the different types of users? How will they go about using your product? In what environments and under what conditions will users expect your application or product to work? Drilling down on these questions, and ultimately coming up with the test cases and test data to ensure you are adequately covering these conditions, has always been very stimulating and rewarding to me throughout my career in the testing field.
Secondly, the exposure you get in the testing field is incredible. You can typically explore the various technologies incorporated in an application and become well-versed in each. In fact, to do the job of a tester well, you are required to develop a solid understanding of the technologies used in creating the application and the influences of the environment in which the application is intended to operate. The more you understand these technical aspects of the application under test, the better you will understand the functional dependencies and environmental conditions under which you must test it. In addition, you also must interact with the many players and stakeholders on a project. Obviously, at the top of the list are the users and customers of the application. You will need to understand their expectations and usage of the product, and translate those into testable use cases. Your relationship with development is also key. Providing the developers timely, contextual, and actionable feedback on the health of the application is critical to any software release. This exposure to technology and to the various players on software projects has been key to my continued passion for software testing.
Lastly, there have always seemed to be good opportunities in the software testing area, whether it be traditional black box testing, test automation, or testing tools development. In my experience, the need for good, qualified test professionals at every level has been pretty consistent in good and bad economies alike.
uTest: For a mainstream web app in 2010, what’s the appropriate mix/interplay between automated functional testing and manual testing (both test case execution and/or exploratory testing)?
ME: It really depends on the state of the software project. Hopefully, testing and test development are done at an early stage of the product life cycle, and you have the time to develop test cases and write them as automated tests. With respect to automated tests, in my experience most of the new bugs are found at the point of developing the test cases and during the initial runs of the automated scripts. Once these tests are running correctly, their future value is directly related to how often they are run on the updated code base. Ideally, they are run on the developer’s desktop before check-in as well as part of continuous integration. Actually, these days I think you are at a great competitive disadvantage if you don’t have a robust practice of developer-written unit tests and functional automated tests, all under the control of a continuous integration system. If you don’t have that in place, you need to invest in it now.
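The practice Matt describes — developer-written unit tests that run on the desktop before check-in and again under continuous integration — can be sketched with a minimal example. The `apply_discount` function and its tests below are hypothetical, not drawn from Mozilla or Palm code; they simply illustrate the kind of fast, self-checking test a developer would run locally and a CI system would re-run on every update:

```python
import unittest


def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    """Developer-written unit tests: run before check-in, then in CI."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(19.99, 0), 19.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)


if __name__ == "__main__":
    # exit=False lets the suite run without terminating the interpreter,
    # so the same file can be imported or extended by a CI harness.
    unittest.main(exit=False)
```

A CI server (Jenkins, Buildbot, or similar) would simply invoke the project’s test runner on every check-in, failing the build when any assertion fails — which is what makes the tests’ ongoing value proportional to how often they run, as Matt notes.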