In this month’s installment of “Testing The Limits”, we sit down with Matt Heusser (@mheusser), prolific blogger for STP Collaborative, thought leader, and tester extraordinaire. We’ll discuss the state of software testing, SpeedGeeking, the role of chaos in testing software, and the lack of fistfights at STPCon 2009.
uTest: We loved the SpeedGeeking session you led at STPCon, so we’re going to flip it on you – If you had just five minutes to teach, motivate or inspire the uTest audience about software testing, what would you say?
MH: Well, I’d start by asking the audience what they are doing today – what they feel is their greatest pain point or opportunity – and asking what options they see to improve. Most of the time, I hear that testing is “too slow” or “the bottleneck” or something like that.
So I suggest taking two weeks and actually measuring how the team is spending its time. Oh, not for reporting – it is very important that the team stop the time tracking after two weeks and not hand individual metrics to management for evaluation. Instead, we want to use the numbers for improvement. For example, many of the people I talk to spend 80% of their time or more in meetings, working on documentation, working on compliance activities, doing email, and so on. That only leaves 20% of the time to test! Just pushing those numbers from 80/20 to 60/40 will double the amount of time the team spends actually doing testing.
Another thing to look at is the amount of time spent trying to reproduce defects, document defects, file bug reports, “verify” fixes, and so on. We think of these activities as testing, and they can take a substantial chunk of that 20% – but they are really accidental. That’s not a testing bottleneck – it is a development bottleneck. If test can work with development to improve the quality of the software prior to code complete, that will improve the speed of the whole system. Realizing this, and having a little bit of data to “prove” it, can help the entire system improve.
So if I had five minutes, I would say start by measuring how you spend your time, and ask yourself if this is the best use of it and what can change. Sometimes, the big boss will say “no, we absolutely need you to fill out all seven pages of documentation per test run”, and you can say “ok.” Six months from now, when someone asks why the big project is late, you can point out that the business made an explicit decision to pay the full price of defined process. You presented options and those were not accepted.
That won’t save this project — but it might save the next. It also turns out that actually testing tends to be much more fulfilling than documentation and compliance activities. Who could have guessed?
uTest: Lots of contrasting opinions at last month’s STP Conference. While there were no fistfights (that we heard about, anyway), what did you see as the most contentious issue? And where do you fall on it?
MH: I’m a little sad that we keep talking about best practices vs. context, scripting vs. exploratory. Of course we need a balanced breakfast of approaches to testing, and of course “best practices” is a marketing term that does not exist in the engineering literature. Personally, I’m most embarrassed that you could get the crib notes from any conference from 2004 – or possibly 1999 – and see the same arguments.
Some of the problems are due to personalities. There are a few people who just don’t give credit for past ideas, or who make wild claims without having actually done much software testing. Where do I fall on the issues? Well, let me first say: if anyone claims to be an expert on software testing, one place to start is to go to LinkedIn, look at what they actually claim to have done, and send out a few emails to try to verify those claims. If you can’t verify those claims, or you realize they are written in a very specific way so as to be non-falsifiable, well, that tells you a lot.
I was impressed that we managed not to fight about certification at the conference. There was no certification course before, after, or during, and we didn’t have to spend our time debating its merits or lack thereof – we mostly talked about, you know, actually testing and stuff. In that, I was pleased.
uTest: Congrats on getting your blog “Testing at the Edge of Chaos” exclusively featured on the STP Collaborative. What does it mean to you to test at the edge of chaos? What ideas are you most interested in getting across to the tester community?
MH: Uh, I think my blog has something to do with testing. And, um, like, something to do with chaos, or something. Seriously, I started blogging in 2001 or so to express myself and my ideas. The last iteration of that was my “Creative Chaos” blog, where I covered creativity and innovation in the software process. “Testing at the Edge” is a little more tester-focused, with the goal of covering skills and innovation in the software testing space.
It seems that every year a new batch of graduates comes out of MIT, Carnegie Mellon, and the University of California at Berkeley who know how to automate defined business processes – and they take a look at testing and say “gee, there is a defined process, we should automate it.”
Four or five years later, they’ve learned a bit and turn around and say things like “gee, some of testing can be automated, but part of testing is an investigative, feedback-oriented process.” Then every May a new batch graduates and we start all over again.
I think “Creative Chaos” did a good job in reaching out to that audience, and with the publication of “Beautiful Testing”, we may be finally out-growing it. With the new blog “Testing at the Edge”, I hope to move into specific examples of good testing, how to get better at it, and how to set some appropriate boundaries around the testing activity – and discuss what those should be.
uTest: What’s the deal with your Testing Challenges? Are they still ongoing, or have you simply run out of apps to test?
MH: I generally run test challenges in private, sometimes electronically, as part of a mentoring or training program. I do this both commercially and non-commercially, as part of my zero-profit “Miagi-Do School of Software Testing.” Recently, I’ve been asking people what they want, and the idea of running test challenges publicly – and sharing the answers – keeps coming up. I ran one in October (link) that was well-received. Sure, I can do more of them if there is interest.
As I work full-time for a software product company, I don’t see us running out of apps to test anytime soon. (Laughs)
uTest: A quick hypothetical: You’re banished from the software testing industry for five years. What do you do during that time? And don’t say developer.
MH: I’d probably challenge the authority of whoever is trying to banish me! But I’ll take your question in the spirit you intended it, and answer what my next career choice would be if, for some reason, I chose not to test. That would likely be writing about technology or business.
When you think about it, the investigative journalist shares a lot with the tester. Journalists are paid to look around, find something that seems correct on its surface but has a problem, and uncover information and evidence. They take careful notes, they work on projects that are different every time, they think critically, and they are guided by rules of thumb.
A surprising number of the leaders in the testing industry have an education or background in journalism – both Karen Johnson and Jonathan Bach were trained as journalists before they became testers, and Dr. Cem Kaner has a law degree as well as his PhD in psychology.
uTest: What would the Matt Heusser from ten years ago think about today’s software testing landscape?
MH: Well, let’s see … ten years ago Extreme Programming was a crazy idea that Kent Beck was trying at Chrysler, and “agile” was spelled lower-case and was an adjective that meant ‘bendy.’ The software development landscape was full of RUP and patterns and generalizations and abstractions, and testers were counting test cases. James Bach and Cem Kaner were using the term exploratory testing, but it was far from popular. Today, I think testers have more voices to listen to – the standard school, the context-driven school, and the agile school all understand each other a bit better and offer clearer comparisons. Also, test managers have more options, like the crowdsourced approach or perpetual beta.
Overall, there are more options, the profession is taken more seriously, and we are finally starting to evaluate ideas based on consequences and outcomes instead of ideology. It’s a great time to be a tester. I wouldn’t want to go back.
Check back on Monday for part two of our chat with Matt Heusser. We’ll cover topics like what OS and browser HE uses, how mobile app testing differs from web or desktop testing, and whether great testers are born or trained.