Last week, uTest hosted a webinar on exploratory software testing with James Whittaker. We received a fantastic response from the 250+ attendees, and we couldn’t get to all the questions before our time was up. Luckily, James was kind enough to sift through a stack of the remaining questions and provide answers to several that jumped out at him.
Also, remember that we’re handpicking five webinar attendees to receive a free copy of his new book on exploratory testing, signed by James.
Q: When making a tour specific to your own application domain, doesn’t that become what is usually called a test scenario? How do you see tours being different from scenarios?
A: Great question, and I cover this in my book. Chapter 4 deals with “Tours” and chapter 5 deals with “Scenarios.” In a nutshell, I see scenarios as more prescriptive than tours. Tours are meant as general guidance, and scenarios, at least in my mind, are more specific. A tour specifies goals and an approach to coming up with test cases; a scenario actually provides an outline of the test cases. Tours leave much more of the actual test case to be constructed as you test. A scenario, in other words, has less variation.
But don’t get caught up in semantics. It’s a continuum of detail really. At one end of the spectrum are fully detailed test cases, at the other is ad hoc testing. Scripts, scenarios, tours, patterns … they all fall somewhere in between.
Q: What is the difference between exploratory analysis and exploratory execution?
A: I don’t like introducing new terms – testing has too many of them already – so I will talk about the concepts here rather than reinforce these exact names. The thought processes that go into exploratory testing are generally considered something you do while you are executing test cases, but this only works with manual testing. In Google’s case, we do substantial test automation, and once the test code starts running, your chance of introducing exploration is pretty much gone. The automation will execute your test case with brute force and little flexibility.
With automation, you have to do your exploratory thinking up front and this is where we came up with the idea of exploratory analysis. Simply put, the idea is to run the Tours in your head and let your thinking inspire your automation. The best example we have within Google is Rajat Dewan’s example he presented at STAR East and explained on the Google Testing Blog.
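One way to picture "exploratory analysis" is to do the tour in your head first and then encode it as an input generator, so the automation carries your exploratory intent instead of replaying one fixed sequence. Below is a minimal sketch of that idea; the system under test (`parse_query`), the tour name, and the alphabet of hostile characters are all hypothetical stand-ins, not anything from Rajat Dewan's actual example.

```python
import random

# Hypothetical system under test: a tiny query-string parser whose
# contract is "never raise, always return a dict".
def parse_query(text):
    return dict(pair.split("=", 1) for pair in text.split("&") if "=" in pair)

def garbage_tour_inputs(seed, count):
    """The exploratory thinking, done up front: this 'tour' is a recipe
    for hostile inputs (delimiters, blanks, control characters), not a
    fixed list of test cases."""
    rng = random.Random(seed)
    alphabet = "ab=&%\x00 "
    for _ in range(count):
        yield "".join(rng.choice(alphabet)
                      for _ in range(rng.randint(0, 20)))

def run_tour(seed=42, count=200):
    """Drive the automation from the tour; collect contract violations."""
    failures = []
    for text in garbage_tour_inputs(seed, count):
        try:
            if not isinstance(parse_query(text), dict):
                failures.append(text)
        except Exception:
            failures.append(text)
    return failures
```

Changing the seed gives a fresh batch of inputs on every run, which is exactly the flexibility a hard-coded test sequence loses.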
Q: Do you have some tips on how to keep testing fresh and new when there is release after release? (To avoid people getting bored and always testing the same things and missing new issues.)
A: In fact, I do. I think this very problem is what I was trying to tackle with the tours. But your question gives me the opportunity to clarify this intent. Static test cases might be fun to come up with, and fun the first couple of times you run them, but running them build after build and release after release not only gets dull, it also introduces the pesticide paradox. The reality of the situation is that test cases, as specific physical entities, are too low level. They specify a precise sequence of user actions. Tours are a higher-level concept: they specify purpose and intent and remain flexible on specific input sequences. In this manner, a single tour represents any number of test cases.
Now the secret is finding the balance. Some test cases are really important as they once found a bug or they represent an important user-initiated scenario. We want to run these no matter how bored we get. But beyond that, tours allow us more flexibility to increase coverage around the specific test cases and supply the variation that will keep our heads from exploding in boredom.
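That balance can be sketched in code: keep the bug-finding and key-scenario cases pinned verbatim, and let a tour supply fresh variation around them every run. This is only an illustrative sketch; the toy `normalize_path` function and the tour name are assumptions, not examples from the book.

```python
import random

def normalize_path(p):
    # Toy system under test: collapse runs of duplicate slashes.
    while "//" in p:
        p = p.replace("//", "/")
    return p

# Pinned cases: each once found a bug or represents an important
# user scenario, so they run unchanged no matter how bored we get.
PINNED = {
    "//etc//passwd": "/etc/passwd",
    "/a/b": "/a/b",
}

def slash_tour(seed, count):
    """Tour-style variation: new paths with clustered slashes each run,
    expressing intent ('stress the slash handling') rather than a script."""
    rng = random.Random(seed)
    for _ in range(count):
        parts = [rng.choice("abc") * rng.randint(1, 3)
                 for _ in range(rng.randint(1, 4))]
        yield "/" * rng.randint(1, 3) + ("/" * rng.randint(1, 3)).join(parts)

def run_suite(seed):
    for given, expected in PINNED.items():
        assert normalize_path(given) == expected       # exact, scripted
    for p in slash_tour(seed, 100):
        assert "//" not in normalize_path(p)           # invariant, not a script
```

Note the tour cases check an invariant rather than a scripted expected value, which is what lets the inputs vary freely without anyone maintaining hundreds of expected results.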
Q: Can you offer any advice for a developer to be a better partner in the testing process?
A: Indeed. But I want to point out that your question asks for advice for devs, not about what test can do to help this partnership (which is the harder answer, so I thank you for that).
I manage a dozen or so projects from cloud to client to back end data center stuff. Some of these have great developer participation and some less so. The devs who are great partners are very involved in testing. They review and provide feedback on our test plans and designs. They become concerned fairly often about whether we are doing a good enough job in test (I mistrust anyone who trusts me and my team too much). They try to steer testers to areas of the product not covered by dev-penned unit tests. They fret more over us finding very few bugs than when we find a lot of bugs (think about that one a moment). They show great interest in what our automation is doing and like to suggest new manual test cases. When they find a bug, they take the time to show it to us instead of just checking in a fix. They invite us to give presentations during all hands and engineering reviews and they take the time to share credit with us when the team succeeds.
I like this question. Maybe I’ll keep thinking about it and make my answer into a paper.
Q: Have you found that your tours work well or help in cases where requirements are sporadic, vaguely defined or non-existent?
A: Having never worked on any other type of project, I can say with some confidence that, yes, they work quite well.
Sincere thanks to James for a great presentation, and to all the attendees for some excellent questions. We always enjoy seeing discussions about testing elevated to a strategic level, and seeing so much passion and interest in the subject. Rest assured, we’ll be scheduling more webinars in the coming year. In the meantime, you can find a library of free resources about software testing, including eBooks, whitepapers and recorded webinars. Have other questions for James? Have suggested topics for future uTest webinars? Drop us a comment and let your voice be heard!