Testing the Limits with Cem Kaner, Author of The Domain Testing Workbook

This month, we revisit Cem Kaner. Cem recently published The Domain Testing Workbook and is working on a collection of other workbooks and projects in addition to teaching several courses at Florida Tech.

In today’s interview, Cem explains his new workbook, discusses why it’s important for experienced testers to keep studying and improving, tells us what’s wrong with the testing culture today and hints at maybe having a solution for the QA credentials battle.

*****

uTest: You’ve written quite a few books on software testing and you had a new book – The Domain Testing Workbook – come out in the past few months. Why do you enjoy writing these books and who are you trying to help?

Cem Kaner: My overall goal is to improve the state of the practice in software testing. How can we improve what working testers actually DO so that they are more effective and happier in their work?

The Domain Testing Workbook is the first in a series that focuses on individual test-design techniques. Our intent is to help a tester who already has some experience develop their skills so that they can apply the technique competently.

What’s wrong with the way domain testing is currently taught?

CK: There’s nothing wrong with the way domain testing is taught. Teachers introduce students to the two basic ideas: (a) subdivide a large set of possible values of a variable into a small number of equivalence classes and sample only one or a few values from each class. This reduces the number of tests to run. (b) When possible, select boundary values as your samples from each class because programs tend to fail more often at boundaries.

In general, students understand these introductions and can explain them to others.

This level of analysis works perfectly when you test Integer-valued variables one at a time. There are lots of Integer-valued variables, and it makes a lot of sense to test every variable on its own, if you can, before you design tests that vary several variables at the same time. So, I think many courses do a fine job of introducing a useful idea to students in a way that helps them use it.
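To make those two ideas concrete, here is a minimal sketch (not from the book) of single-variable domain testing in Python with pytest. The validate_quantity function is hypothetical and stands in for whatever input your own program exposes; it is assumed to accept integers from 1 through 99:

```python
# A minimal sketch of single-variable domain testing, assuming a hypothetical
# validate_quantity() that should accept integers from 1 to 99 inclusive
# and reject everything else.
import pytest


def validate_quantity(value: int) -> bool:
    """Hypothetical implementation, included only so the sketch runs."""
    return 1 <= value <= 99


# Samples from each equivalence class, favouring boundary values:
# below the valid range | inside the valid range | above the valid range.
CASES = [
    (0, False),    # largest invalid value below the range (boundary)
    (1, True),     # smallest valid value (boundary)
    (50, True),    # a representative interior value
    (99, True),    # largest valid value (boundary)
    (100, False),  # smallest invalid value above the range (boundary)
]


@pytest.mark.parametrize("value, expected", CASES)
def test_quantity_boundaries(value, expected):
    assert validate_quantity(value) == expected
```

Five tests stand in for the millions of possible integer inputs, and four of the five sit on a boundary, where failures are most likely.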

However, there is much more depth to the technique than that. Here are four examples:

  • It is common to look only at the input values and decide pass/fail based on whether the program accepts “good” inputs and rejects “bad” ones. A stronger approach goes past the input filter. For example, enter the largest valid value. The program should accept this. Suppose it does. Now continue testing by considering how the program uses this value. What calculations is it used in? Where is it displayed, stored, or compared to something else? Is this largest-acceptable value actually too large for some of these later uses? (A small sketch of this idea follows this list.)
  • There are other types of variables, not just Integers. Different risks apply when you are testing floating-point numbers or strings. Dividing them into equivalence classes is a little trickier.
  • We usually test variables together. Any test of a real use of the program will involve many variables. Even if you leave most of them at their default value, the program considers the values of lots of variables when you ask it to do something meaningful. We can manage the number of combinations to test using techniques like all-pairs. In all-pairs testing, the tester chooses a set of maybe 10 variables to test, then chooses a few values of each variable to test, then uses a tool like ACTS or PICT to create a relatively small set of tests (maybe as few as 30) that will combine these values of these 10 variables in an optimized way. (ACTS and PICT are free tools from the National Institute of Standards and Technology and from Microsoft, respectively.) One of the challenges of this type of testing is picking the best values for the individual variables—and that brings us back to domain testing. (A toy sketch of the all-pairs idea also appears after this list.)
  • We often test variables together that are related to each other. How do you choose boundary values when the boundary (such as largest valid value) of one variable depends on the value of the other? This particular issue appears often in university textbooks but there hasn’t been enough practical advice for working testers.
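Picking up the first bullet above, here is a hedged sketch of what “following a boundary value past the input filter” might look like. Everything in it is hypothetical: the order functions, the 99-item limit and the eight-character receipt column exist only to make the idea runnable.

```python
# Follow the largest valid value past the input filter and into later uses.
# accept_quantity() is the input filter; order_total() and
# format_receipt_line() are hypothetical downstream uses of the same value.

MAX_QUANTITY = 99            # hypothetical upper boundary enforced by the filter
UNIT_PRICE_CENTS = 149_999   # a large but valid unit price ($1,499.99)


def accept_quantity(value: int) -> bool:
    return 1 <= value <= MAX_QUANTITY


def order_total(quantity: int, unit_price_cents: int) -> int:
    return quantity * unit_price_cents


def format_receipt_line(total_cents: int) -> str:
    # Hypothetical receipt layout with an eight-character amount column.
    return f"TOTAL {total_cents / 100:>8.2f}"


def test_largest_quantity_beyond_the_filter():
    qty = MAX_QUANTITY
    assert accept_quantity(qty)              # the input filter accepts it...
    line = format_receipt_line(order_total(qty, UNIT_PRICE_CENTS))
    # ...but is the boundary value still acceptable where it is USED?
    assert len(line) == len("TOTAL ") + 8, f"amount overflows its column: {line!r}"
```

With these made-up numbers the final assertion fails: 99 × $1,499.99 needs nine characters, not eight. That is deliberate, because it is exactly the kind of “too large for a later use” failure that a test stopping at the input filter would never see.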
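And for the all-pairs bullet, here is a toy greedy generator that shows why the technique shrinks the test count so dramatically. In real work you would use ACTS or PICT as Cem describes; the four variables and their values below are invented purely for illustration.

```python
# A toy greedy all-pairs (pairwise) generator, for illustration only.
from itertools import combinations, product

variables = {
    "os":       ["Windows", "macOS", "Linux"],
    "browser":  ["Chrome", "Firefox"],
    "account":  ["guest", "registered", "admin"],
    "currency": ["USD", "EUR"],
}
names = list(variables)

# Every value pair, for every pair of variables, must appear in at least one test.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in variables[a]
    for vb in variables[b]
}


def pairs_in(test):
    """All variable/value pairs exercised by one candidate test."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}


candidates = [dict(zip(names, values)) for values in product(*variables.values())]

suite = []
while uncovered:
    # Greedily take the candidate that covers the most still-uncovered pairs.
    best = max(candidates, key=lambda t: len(pairs_in(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_in(best)

print(f"{len(candidates)} full combinations reduced to {len(suite)} pairwise tests")
for test in suite:
    print(test)
```

Here 36 full combinations collapse to roughly nine or ten pairwise tests; with ten variables the savings are far larger, and the open question the workbook tackles is which few values of each variable deserve to be in the model at all.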

The Domain Testing Workbook goes beyond the perfectly-good introductory presentations that appear in many books and courses. We want to help testers apply the technique to situations that are a little more complex but still commonplace in day-to-day testing.

Continue Reading

Best Software Testing Quotes of 2013

It’s that time of year again – recap time! This month’s Testing the Limits will be a tribute to all our interviewees from 2013. Let’s take a minute to thank them again for their time and insights and revisit some of our favorite moments.

Jane Fraser

“One of my early lessons was at Corel: I found importing of graphics slow, and I logged a bug that importing was too slow. Needless to say, it came back – will not fix. I tried sending it back, again saying it was “too slow”. Back it came again.

A day or so later, our VP was giving us a presentation on the product and where we were going. He compared us to our competition several times. So now I knew he cared how we stacked up to others. Back at my desk I fired up the competitors’ programs, pulled out a stopwatch and six different graphics in different formats. I created a table with the three programs and the results. One example – competitors: 2 sec and 0.5 sec; ours: 19 sec. The bug was addressed. I now tell this story quite a bit. Comparisons can be key, as is knowing what is important to your audience (those making the decisions).”

“I really believe if you’re not trying to improve your processes you’ll be left behind. I’m always looking for new ways to do things, listening to the experiences of others, and building a network of peers so I have people to ask for advice. As products and technology change, testing must change.”

Fiona Charles

“We’re paid to deliver the information we’ve uncovered about the quality of a system or systems. If a tester chickens out of delivering her information clearly, or fails to get the message across because she delivers it badly, then she risks blowing her credibility and rendering herself ineffective forevermore in that organization.”

Jonathan Kohl (Part I and Part 2)

“We should elevate people above processes, practices and tools, not make them subservient to them.”

“It’s important to develop an awareness for new technology information, and just try things out. Don’t be afraid to say you don’t know something, and to get in over your head and ask for help.”

“A poor technology experience is another type of poor customer service, so there is a direct line there. If you let down your customer or provide them with a poor first experience (which we are finding is increasingly on mobile devices), you can lose them forever. So not only do we need to have great people skills and a good strategy for satisfying and impressing our customers in our human interactions, but in our technology interactions as well.”

Continue Reading

Testing the Limits with Ben Kelly

Ben Kelly has literally tested around the world. His career has taken him to Australia, Japan and the UK, and he is currently a Software Engineer in Test for eBay. A regular presenter at conferences in the US and Europe, Ben also blogs at TestJutsu (when he has a spare moment).

In this month’s Testing the Limits interview, Ben discusses his testing experiences, his passion for exploratory testing, advice for new testers and his ultimate dream for the testing profession.

*****

uTest: So, how did you become a software tester and what drew you to this field?

Ben Kelly: It should have been obvious to me in my youth. I had a habit of taking stuff apart to see how it worked. That coupled with my tendency toward dark humour and pessimism should have been a clue. I wish I’d known earlier that testing was a possible vocation.

Like many others, I fell into the field accidentally. In my case, I wanted to be a programmer. I was an okay coder, but no company seems to want ‘okay’ out of university. They want propeller hats, pocket protectors, coke-bottle glasses and social ineptitude. A friend suggested testing as a way of bridging into a programming role. I gave it a shot and then discovered that I was much more successful at finding out ways that stuff does what it shouldn’t than making it do what it should. I also enjoyed it a lot more.

You’re a big proponent of exploratory testing. What draws you to that style and why do you think it’s a good approach?

BK: All testing is exploratory to some degree. Even in the case where you have heavily prescribed test steps, there is still room for interpretation. There are often multiple ways of performing a certain action. There are cases where you will notice potentially interesting things that are not completely relevant to what you’re testing. It’s up to you as a skilled tester how you act in those situations. You could choose simply to ignore it and follow your script. You could also be a robot made entirely of meat.

I’m not against test scripts or documentation. I’m against following rules over applying skill. When you’re given the freedom to do your job as a skilled knowledge worker, it’s much easier to avoid regularly abused and frequently meaningless metrics like bug counts and test case completion percentages and instead to focus on finding information that is important to the people you serve.

Since you’re a manual, exploratory tester, what are your feelings on test automation?

BK: Test automation and I are not strangers. I don’t see automation as a dichotomy of ‘automation vs. manual.’ Automation in its many forms serves to augment testing and/or programming to some degree whether it be a throwaway script, a heavy-duty test framework or something else.

Automation should solve a problem. Much like any other software development effort, you want to know what problem you’re solving and which tool is right for the job. Testing is testing. It’s the thought behind the effort that makes it good or bad. Throwing GUI automation at every problem because that’s all you know is ignorant at best. Mandating things like percentage of manual test cases to be automated is stupid. If you’re automating without understanding why, then the tail is wagging the dog. If you’re automating in the hope of doing away with sapient manual testing then you’re doing it wrong.

You’ve worked for companies literally around the world. Do you see different approaches or attitudes toward testing in different countries?

BK: Not really. I see stereotypes more in terms of industry, company size and individual company culture rather than a country-specific attitude. Behavior is driven by what is rewarded. Companies that value commoditization of testing and protecting the bottom line will get very different testers to those that value skilled knowledge work. That seems to be true no matter what country you’re in.

eBay is one of the top names in the online retail world – not to mention one of the first in the space. How does eBay approach software testing?

Continue Reading

Testing the Limits with Lisa Crispin

Lisa Crispin has been a software tester since 1995. In this month’s Testing the Limits, she’ll tell us how she fell into the profession, talk about her experience with waterfall and agile development and share some insights about what really makes testing teams work. Plus, we’ll learn about donkey trust.

Next month, Lisa will be participating in Deep Agile 2013 in Cambridge, MA. Lisa will lead sessions titled “Deliver All the Right Things (and Only Those Things)” and “Story Slicing and Collaborative Analysis with Gherkin.” Other session leaders include Jeff “Cheezy” Morgan, Ellen Gottesdiener, Matt Barcomb and Stephen Vance. Deep Agile 2013 takes place November 23-24; register online.

*****

uTest: What got you into software development and testing in the first place?

Lisa Crispin: I needed a job. During a recession in Texas many years ago, I was laid off from my government job doing research for local governments. I wanted to move to Austin, TX, and while job-hunting there, saw a notice in the University of Texas employment office: “Computer Programmer Trainee, No Experience Necessary.” I had little experience using computers (this was in an era where there was still some use of punch cards) so I fit the bill! The Data Processing department hired for domain knowledge and aptitude. I did well on the test and had an MBA so I got the job.

What drew you to Agile over other schools and methods?

LC: In the ’90s I had worked for an excellent team that did waterfall, but also followed many of the good dev practices we now associate with “agile”: CI, automated regression testing at the unit and GUI levels, whole-team style collaboration throughout the life of the project. This was a database company, and we released every 6 – 12 months; it worked great.

Then I moved to a web start-up. We diligently implemented and followed waterfall, but no matter how disciplined we were, we could not deliver any new features fast enough to keep up with the competition. It was so frustrating! Then some of my teammates moved on to another start-up that decided to use XP. They gave me Kent Beck’s book to read. I was so excited, it was all about quality! And it seemed like it might be the solution to delivering new features fast enough! I never looked back.

What do you say to the people who think Agile is just another trend?

LC: On one level, “Agile” is just another trend. Many companies hear about it as the latest thing, so the CTO commands that now they will be Agile, they send their PMs to ScrumMaster school, have two-week iterations and standups, and call themselves Agile. Of course it doesn’t work.

I like Elisabeth Hendrickson’s definition of agile. To paraphrase her, a team is agile if they deliver business value frequently, at a sustainable pace. The “sustainable pace” captures all the good practices needed in order to actually deliver value in the form of high-quality software that customers want, frequently and consistently without killing yourselves.

I wish we could stop using labels such as “agile,” “lean,” and “kanban,” and start calling it “improving how we deliver software.”

If there’s one thing you could go back and tell yourself at the beginning of your career, what would it be?

LC: Interestingly, at the start of my career, we were very much what would now be called agile. We paired with our customers since we didn’t know about writing requirements. When they were happy with what we did, we released it to production. We didn’t know about testing, or phases. But we did a great job, even on difficult projects such as one of the first online library catalogs.

But I’d go to myself in my second programming job, where I first learned about “waterfall,” and tell myself: It’s not methodologies or tools or programming languages that make projects successful. It’s getting the right people, and allowing them to do their best work.

Continue Reading

Testing the Limits: ISST Part II

In the second part of this month’s Testing the Limits, Ilari Henrik Aegerter, Iain McCowatt, Johan Jonasson and Henrik Andersson of the International Society for Software Testing discuss the group’s founding members, the motivation behind their “Common Sense Testing” approach and some great advice for new and seasoned testers.

Be sure to read Part I of this great interview!

*****

uTest: Your founding members list is quite impressive. Was it difficult getting all those amazing testers on board and in agreement about the basic philosophy of the group?

Iain: Well, as you can imagine – with a group like that – we got some bloody hard questions! What blew me away though was the response we got, the level of belief that change is needed.

Johan: What brings the founding members together is a belief in the potential of ISST. What I really enjoyed when we started reaching out to them is that so many of them immediately went into “testing mode” and started questioning our motives, mission and plan for the future. We absolutely encourage continuous questioning and transparency as part of our basic philosophy and I think that’s one of the key things that earned us this incredible initial support.

Ilari: All of the above. And it was quite impressive to see how different the questions were. We are very grateful for the trust the founding members have placed in ISST becoming a successful society.

Henke: ISST is nothing without our members. When we started reaching out to such a prominent group of testers of course we expected the Spanish Inquisition. However those questions quickly turned into optimism and creative thinking as to what shape our members would like the ISST to take. Our founding members are truly excited and energised about ISST.

uTest: The URL for the society is “Common Sense Testing.” Explain that concept and why you back it.

Ilari: I am going to steal the words of Iain here because he said it beautifully: “Common sense is unfortunately not too common.” We want to emphasize the importance of skillful thinking and we believe that Common Sense Testing conveys that.

Iain: Yeah, I’m not sure I didn’t borrow that from somewhere myself. I suspect for me, and it’s a little ironic, that it goes back to a conversation I had with Rex Black, hardly noted as an advocate for context-driven testing. He told me that, as far as he is concerned, context-driven testing is just common sense. I don’t disagree, but I have seen such a remarkable lack of common sense over the years that I feel we could do with more of it in testing.

Henke: Common sense is something that many shrug their shoulders at, just as Rex did in Iain’s story. It is not valued as a skill or something that needs attention; it is considered to be there anyway. However, common sense is just not built into your brain by magic. It is something that we all need to actively practise. We can agree in discussion that many things are common sense, but when we look at our behaviours and decisions we find that they are not congruent with this. This is a huge problem in our industry, and ISST wants to take common sense from being two shallow words into real and hardcore action.

uTest: Unfortunately, too many companies undervalue QA and testers. How should QA departments go about proving to executives that their work holds a lot of value?

Iain: A lot of testers complain that their stakeholders don’t see the value of their work. Perhaps the first question for a tester to ask is…does it have any value TO MY STAKEHOLDERS? The product of testing is information, and if the information it provides is not relevant to what stakeholders need, then it is of no value.

Ilari: Maybe they should start with not calling themselves QA. What is wrong with the title “Tester?” Instead of coming up with fancy job titles, test departments should seek to provide value with courage and candor. Don’t talk about how important you are. Executives might rather want to hear what testing does for them, what information it gives them, what risks are mitigated.

Continue Reading
