Software Engineers: “Forgive Me Testers, For I Have Sinned”

A few days back, GigaOM posted a terrific article on the 7 Sins of Software Development. When you read it, which I strongly suggest you do, I think you’ll see that testers play a huge role in absolving software engineers of their various “deadly” sins.

If you’re too apathetic to read the article (sloth is a sin, FYI), then check out the excerpts below:

Sloth
Sloth is apathy, not laziness. An apathetic programmer is arguably the most detrimental, because he has zero interest in quality. On the other hand, a lazy programmer can be a good programmer, because laziness can drive long-term efficiencies. For example, if I’m too lazy to type in my password everywhere, I might create a single sign-on feature. Or, if I’m too lazy to manually deploy software, I will instead write an automatic deployment tool. Laziness and scalability go hand in hand.
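To make the “lazy in a good way” point concrete, here’s a minimal sketch of the kind of script such a programmer might write instead of deploying by hand. It’s our illustration, not GigaOM’s, and every host name and path in it is a hypothetical placeholder:

```python
#!/usr/bin/env python3
"""Tiny 'lazy engineer' deploy helper -- host names and paths are made up."""
import subprocess

APP_SERVERS = ["app1.example.com", "app2.example.com"]  # hypothetical hosts
ARTIFACT = "dist/myapp.tar.gz"                          # hypothetical build output


def build() -> None:
    # Build the release package once, locally.
    subprocess.run(["make", "package"], check=True)


def deploy(host: str) -> None:
    # Copy the artifact to the host and restart the service there.
    subprocess.run(["scp", ARTIFACT, f"{host}:/opt/myapp/"], check=True)
    subprocess.run(["ssh", host, "systemctl restart myapp"], check=True)


if __name__ == "__main__":
    build()
    for host in APP_SERVERS:
        deploy(host)
        print(f"deployed to {host}")
```

Once something like this exists, deploying to two servers costs the same keystrokes as deploying to twenty, which is exactly the “laziness and scalability go hand in hand” argument.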

Wrath
Although many software engineers seem peaceful, underneath the surface often lurks a passive-aggressive personality. Take a look at source code comments to see examples of this hidden hostility. Usually profanity in source code is proportional to technical debt. However, it is vital that your engineers are not milquetoasts. Beware of the programmer who does not ask questions or who will use any text editor willingly. Good programmers have strong opinions, but they also appreciate lively debates.
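If you want a rough (and tongue-in-cheek) way to eyeball that “profanity is proportional to technical debt” claim in your own codebase, a sketch like the one below counts angry words found in comments. The word list, comment detection and file extensions are obviously just placeholders:

```python
import re
from pathlib import Path

# Deliberately tame, incomplete word list -- extend (or censor) to taste.
ANGRY = re.compile(r"\b(hack|wtf|ugh|stupid|broken)\b", re.IGNORECASE)
LINE_COMMENT = re.compile(r"#.*|//.*")  # naive: Python- and C-style line comments only


def anger_index(root: str = ".") -> dict:
    """Count 'angry' words appearing in comments, per source file under root."""
    counts = {}
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".c", ".cpp", ".java", ".js", ".go"}:
            continue
        text = path.read_text(errors="ignore")
        hits = sum(len(ANGRY.findall(m.group())) for m in LINE_COMMENT.finditer(text))
        if hits:
            counts[str(path)] = hits
    return counts


if __name__ == "__main__":
    for name, hits in sorted(anger_index().items(), key=lambda kv: -kv[1]):
        print(f"{hits:4d}  {name}")
```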

Envy
Envy can be very dangerous in software development. Envy for other products often leads to feature creep. If someone mentions feature parity, you should ask, “But do we need it?” The ultimate killer feature is simplicity, but simple to use is hard to design. Also, it is easy to lose focus when you are constantly watching what other companies are doing. Imagine building towers out of Legos. Would you rather build one tower at a time or many towers in parallel? The parallel approach only works if the towers are identical. Otherwise, you spend too much time context switching. Agility is not the same as half-baked. And doing one thing well is still underappreciated.

Continue Reading


8 Tips For Becoming a Dedicated Tester

Our old friend James Bach recently fielded a question on his blog from a new tester seeking advice on what her daily routine should include so that she can grow in her new field. James seems impressed by the new tester’s discipline (she did willingly ask for daily testing “homework,” after all) and dedication to the craft. He outlined five tasks he believes every tester should practice on a daily basis. Here’s a quick summary of his tips:

Write every day
Whenever I find myself with a few moments, I make notes of my thoughts about testing and technical life.

Watch yourself think every day
While you are working, notice how you think. Notice where your ideas come from. Try to trace your thoughts.

Question something about how you work every day
Testers question things, of course. That’s what testing is. But too few testers question how they work. Too few testers question why testing is the way it is.

Explain testing every day
Even if no one makes you explain your methodology, you can explain it to yourself.

I like these tips because they aren’t the typical recommendations you run across, like “test whenever you can,” “read an array of testing books” and “be open-minded when it comes to techniques.” Those are great tips too, just nothing special. Of course, James didn’t just give one-sentence explanations for each of his pointers, so take a few minutes and read his complete blog post to get the full impact of these smart tips.

And as a little extra, here are a couple more tips James’ readers left in the comments section.

Continue Reading


5 Myths of Software Testing

As I scan the software testing stories of the day, I’m amazed at the frequency of certain misconceptions. While there are too many to list, I wanted to share five of the most common testing myths (in my brief experience). The first three I find to be prevalent in mainstream news articles, while the other two are more common within the tech industry in general.

Take a look and see if you agree with me.

Myth 1. Testing is boring: It’s been said that “Testing is like sex. If it’s not fun, then you’re doing it wrong.” The myth of testing as a monotonous, boring activity is seen frequently in mainstream media articles, which regard testers as the assembly line workers of the software business. In reality, testing presents new and exciting challenges every day. Here’s a nice quote from Michael Bolton that pretty much sums it up:

“Testing is something that we do with the motivation of finding new information.  Testing is a process of exploration, discovery, investigation, and learning.  When we configure, operate, and observe a product with the intention of evaluating it, or with the intention of recognizing a problem that we hadn’t anticipated, we’re testing.  We’re testing when we’re trying to find out about the extents and limitations of the product and its design, and when we’re largely driven by questions that haven’t been answered or even asked before.”

Myth 2. Testing is easy: It’s often assumed testing cannot be that difficult, since everyday users find bugs all the time. In truth, testing is a very complex craft that’s not suited for your average Joe. Here’s Google’s Patrick Copeland on the qualities of a great tester:

Continue Reading


Testing Roundtable: What’s the Biggest Weakness in the Way Companies Test?

This month, in place of our standard Testing the Limits interview, we decided to hit up a few of our past guests for a “testing roundtable” discussion. The topic: What is the biggest weakness in the way companies test software? Below are some extremely insightful answers from testing experts Michael Bolton, James Bach, Noah Sussman, Dan Bartow, Rex Black, Jim Sivak and Cem Kaner. Enjoy!

*********************

Michael Bolton, Principal at DevelopSense:

So far as I can tell, most companies treat software development as implementation of highly idealized business processes, and they treat testing as an exercise in showing that the software models those processes in a way that’s technically correct. At the same time, companies treat the people who use the software as an abstraction. The consequence is that we’re creating software that delays and frustrates the people who use it or are affected by it. When testing is focused almost entirely on checking the functions in the software, we miss enormous opportunities to learn about the real problems that people encounter as they go about their business. Why are testers so often isolated from actual end-users?

Today I was traveling through the airport. When I checked in using the online service, I had accidentally noted that I’d be checking two bags, but I only brought one with me. In addition, my flight was cancelled, and I had to be put on a later flight. The customer service representative could get me onto that flight, but she had serious trouble in printing a boarding pass associated with only one bag; apparently there was a warning message that couldn’t be dismissed, such that her choices were to accept either three bags or none at all. It took fifteen minutes and two other representatives to figure out how to work around the problem. What’s worse is that the woman who was trying to help me apologized for not being able to figure it out, as if it were her responsibility. Software development organizations have managed to convince our customers that they’re responsible for bugs and unforgiving and unhelpful designs.

The success of a software product is only partly based on how it handles the happy path. That’s relatively easy to develop, and it’s relatively easy to check. Real testing, to me, should be based on investigating how the software allows people to deal with what we call “exceptions” or “corner cases”. That’s what we call them, but if we bothered to look, we’d find out that they were a lot more common than we realize; routine, even. Part of my vision of testing is to include a new discipline in which we do significant field research and participant observation. Instead of occasionally inviting customers to the lab (never mind sitting in the lab all by ourselves), we testers—and our organizations—could learn a lot through direct interaction with people who use the software every day; by close collaboration with technical support; and by testing rich and complex scenarios that are a lot closer to real life than simplified, idealized use cases.
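As a small illustration of the gap Michael describes (ours, not his), here is a sketch contrasting a happy-path check with the kind of “corner case” his airport story shows to be routine. The Booking object and its rules are invented for the example:

```python
# A toy check-in model: Booking and its rules exist only for this illustration.
class Booking:
    def __init__(self, bags_declared: int):
        self.bags_declared = bags_declared
        self.bags_checked = 0

    def check_bags(self, n: int) -> None:
        if n < 0:
            raise ValueError("bag count cannot be negative")
        self.bags_checked = n

    def boarding_pass(self) -> str:
        # A forgiving design issues the pass even when fewer bags show up
        # than were declared online.
        return f"PASS/bags={self.bags_checked}"


def test_happy_path():
    booking = Booking(bags_declared=2)
    booking.check_bags(2)
    assert booking.boarding_pass() == "PASS/bags=2"


def test_fewer_bags_than_declared():
    # The "exception" that turns out to be routine at a real airport.
    booking = Booking(bags_declared=2)
    booking.check_bags(1)
    assert booking.boarding_pass() == "PASS/bags=1"


if __name__ == "__main__":
    test_happy_path()
    test_fewer_bags_than_declared()
    print("both scenarios pass")
```

The first test is the idealized business process; the second is the one the customer service representative actually needed to work that day.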

*********************

James Bach, Author and Consultant, Satisfice:

There is a cluster of issues that each might qualify as the biggest weakness. I’ll pick one of those issues: chronic lack of skill, coupled with the chronic lack of any system for acquiring skill.

Pretty good testing is easy to do (that’s partly why some people like to say “testing is dead”: they think testing isn’t needed as a special focus, because they note that anyone can find at least some bugs some of the time).

Excellent testing is quite *hard* to do.

Yet as I travel all over the world, teaching testing and consulting in testing organizations, I see the same pattern almost *everywhere*: testing groups who have but a vague, wispy idea what they are trying to do; experienced testers who barely read about and don’t systematically practice their craft beyond the minimum needed to keep their employers from firing them; testers whose practice is dominated by irrational and ignorant demands of their management, because those testers have done nothing to develop their own credibility; programmers who think their automated checks will save them from disaster in the field.

How does one learn to test? You can’t get an undergraduate degree in testing. I know of two people who have a PhD in testing; one of them I admire (Meeta Prakash), while the other is, in my view, an active danger to himself and the craft. I personally know, by name, about 150 testers who are systematically and diligently improving their skills. There are probably another several hundred I’ve met over the years and lost touch with. About three thousand people regularly read my blog, so maybe there are a lot of lurkers. A relative handful of the people I know are part of a program of study/mentoring that is sanctioned by their employers. I know of two large companies that are attempting to systematically implement the Rapid Testing methodology, which is organized around skill development, rather than memorizing vocabulary words and templates. Most testers are doing it independently, however, or even in defiance of their employers.

Yes, there are TMap, TPI, ISTQB, ISEB, and many proprietary testing methodologies out there. I see them as crystallized blobs of uncritical folklore; confused thinking about testing frozen in place like fossilized tree sap. These models and procedures have been created by consultants and consulting companies to justify themselves. They neither promote nor require skill. They promote what I call “ceremonial software testing” rather than systematic critical thinking about complex technology.

Just about the best thing a tester can do to begin to develop testing skill in a big way is not to read or study any test methodology. Ignore vocabulary words. Toss aside templates. No, what that tester should do is read Introduction to General Systems Thinking, by Gerald M. Weinberg. Read it all the way through. Read it, young tester, and feel your mind get blown. Read it, and meditate on its messages, and do the exercises it recommends, and you will find yourself on a new path to testing excellence.

*********************

Noah Sussman, Technical Lead, Etsy:

A surprising number of organizations seem to dramatically underestimate the costs of software testing.

Testability is a feature and tests are a second feature. Having tests depends on the testability of an application. Thus, “testing” entails the implementation and maintenance of two separate but dependent application features. It makes sense then that testing should be difficult and expensive. Yet many enterprise testing efforts do not seem to take into account the fact that testing an application incurs the cost of adding two new, non-trivial features to that application.
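To make “testability is a feature” concrete (our example, not Noah’s), compare a function that hard-wires its dependency on the system clock with the same logic written so a test can inject the clock. The second version is the extra feature you have to build and maintain:

```python
from datetime import datetime, timezone
from typing import Callable

# Hard to test: the dependency on the current time is buried inside the function.
def is_expired_hardwired(deadline: datetime) -> bool:
    return datetime.now(timezone.utc) > deadline

# Testability built in: the clock is injected, so a test can control it.
def is_expired(deadline: datetime,
               now: Callable[[], datetime] = lambda: datetime.now(timezone.utc)) -> bool:
    return now() > deadline

def test_is_expired():
    fixed = lambda: datetime(2012, 1, 1, tzinfo=timezone.utc)
    assert is_expired(datetime(2011, 12, 31, tzinfo=timezone.utc), now=fixed)
    assert not is_expired(datetime(2012, 1, 2, tzinfo=timezone.utc), now=fixed)

if __name__ == "__main__":
    test_is_expired()
    print("clock-injected version is checkable without waiting for real time to pass")
```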

There also seems to be a widespread misconception that testing somehow makes application development easier. In fact the opposite is true.

If I may mangle Kernighan: testing is much more difficult than writing the code in the first place. To implement testability and then write tests, one needs first to understand the architecture of the application under test. But testing also requires doing hard things — like input partitioning and path reduction — that are beyond the scope of the application. The reality is that to get good tests, you’re going to have to ask some of your best people to work on the problem (instead of having them work on user-facing application features). Yet many organizations seem not yet to have recognized this.
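“Input partitioning” is the kind of hard thing he means: rather than testing every possible input, you pick representative values from each equivalence class, including the boundaries. A toy sketch, against an invented shipping_cost function:

```python
def shipping_cost(weight_kg: float) -> float:
    """Hypothetical function under test: flat price bands by weight."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 1:
        return 5.0
    if weight_kg <= 10:
        return 12.0
    return 30.0

# One or two representatives per partition, including the boundaries,
# instead of an unbounded sweep over every possible weight.
PARTITIONS = {
    "invalid (<= 0)":  [-1.0, 0.0],
    "light (0, 1]":    [0.5, 1.0],
    "medium (1, 10]":  [1.01, 10.0],
    "heavy (> 10)":    [10.01, 250.0],
}

def test_partitions():
    for weight in PARTITIONS["invalid (<= 0)"]:
        try:
            shipping_cost(weight)
            raise AssertionError(f"expected ValueError for {weight}")
        except ValueError:
            pass
    assert all(shipping_cost(w) == 5.0 for w in PARTITIONS["light (0, 1]"])
    assert all(shipping_cost(w) == 12.0 for w in PARTITIONS["medium (1, 10]"])
    assert all(shipping_cost(w) == 30.0 for w in PARTITIONS["heavy (> 10)"])

if __name__ == "__main__":
    test_partitions()
    print("all partitions behave as expected")
```

Choosing the partitions, and knowing when the model behind them is wrong, is exactly the work that calls for some of your best people.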

*********************

Continue Reading


Testing the Limits With Anne-Marie Charrett – Part I

To kick off another amazing year of Testing the Limits, we reached out to Anne-Marie Charrett, an independent tester who has worked for the likes of Mercury Interactive, IBM (twice) and Nortel – just to name a few. She also arranges for speakers to visit Ireland as part of Softtest Ireland, blogs about her testing experience and offers coaching at mavericktester.com.

In part I of this month’s interview, we learn what motivates Anne-Marie to coach via Skype, what’s caught her interest lately, how her book with James Bach is coming along, and what the biggest misconception about testing is. Come back tomorrow for part II.

uTest: In terms of writing, speaking and researching, you are one of the most active testers in the business. So we’ll start by asking you this: What hot topics within testing have captured your interest recently?

AMC: 2012 has kicked off with a flurry of activity. Key topics appear to be how we learn, Rapid Test Management and, more recently, Exploratory Test Documentation, which James Bach has been looking at.

It goes like this: typically we write tests and charters as artifacts for other people, as evidence of work performed. But writing is a lot more powerful than that; it has the ability to assist in design (think brainstorming in mind maps). Exploratory Test Documentation is about changing the purpose of writing from an end product to a by-product.

I also like the way new conferences and peer workshops are happening at a grass-roots level, for example Lets Test in Stockholm. These are not necessarily big conferences, but ones that offer value to testers and that encourage participation. I hope that this will be the conference circuit of the future!

uTest: You’ve made quite a name for yourself as a testing coach; offering advice to testers free of charge via Skype. In your experience, what areas require the most coaching on your part? In other words, what does a typical tester coaching session cover?

AMC: Often testers come looking for coaching in a particular skill (e.g. Test Automation), but many fail to understand basic testing concepts such as “What is testing?” and “How do you determine bugs?”

Understanding testing is key to improving your testing skill.  After all, if you don’t understand something, how can you improve it?

Software delivery typically doesn’t allow for this type of introspection. Our jobs demand we focus on delivery, often to the detriment of how well we are doing our testing.

Coaching is the breathing space that all testers need to learn and grow.

In coaching I encourage testers to work through tasks to acquire skill. I’m there to guide and help them, but they need to work out the answers. That way, their learning experience is deeper and more meaningful and empowering.

Continue Reading
