Building A Testing Team — Do’s & Don’ts

You’ve got the next big idea for a killer web, desktop or mobile app.  It’s gonna change the world.

So what’s on your to-do list?  Well, you need a slick GUI designer to make it look hot, and a top-shelf product team to get the features & UX right.  Oh, and you need some ace developers to make your app come to life.  Good, now you’re done, right?  I mean, yeah, you need to test it, but you can do that yourself… or have your developers do it… or maybe the intern… or your beta users.

Think again, says Rex Black over at eWeek.  Despite what you might think, there IS a right way (and a wrong way) to build the testing team you’ll need to launch a high-quality app:

Continue Reading

Apple’s Tablet On The Launching Pad — T Minus 4, 3, 2…

I think I read somewhere that Apple may be announcing something on Wednesday. </sarcasm>

If you’ve been near any media source in the past few weeks, you’ve probably seen the build-up to Apple’s upcoming announcement, which is widely expected to be the launch of their new tablet device.  To watch the drama unfold, check out the complete coverage.

Does anyone have predictions about size, feature set, price point, et al?  Share your thoughts.  Being a software testing shop, we’re particularly interested in what types of apps will be built for this new category-defining device.  Will there be an entirely new class of apps (and thus, more Apple-related testing)?  Will it work with iPhone apps?  Is it purely a web device?

UPDATE:  Ok, so now that we know more about the iPad (check out Mashable’s iPad coverage… or TechCrunch’s… or AlleyInsider’s), I’m curious to hear what you think — Worth the wait?  Overhyped?  Revolutionary?  Meh?  Weigh in and tell us your take.

A Dissenting Opinion On Testing’s “To Cert Or Not To Cert” Debate

Earlier this week, we published our three-part interview with Michael Bolton.  This was the latest installment in our monthly Testing The Limits series, in which we sit down with luminaries from the worlds of testing, development, crowdsourcing or startup life.  As part of this discussion, we asked Michael for his take on the issue of testing certifications (as we’ve done with Matt Heusser and James Bach in previous months).

In response to what she felt was “cert-bashing,” Charity Stoner of ProtoTest has written a post defending test certifications.  Since we always encourage civil discourse and open-minded debate — and since the purpose of the Testing The Limits series is to offer up different perspectives from around the world of software — I wanted to shine a light on this post.

What do you think about test certifications?  Do they provide testers with a toolkit that complements their experience and adds real value?  Are they a marketing mechanism that limits what it means to be a professional software tester?  Or is it somewhere in the middle?  I’d love to hear your thoughts.

Testing the Limits with Michael Bolton – Part III

In the third and final part of the Michael Bolton trilogy, we cover advice for new testers, his hypothetical banishment from Software Land, the blogs he reads and more. Did you miss our earlier interviews? Here’s Part I and Part II.

uTest: Hypothetical: You’ve been banished from testing – nay, ALL software-related activities – for the rest of your days. What will you do to earn a living?  What hobbies would you pick up to fill the intellectual void?

MB: Who knows?  For fun, I’d keep playing mandolin, probably. Teach, maybe. Write. I’ve worked in theatre stage management, been a book-keeper, tended bar, worked in a comedy club. In high school I worked in mail rooms during the summer. Whatever I’ve picked up in life, it was because something needed to be done and I was there to do it.  If it didn’t seem like much at first, I started to learn about it quickly. When you invest a little bit of effort to figure out your job, you learn how to make it faster and better and more interesting. It turns into this great feedback loop. Any job can be more fun when you set out to master it.

uTest: Tell our testing community something about you that your most avid readers don’t know.

MB: While walking through the woods on an island near Vancouver recently, I found myself being quiet and brief, which I like from time to time. Practically nobody knows that.

Lots of people probably don’t know how much I’m eager to help people out. All of my work—courses, articles, conference presentations, this interview—comes with lifetime free technical support. Have a question? Just ask. I might not answer right away—supporting the family with paying work takes precedence over supporting the community—but I’ve never knowingly turned anybody down, so if I don’t answer right away, be persistent. James Bach makes the same offer, by the way. We’ve found that it’s a great way not only to help people, but also to explore problems and come up with solutions and learn things that can help our clients.

uTest: If you were talking to a newbie tester, what advice would you give them to set their professional journey off on the right foot?  How about for a 10-year veteran tester?

Continue Reading

All Circuits Are Currently Busy — A Look Back 20 Years After AT&T Network Crash

Bug-iversary Alert! Tomorrow is the 20-year anniversary of the “crash” of the AT&T Long Distance Network. On January 15, 1990, faulty software was installed on the AT&T Electronic Switching System (Number 4 ESS): a one-line bug incapacitated the entire system, disabling switches throughout half the network.

In what is known as one of the most serious telecom bugs in history, more than 75 million calls failed to connect over nine hours, at an estimated loss of $60 million.

Dennis Burke of California Polytechnic said it best: “The Jan. 1990 incident showed how bugs in self-healing software can bring down healthy systems, and the difficulty of detecting obscure load- and time-dependent defects in software.”

Speaking of “load defects,” AT&T — after signing up to be the exclusive U.S. provider of iPhone service — has recently come under fire for the quality of its network coverage, as Businessweek‘s top headlines attest.

In light of this bug-iversary, I can’t help but wonder whether more testing should have been done before AT&T took on the massive data demands of modern 3G smartphones. What do you think?

Announcing The 2009 “uTester of the Year” Awards

Today, we announced the results of our 2009 uTester of the Year Awards. Our community is full of professional testers, which made the judging incredibly tough (I can’t believe how much the bar has been raised for testers over the course of 2009). This awards program, however, was designed to recognize those few testers whose testing skills, attention to detail and consistently excellent performance stood out.

The winners were selected by our community management team and project managers, and were based upon testers’ performance across several hundred test cycles for web, desktop and mobile applications.

Brian Rock from Austin, Texas was named the overall uTester of the Year.  Brian joined uTest early in 2009 and brings 10+ years of software engineering experience to our community. Over the course of the year, Brian earned MVT (Most Valuable Tester) awards on multiple test cycles and also wrote a popular uTest guest blog post, “Software Testers: The Eyes of the Battlefield.”  He consistently reports excellent bugs, communicates with customers extremely well, and is very engaged in uTest projects.  Brian had this to say about his experience with uTest:

“Working with uTest challenges me to learn new applications and to solve new testing problems on different products every week,” said Brian Rock. “This keeps things fresh and exciting, and opens my eyes to see systems holistically and keep my defect localization skills sharp. This is what I enjoy most about working with uTest, and I am honored to be among this elite group of testers.”

The complete list of winners is available after the jump:
Continue Reading

Testing The Limits — 2009’s Top Posts

After we re-launched our brand in May, we decided that the uTest blog needed to be more than just uTest employees talking about uTest events, uTest awards and the uTest community (see how repetitive that gets?).

Writing witty, thought-provoking content is really hard.  And we’re pretty lazy, but fortunately we know some extremely smart & funny people.  So we invented the Testing The Limits series, in which we interview leaders from the worlds of testing, software, entrepreneurship and crowdsourcing.

We’re immensely grateful to these talented, busy people, and we have much more planned for the Testing The Limits series in 2010.  But before we flip the calendar, these posts from this year are worth another look:

June: James Whittaker – Author, Professor and Testing Evangelist at Google

July: Rosie Sherry — Founder of the UK-based Software Testing Club

August: Andrew Muns — President of Software Test & Performance

September: Jack Margo — SVP of Internet Operations of Developer Shed

October: Jon Winsor — Author, Crowdsourcing Expert, and Founder of Victors & Spoils

November: Matt Heusser — Software Testing Author, Professor and Testing Manager

December: James Bach — Software Testing Author, Teacher and Speaker

We have some great guests and ideas lined up for 2010, including software execs, QA thought leaders, and famous journalists & authors.  As always, the goal of Testing The Limits will be to inform, to entertain, and above all else, to help our readers get to know these thought leaders who are worth following and listening to.

Have a suggestion for a future Testing The Limits guest?  Drop us a note or tell us in the comments section.

Happy Holidays From uTest

98% of the time, our blog is chock-full of software testing, QA, mobile apps, Agile testing or other startup-related topics.  Today, however, just a quick note of warm holiday wishes.  All of us in the uTest family wish you and yours a peaceful, joyous holiday season!


Our Guest Blogger Series: 2009 Year in Review

As a way to extract the collective wisdom of the uTest community, we decided to experiment with a Guest Blogger program beginning in April. To say that it’s been a success would be an understatement, but we’ll say it anyway (the page views don’t lie!). Having covered a wide range of topics – including mobile app testing, tester overconfidence, security testing and more – the series has become a big hit within the community — and a great way for testers to get published in front of a large audience.

Here are some of the highlights from our 2009 guest blogger program.  Stay tuned for an even bigger 2010!

Who is the User? – by Lucia Maldonado:  In what ways is software similar to architecture? And how can this help steer testers in the right direction? In this post, Lucia Maldonado takes an in-depth look at user accessibility standards, and offers a number of essential tips for testers in this field.

Security Testing Tips (from a Bug Battle Winner) – by Bernard Shai Lelchuck:  When it comes to security testing, few can match the expertise of Bernard Shai Lelchuck – one of uTest’s first (and finest) QA professionals. In this post, Bernard covers the basic methods of security testing, including tips for information gathering, logical attacks and injection attacks. Oh, and here’s Part II.

Respect the Defect: Advice That Will Change the Perception of Testing – by Joseph Ours:  Testers need to reconsider the way they report bugs – this was the position taken by Joseph Ours in his first (and hopefully not last) uTest blog post. Challenging testers to demonstrate their value by writing more clearly about the bugs they uncover (among other tactics), Joseph has sparked an interesting debate among our community. Visit the comments section to see for yourself.

Step Away from the Simulator: Putting Mobile Applications Into a Tester’s Hands – by Brad Sellick:  What makes mobile testing different from conventional software testing? For one, the simulators and emulators are far less reliable. In this post, uTester Brad Sellick – a self-made expert on mobile app testing and development – explains the dangers of relying on these tools while performing mobile app testing.

What You Need to Know About Writing Effective Test Cases – by Valerie Dale:  Despite all evidence to the contrary, test case design is often seen as work with no real value – a remedial task with no significant ROI. One would think that with the added pressures to launch a quality product on schedule, test case design and planning would be a top priority. It’s not. At best, there is minimal attention paid to the practice. At worst, it’s non-existent. In this post, Valerie Dale makes a great defense of this beleaguered practice.

Your Overconfidence is Your Weakness: Lessons from a “Crash Specialist” – by Pradeep Soundararajan:  In our most popular guest post to date, noted blogger Pradeep Soundararajan explains why finding lots and lots of bugs isn’t necessarily a good thing. Reliving his days as a “crash specialist,” Pradeep examines how a tester’s ego can get in the way of their objective. His advice is as funny as it is useful. Simply put: a must read.

Software Testers: The “Eyes of the Battlefield” – by Brian Rock:  Our testers come from all sorts of backgrounds, including the armed forces. Brian Rock – a former Sgt. on a Combat Arms Forward Recon Team in the U.S. Army – is a great example. In this post, Brian analogizes testers to cavalry scouts. That is, they are the “eyes of the battlefield.”  Advocating exploratory software testing (especially for those in the uTest community), this post will make you rethink the role of testers.

You’re a Professional Mobile Tester (you just don’t know it yet) – by Bernard Shai Lelchuck:  As the title would imply, this post makes the case that anyone with a mobile phone and an inquisitive mind can become a successful mobile tester. It worked for Bernard Shai Lelchuck! Here Bernard explains the rise in mobile applications, how he himself broke into the field and some basic tips for those who would like to get started in this growing (and highly lucrative) field.

Question the Connection: Tips for Diagnosing User Login Failures – by Sherry Chupka:  Forget the sweeping generalizations about software testing “best practices.” This post by uTester Sherry Chupka gets right to the point on a very specific issue: user login failures. If you’ve ever been pitted against this problem in the testing lab, Sherry feels your pain, and has some invaluable advice for you as you move forward.

It’s been a great year, with some terrific insights into the world of testing, but our Guest Blogger program is just getting started. So if you have an opinion to express, a tip to share or a bone to pick, we’re always eager to share the thoughts of our tester community. Email us your ideas at

Exploratory Software Testing: A Follow-Up Q&A with James Whittaker

Last week, uTest hosted a webinar on exploratory software testing with James Whittaker.  We received a fantastic response from the 250+ attendees, and we couldn’t get to all the questions before our time was up.  Luckily, James was kind enough to sift through a stack of the remaining questions and provide answers to several that jumped out at him.

Also, remember that we’re handpicking five webinar attendees to receive a free copy of his new book on exploratory testing, signed by James.

Q: When making a tour specific to your own application domain, doesn’t that become what is usually called a test scenario? How do you see tours being different from scenarios?

A: Great question, and I cover this in my book. Chapter 4 deals with “Tours” and Chapter 5 deals with “Scenarios.” In a nutshell, I see scenarios as more prescriptive than tours. Tours are meant as general guidance; scenarios, at least in my mind, are more specific. A tour specifies goals and an approach to coming up with test cases; a scenario actually provides an outline of the test cases. Tours leave much more of the actual test case to be constructed as you test. A scenario, in other words, has less variation.

But don’t get caught up in semantics. It’s a continuum of detail really. At one end of the spectrum are fully detailed test cases, at the other is ad hoc testing. Scripts, scenarios, tours, patterns … they all fall somewhere in between.

Q: What is the difference between exploratory analysis and exploratory execution?

A: I don’t like introducing new terms – testing has too many of them already – so I will talk about the concepts here rather than reinforce these exact names. The thought processes that go into exploratory testing are generally considered something that you do while you are executing test cases, but this only works with manual testing. In Google’s case, we do substantial test automation, and once the test code starts running, your chance of introducing exploration is pretty much gone. The automation will execute your test case with brute force and little flexibility.

With automation, you have to do your exploratory thinking up front, and this is where we came up with the idea of exploratory analysis. Simply put, the idea is to run the Tours in your head and let your thinking inspire your automation. The best example we have within Google is the one Rajat Dewan presented at STAR East and explained on the Google Testing Blog.

Q: Do you have some tips on how to keep testing fresh and new when there is release after release? (To avoid people getting bored and always testing the same things and missing new issues.)

A: In fact, I do. I think this very problem is what I was trying to tackle with the tours. But your question gives me the opportunity to clarify this intent. Static test cases might be fun to come up with and fun the first couple of times you run them, but running them build after build and release after release not only gets dull, it also introduces the pesticide paradox. The reality of the situation is that test cases, as specific physical entities, are too low-level. They specify a precise sequence of user actions. Tours are a higher-level concept: they specify purpose and intent, and remain flexible on specific input sequences. In this manner, a single tour represents any number of test cases.

Now the secret is finding the balance. Some test cases are really important as they once found a bug or they represent an important user-initiated scenario. We want to run these no matter how bored we get. But beyond that, tours allow us more flexibility to increase coverage around the specific test cases and supply the variation that will keep our heads from exploding in boredom.

Q: Can you offer any advice for a developer to be a better partner in the testing process?

A: Indeed. But I want to point out that your question is asking for advice to devs, not about what test can do to help this partnership (which is the harder answer, so I thank you for that).

I manage a dozen or so projects from cloud to client to back end data center stuff. Some of these have great developer participation and some less so. The devs who are great partners are very involved in testing. They review and provide feedback on our test plans and designs. They become concerned fairly often about whether we are doing a good enough job in test (I mistrust anyone who trusts me and my team too much). They try to steer testers to areas of the product not covered by dev-penned unit tests. They fret more over us finding very few bugs than when we find a lot of bugs (think about that one a moment). They show great interest in what our automation is doing and like to suggest new manual test cases. When they find a bug, they take the time to show it to us instead of just checking in a fix. They invite us to give presentations during all hands and engineering reviews and they take the time to share credit with us when the team succeeds.

I like this question. Maybe I’ll keep thinking about it and make my answer into a paper.

Q: Have you found that your tours work well or help in cases where requirements are sporadic, vaguely defined or non-existent?

A: Having never worked on any other type of project, I can say with some confidence that, yes, they work quite well.

Sincere thanks to James for a great presentation, and to all the attendees for some excellent questions.  We always enjoy seeing discussions about testing elevated to a strategic level, and so much passion and interest in the subject.  Rest assured, we’ll be scheduling more webinars in the coming year.  In the meantime, you can find a library of free resources about software testing, including eBooks, whitepapers and recorded webinars.  Have other questions for James?  Have suggested topics for future uTest webinars?  Drop us a comment and let your voice be heard!