A New Year’s Resolution From @uTest

As 2011 nears, it occurs to us that it’s the season for resolutions. So what should our new year’s resolution be?  Lose 320,000 lbs (across our 32,000-person community, of course)?  Give up caffeine (highly unlikely)?  Strive for work-life balance (huh!?!)?

No, those don’t seem quite right for us.  So how about this — in 2011, we resolve to eradicate crappy apps.  Apps that are buggy.  Apps that crash or hang up for no reason.  Apps that aren’t intuitively usable to netizens around the world.  Apps with privacy or security snafus.

We’ve come a long way in the past year.  And in 2011, we’re going to be even better.  A better app testing partner to our hundreds of recurring software, retail and media customers.  A better source of revenue to our thousands of testers.  And a better place to work for the ~50 people who eat, breathe and sleep uTest.

What’s your new year’s resolution?  Drop us a comment and inspire your fellow readers of the software testing blog.

uTest Crushes Q3 with 300% Revenue Growth Year-over-Year

[WARNING: Slight self promotion today, but we couldn't help but share the exciting news with you!]

Today, we are thrilled to announce a 300 percent increase in year-over-year revenues in the recently ended Q3. During this time, uTest closed a $13 million C Round of financing (led by Scale Venture Partners) – one of the largest investments ever made in a crowdsourcing company. We also launched cool new features and functionality, such as social sign-in, dynamic reporting and integrations with more bug tracking systems.

In Q3 2010, uTest also:

  • Signed 120+ new customers, including innovative category leaders The Associated Press, The BBC, The Container Store, Urbanspoon and Box.net
  • Acquired 2,500 new software testers, growing the community to 30,000+ professional testers from 168 countries
  • Added Scale Venture Partners Managing Director Sharon Wienbar to the Board Of Directors
  • Conducted the “Clash of the Career Sites” – more than 500 testers from 22 countries discovered nearly 700 bugs in the web and mobile apps of Monster, CareerBuilder, SimplyHired and Indeed; see articles in TheNextWeb and TechTarget for details

And there’s so much more to come! I just want to take a quick moment to thank our fantastic global community of testers who exceed expectations and go above and beyond to meet the testing needs of hundreds of companies each day. Our third quarter results are a testament to the tester community’s commitment to excellence.

Comparing The Various Crowdsourced Testing Companies

As you know by now, we’re shy by nature… not the type to engage in blatant self-promotion. But when others say nice things, well, that’s fair game. Anyway, here’s a recent blog post from a tester comparing all the various communities that are available to testers, including oDesk, Elance, Guru and others.

It’s always gratifying to hear that what we’re doing is so well-received and that we’re accomplishing our mission of creating a business focused on letting companies access professional testers around the world. And here’s what this tester had to say about uTest.

This is my favorite! uTest is only for testers. The projects posted are only for testing. And uTest works in a complete different way than the rest of the sites. There are no biddings, no resumes to be uploaded and no different categories. You have to create a profile with your available machines details (OS, browser and anti-virus) and cell phone model details. The projects are posted on the site and you can see them if the requirement suits your profile. Here profile is your machine details and your ratings. This follows pay per bug model. Pay differs with the type and severity of bug you log. For example, a “High” priority technical bug may be paid $20 and a GUI bug is paid $5. There are also various competitions called Bug Battle which are held quarterly and the winner is selected as highest number of bugs logged or quality bugs logged.

This is a great site for all those who are looking for testing projects. But, one thing we have to keep in mind here is time and good quality testing. The test cycles get locked as soon as their budget or time expires. So, as soon as the project is active (an auto-generated mail comes to your inbox when a project is active for you), we have to start testing. There are various testing projects that are posted here. For example, web-based applications, security applications, desktop applications and many more. There are also blogs and forums for uTest where you can interact with fellow uTesters and share knowledge.

I will rate uTest 5*s again.

We welcome objective comparisons of the various testing-related communities out there… mostly because we’re so proud of our group of 30,000+ testers.
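The pay-per-bug model the reviewer describes is easy to picture as a simple payout table. The sketch below is purely illustrative — the categories and dollar amounts come from the quoted example, not from actual uTest rate cards:

```python
# Illustrative sketch of a pay-per-bug payout model like the one the
# reviewer describes. The bug categories and dollar amounts below are
# taken from the quoted example, not from real uTest pricing.

PAYOUT = {
    ("technical", "high"): 20,  # high-severity technical bug
    ("gui", "low"): 5,          # cosmetic/GUI bug
}

def cycle_earnings(bugs):
    """Total a tester's earnings for a list of (type, severity) bugs."""
    return sum(PAYOUT.get(bug, 0) for bug in bugs)

# One high-severity technical bug plus two GUI bugs:
logged = [("technical", "high"), ("gui", "low"), ("gui", "low")]
print(cycle_earnings(logged))  # 30
```

The key point the reviewer makes — that pay differs with the type and severity of the bug — is just a lookup keyed on those two attributes.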

uTest Closes $13MM Series C Investment

When you’re building a startup, there are good news days and bad news days.  Sometimes the good news comes in the form of a killer ad campaign or winning a new customer or a glowing article in the New York Times.  And if you’re really focused and fortunate, sometimes the good news comes in the form of a $13 million influx of cash from one of the hottest VCs in the Valley, which dramatically increases the valuation of your company.

In the past two years, we’ve had more than our share of good news.  And today, we’re announcing that we’ve closed a $13MM C round.  This round was led by Scale Ventures, and all our earlier investors participated fully — including Longworth Partners, Egan-Managed Capital and Mesco Ltd.

This is one of the largest-ever investments made in a crowdsourcing company, which re-affirms what our customers already know: uTest’s in-the-wild testing helps them launch apps that their users love.  In all, uTest has now raised $20MM across three rounds since late 2007.

Use of Money: So what are we going to do with this pile of money that our wise and wonderful investors have entrusted us with?  Well, we initially considered a number of options, including:

  • Giving free iPads to our top 26,000 testers… forget it guys, it ain’t gonna happen!
  • Purchasing four Super Bowl ads @ $3MM per… nah, not our style
  • Running one Bug Battle per week for the next 62 1/2 years… interesting, but doesn’t seem quite right
  • Splurging for fancier office signage than what we have today… it’d make the Boston winter more pleasant for our employees, but meh
  • Two words: uTest blimp… nope, we’re scared of heights.  And, oh, the humanity

Actual Use of Money: In the past 18 months, uTest has seen astounding growth and adoption by customers ranging from startups to enterprises to universities.  We’ve seen massive growth in mobile app testing, as well as in the social, gaming and retail industries.  So while the aforementioned investments sound fun, what we’re really planning to do is dramatically increase our investments in:

  • Expanding our newly launched usability testing & load testing services
  • Moving into new service categories that help companies launch great apps
  • Turning our testing platform & APIs into the industry standard for managing internal & external testing teams
  • Engaging our community of 30,000 professional testers
  • Growing mind share & market share of our in-the-wild testing services

A Word of Thanks: So much has changed since our August 2008 launch. We’ve expanded into mobile app testing (now our fastest growing category); we’ve added load testing and usability testing; we’ve added 20,000+ new testers; we’ve acquired hundreds of customers and run thousands of test cycles.  Well, none of that would have been possible without a few key groups:

  • Our employees who are  fanatical in their belief in our vision and relentless in their execution
  • Our investors who offer sage advice, steady guidance, financial resources and the freedom to learn and grow
  • Our customers who, like us, believe that in-the-wild testing complements in-the-lab testing to create apps that users love
  • Our testers who are passionate about testing, about uTest, and about helping to improve app quality

A Bonus Gift for Entrepreneurs: This is a tough market for entrepreneurs to raise money and build a business. Obviously, this has been an area of focus for us over the past few months, and we were fortunate to get so much interest from the VC community that we had multiple options to choose from.  So we thought it would be cool and useful to share all that we have learned.

Thus, we’ll be writing an in-depth post on the process of raising VC funding in the current market.  And we’re going to answer any questions that come from entrepreneurs, angels, journalists or others.

So ask away… no holds barred.  What do you want to know about the process of finding, pitching and evaluating VCs?  Fire your questions our way by commenting on this post, emailing us, or dropping us a tweet @uTest.  We’ll gather the questions and hook our top execs up to a lie detector machine and force them to answer!

Happy Debugging Day!

Today Computerworld (CW) has officially declared it national Debugging Day. It’s not usually formally celebrated (although it should be!), but the debugging tradition has been honored for more than 50 years now. From the very first bug to space bugs to horrible PR bugs to end-of-the-world bugs, today is the day to examine the most infamous bugs in history.

“It all began with a log entry from 1947 by Harvard University’s Mark II technical team. The now-classic entry features a moth taped to the page, time-stamped 15:45, with the caption ‘Relay #70 Panel F (moth) in relay’ and the proud boast, ‘First actual case of bug being found’ added by Grace Hopper.”

Since the tale of the moth, bugs have ranged from the benign to the havoc-wreaking. Many of the classic bugs of the past have been detailed in our TWIT (This Week In Testing) posts, but here are the top bugs that truly pay tribute to this day:

While we remember these bugs, today is also a day to celebrate and thank software testers all around the world who discover and prevent these disasters (and the smaller but crucial defects) every day. Thanks!

Survey Says…Software Testers ROCK

I recently came across this article, Personality Traits in Software Engineering, which describes a research survey assessing the major personality traits of software testers and developers. Turns out — and I’m not at all surprised having met so many testers in our community — software testers rock! Here’s how the scores break down:

Tester Scores
Neuroticism: Low
Extraversion: Medium
Conscientiousness: Medium
Openness To Experience: High
Cognitive Capability: High
Agreeableness: High

According to Anne-Marie Charrett in her blog, Maverick Tester, “On average we [testers] are an agreeable bunch of people, open to experience (see below) with a high cognitive capability. A hearty clap on the back fellow testers, we all knew we were pretty special.”

I couldn’t agree more! So, yes, this is simply a feel-good blog for all those testers out there with a case of the Mondays. Give yourselves a hand. And Happy Monday!

Testing Lessons From a Glass Factory

A number of years ago, I took a tour of a plate glass factory. Plate glass manufacturing is pretty simple: dirt pours in one end of a factory where it’s melted in a huge furnace. The melted dirt is then poured out as a thin sheet which then cools into glass as it rolls along a mile-long conveyor belt. The process is continuous – dirt constantly pours in and glass constantly flows out in a never-ending ribbon. At the very end of the factory, away from the furnace, a lonely robot slices the ribbon into panes of glass for things like windows and doors.

Periodically, a technician will take one of those glass panes back to a lab where it is broken up, melted, dissolved with chemicals, and analyzed in fine detail under a microscope. That technician is a tester – one who is testing the production of the glass to make sure it matches quality requirements. His job is very different from that of a software tester, but surprisingly there are many things a software tester can learn from him.

That may sound bizarre because software isn’t manufactured. There is no real “production” in software – every copy of an application should be exactly the same. But production testing is about more than manufacturing. It’s about managing variability – and understanding variability should be incredibly important to software testers.

Continue Reading

Testing the Limits With Cem Kaner – Part I

After almost a year of being told that we “have to interview Kaner” by previous Testing the Limits guests and readers, we exercised our listening skills and sought him out. With us this month to share his unique brand of wit and wisdom is Dr. Cem Kaner – author, lawyer, speaker, professor and one of the most respected minds in the testing world.

In part I of our interview, we ask Cem to share his thoughts on the multi-disciplinary nature of software testing. His response includes thoughts on experimental psychology; law; testing metrics; arrogance in the field of testing and more. Check back for part II and part III in the next two days.


uTest: In your online bio, you say the theme of your career has been to “enhance the overall safety and satisfaction of software”. To do so, you’ve studied (and worked in) areas like psychology, law, programming, testing, technical writing and sales. Explain how working in other disciplines has helped you better understand software. And on that note, what can testers learn from lawyers, writers and salespeople?

Kaner: Let me start this by saying that almost all of the best people I know in testing have significant experience in other fields. It’s common for people to move from testing to programming or writing or marketing and then back, bringing what they’ve learned with them, to test with a richer perspective and with a much more productive vision of where testing can fit within development/marketing/support cycles.

We write software to solve problems that people need solved, to do things that people want done, or to entertain people. To understand a piece of software, I need to understand why it was written, who it was written for, why they should want to use it, and what alternatives might serve them as well or better.

So I don’t see the “other disciplines” as “other.” They all contribute to this understanding in important ways.

You asked specifically about how multiple approaches fit into my own work and understanding. That’s a more personal story…

I came into software development with a doctorate in experimental psychology. I did a lot of programming (and writing about code) as a student and was deeply interested in what made products learnable and usable. A specific interest was what made a person more or less likely to make a user error. I wrote several data entry programs for large sets of scientific data. It was remarkable how much my design choices influenced the types of errors people made. What people call “user errors” are at least as much a feature of the program’s design as they are of the people who make the errors.

When people make a repeated error using my code, instead of asking why these people are idiots, I learned to ask what’s wrong with my software that causes the nice people to look like idiots.

Let me generalize this—the quality of a program extends far beyond its functionality. There is a huge gap between “works right” and “works well.” From my viewpoint, design choices that needlessly reduce the value of the program are defects, every bit as much as coding errors.

Continue Reading

How Many Bars Do You *Really* Have?

So maybe it wasn’t AT&T’s fault after all.

Apple recently revealed that there is a fundamental flaw in its method for calculating how many signal bars to display.  And we have the iPhone 4 (and its “learn to hold your phone the right way” fiasco) to thank for bringing this software snafu to light.

CNN Money shares the following details from Apple:

“Upon investigation, we were stunned to find that the formula we use to calculate how many bars of signal strength to display is totally wrong,” Apple wrote in a statement posted on its website. “Our formula, in many instances, mistakenly displays 2 more bars than it should for a given signal strength.”

That means, for example, that iPhones sometimes display four bars when they should be displaying two. Apple said users reporting a significant drop in bars when they hold their iPhone 4 are probably in an area of “very weak signal strength” but were unaware of that because the phone displayed four to five bars.

“Their big drop in bars is because their high bars were never real in the first place,” the company said.

Perhaps most surprisingly, Apple disclosed that the problem is not confined to the iPhone 4.  The faulty formula has been present in every iPhone model since the 2007 original.  Questions remain about whether the issue is strictly software-related, or if it also involves hardware problems.  However, Apple has said it will release a free software update in the next several weeks to fix the glitch. It will use a new formula recommended by AT&T.
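To see how an error like the one Apple described plays out, here’s a minimal sketch of a threshold-based bar display. The dBm cutoffs are invented for illustration — Apple has never published its real formula — but the off-by-two inflation matches the behavior described above:

```python
# Hypothetical illustration of the off-by-two bar inflation Apple
# described. The dBm thresholds below are invented for this sketch,
# not Apple's actual formula.

THRESHOLDS = [-107, -103, -99, -95, -91]  # cutoffs for 1..5 bars (made up)

def bars_correct(signal_dbm):
    """Count how many thresholds the signal meets: 0 to 5 bars."""
    return sum(signal_dbm >= t for t in THRESHOLDS)

def bars_buggy(signal_dbm):
    """Flawed version: inflates the display by two bars, capped at 5."""
    return min(bars_correct(signal_dbm) + 2, 5)

# A weak signal that deserves 2 bars shows 4 under the buggy formula,
# so gripping the phone and losing two bars' worth of signal looks
# like a dramatic drop from a strength that was never really there.
weak = -101
print(bars_correct(weak), bars_buggy(weak))  # 2 4
```

That is exactly the failure mode in the quote: the “big drop in bars” looks severe only because the starting point was overstated.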