Learn Security Testing Basics at uTest University

Data breaches, hacking, and other security leaks have been in the news for months now. Earlier this year, the Heartbleed bug affected data security at big names like Google, Yahoo, Instagram, Pinterest, and Netflix. Organizations of all sizes from coast to coast are constantly dealing with security threats and breaches. New York suffered 900 data breaches last year, according to a report from the State Attorney General. In California, an insurance company inadvertently exposed the Social Security numbers of 18,000 doctors on a public website.

It seems that the trend of big data breaches making the news is not stopping. This PC World article points out the 5 biggest data breaches of 2014 so far and the list includes recognizable names like eBay, Michaels Stores, and the Montana Department of Public Health. All of this media attention puts the security industry – and security testing – in the spotlight.

You can get up to speed on security testing using our course track, which includes:

Continue Reading

Throwback Thursday: Hacking the Mainframe

If there is a sexy side of software testing, it is likely Security Testing. I know this because it’s the only type of testing (aside from game testing) that Hollywood seems to care about. For some reason, the hackers portrayed in movies are always trying to access vital intelligence contained within the mainframe.

The truth is that mainframes – which are essentially just large-scale computer systems – are actually throwback tech, and today most companies don’t use them. According to the Huffington Post, “the manipulation of massive amounts of data, once the hallmark of mainframe computers, can now be done by server farms which easily connect to other systems, cost far less money, and require less training to administer.”

Continue Reading

How Testers Can Help Regain the Trust of Users

Stop me if you’ve heard this before: Users are becoming increasingly uneasy with the way in which apps collect, store, and share their personal information. It’s a story we’ve discussed a lot here on the uTest Blog over the years (and more recently, on the Applause Blog), but unfortunately it’s a story that isn’t going away anytime soon.

Late last week, MEF Global Chairman Andrew Bud penned a thoughtful guest post for VentureBeat on this very topic, where he argued that trust in apps is on a downward trajectory. In his view, it all has to do with personal information.

In many ways, the apps economy runs on personal information. It’s the currency – the lifeblood – and the main reason why apps can succeed with a freemium model. As Bud argues, it’s also the reason why trust is quickly declining. He writes:

What underpins this transactional relationship is consumer trust and it follows that, for the mobile industry, this should be the watchword for how mobile businesses build and retain customers. The less confidence people have in their mobile device, the less they will use it and the apps on it. That’s bad news for everyone.

Yet for almost as long as apps have been on the market, consumers have been bombarded with stories in the press and across social media platforms that raise privacy concerns about the way apps gather and store and use personal information. As an industry we have a long way to go.

He backs his opinion with some hard figures from a recent MEF/AVG Technologies study, which found that:

40 percent of consumers cite a lack of trust as the reason they don’t purchase more via their mobile — by far the most significant barrier. And it’s getting worse. In 2012, 35 percent named trust as an obstacle compared to 27 percent in 2011.

Second, 37 percent claim a lack of trust prevents them from using apps once they’ve installed them on their phone. Third, 65 percent of consumers say they are not happy sharing their personal information with an app.

Hard to argue with numbers like that. So what’s to be done? While Bud places a small amount of the burden on users – arguing that they should be more aware of the threats – he places most of it on the industry as a whole: marketers, developers, publishers, aggregators, executives and so forth.

And to that I would add software testers.

Continue Reading

4 Security Lessons From the Great Bitcoin Bug

Think twice before trusting us with your personal information…said no 21st-century business ever. Whether it’s the swipe of a card at a local convenience store or that social media app you always find yourself on, using software that could potentially compromise your information is the norm, not the exception.

We’d go insane if we worried about every single transaction that could lead to identity theft or a depleted bank account. So instead, we put our trust in the technical leadership of brands to avoid these disasters on our behalf. Most of the time, there’s nothing to worry about. Most of the time.

Mt.Gox, the world’s largest Bitcoin (digital currency) exchange, recently lost track of 740,000 Bitcoins, resulting in a projected $350 million loss after hackers allegedly exploited a bug in the system. Here’s the scoop:

“In its announcement on Monday, Mt. Gox said that a bug in the Bitcoin software made it possible for someone to use the Bitcoin network to alter transaction details to make it appear that a Bitcoin transfer had not taken place when, in fact, it had.”
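The “transaction details” at issue here are an instance of transaction malleability: Bitcoin identifies a transaction by a hash that covers its signature, and a signature can be re-encoded without invalidating the transfer. A toy sketch of why that breaks naive bookkeeping (illustrative field names and serialization, not the real Bitcoin format):

```python
import hashlib

def txid(tx: dict) -> str:
    # Illustrative only: the ID hashes the whole transaction, signature included.
    blob = "|".join(str(tx[k]) for k in ("sender", "recipient", "amount", "signature")).encode()
    return hashlib.sha256(blob).hexdigest()

original = {"sender": "exchange", "recipient": "customer", "amount": 10, "signature": "sig-encoding-A"}
# An attacker re-encodes the signature; the transfer itself is still valid...
mutated = dict(original, signature="sig-encoding-B")

# ...but the ID the exchange is watching for no longer matches, so software that
# asks "did transaction X confirm?" wrongly concludes the payout failed and retries it.
assert txid(original) != txid(mutated)
```

The defensive lesson for any ledger-like system is to track whether the *outputs* you care about arrived, not whether a particular mutable identifier showed up.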

Mt.Gox reportedly handled about 80% of the world’s digital currency trading! Trading and withdrawals were halted, users were met with a blank page on the site, and the “cryptocurrency” industry is now dealing with a major blow to its credibility. There are lessons to be learned from this heist into the Bitcoin network, both for software developers and for consumers alike. Here are four, in no particular order:

Lesson 1: If a system can be hacked, it will be hacked. Someone will always try to get their hands on valuable information. Whether it’s the stealing of credit card numbers directly, or the selling of emails and passwords on the internet, criminal hacking is a business – a very big business in fact. So stealing Bitcoins (a currency stored in virtual wallets and not backed by any country’s currency) and exchanging them for another currency? An internet thief’s dream come true. The same is true for any company really: If there is sensitive data to be had, it’s only a matter of time before someone goes looking for it.

Continue Reading

What Will Emerging Tech & Wearables Do to Data Security?

Mobile apps aren’t just a fad, they’re the content delivery and computing future. A recent report from Gartner says that by 2017, users will have downloaded apps more than 268 billion times and generated a revenue stream of over $77 billion, and Gartner predicts that wearable devices will drive 50% of total app interactions by then. Apps won’t just be on smartphones and tablets, but on everything from home appliances to wearable technology – even our cars and homes will consume and run apps. We already see some of this today with fitness trackers and smart thermostats, but expect to see a lot more!

Cognizant computing is another huge theme in Gartner’s predictions. (Cognizant computing is when tasks and services are performed automatically on behalf of the user, often based on collected data and statistics on that user, such as preferences and past actions.) Many apps for things like appliances will need to run passively (refrigerators need to run all the time!) which is where cognizant computing comes in. The apps will learn how we work as individuals and tailor their function to suit their owner’s needs.

While this is fantastic news for app creators and marketplaces, it also presents a new set of challenges – how do you create, test, secure, and maintain these sorts of apps? How will apps integrate with each other? How will data be shared securely and efficiently? How will they look and feel when a user interacts with their visual display or partner program?

Data security might be the number one concern moving forward. We’ll suddenly have new categories of potentially sensitive data to keep safe while still allowing our apps to learn from it. Data points like our weight, eating habits, energy usage, and even when we leave the house are sensitive bits of information that need to be handled with care. Take Nest, the “smart” thermostat company Google announced a couple of weeks ago that it is acquiring. Nest immediately posted on its blog to clarify how customer data would be shared with Google, explaining that the data would only be used to improve Nest products and services. Not all companies, however, share the value of “don’t be evil.”

The Huge Cost of a Data Breach

It’s no secret that some retailers – and their shoppers – had a not-so-happy holiday season. Several big-name retailers (and even a few restaurants in uTest’s backyard, the Boston area) suffered data breaches, leaving millions of customers with compromised information. Exactly how much these companies have suffered because of the security debacle remains to be seen, but the projected price tag isn’t pretty.

Marketplace did a story on data breaches and came across a startling number. According to Larry Ponemon, chairman of the Ponemon Institute, Target’s data breach could cost the company “around $760 million.”

You know that poor security could cost you, but did you have any idea that it could be to the tune of hundreds of millions of dollars? Even smaller companies with fewer customers will feel the sharp consequences of a data breach.

[Ponemon] says his studies have shown that PR is hugely important in a data breach, and the worse it’s handled, the more customers a company is likely to lose. “It’s called churn: How many people will stop being your customer as a result of data loss or theft? It can be more than half of the total cost of a data breach.” …

But waiting for all the facts can trigger costs of its own, like lawsuits and fines, says Ted Julian, chief marketing officer with CO3 Systems, which helps companies manage data breaches. “There are substantial privacy breach disclosure requirements,” he says. “Failure to meet those can trigger fines which can add up quite substantially.” Julian says there are strict state and federal rules about how soon you have to report a data breach, and companies have to get smart about it quickly. All companies.

Between major financial losses, potential fines and lawsuits, and losing enough customers to account for half the total cost, a data breach could be catastrophic for any company. These major retailers likely perform some degree of security testing, but let these stories serve as a reminder that your business – and all the apps, data, and software that go with it – doesn’t live in a sterile environment behind a firewall. It lives in the real world, where people looking to do harm can find a chink in your security armor big enough to damage any company. Security testing with an expert who can think like a hacker should be part of your regular business maintenance – it could save you millions in the long run.

A Lesson from Apple: Always Ask for User Consent with In-App Purchases

As you might have heard, Apple will be issuing refunds – to the tune of $32.5 million – to customers whose children made in-app purchases without consent. The settlement was made with the FTC, which filed a complaint about Apple’s unfair billing practices. The complaint states that a user’s iTunes password is stored by an app for 15 minutes after having been entered, during which time the app could use the password for in-app purchases, often without having informed account holders. (Even when a pop-up requesting a user’s password appeared, it often did not explain that the password would be used to make or authorize purchases.) What that boils down to is tens of thousands of payment transactions initiated by children without the informed consent of their parents. As part of the settlement, Apple must change its billing practices to ensure it has obtained an account holder’s “express, informed consent” before charging them.

It may not seem like that big of a problem until you take a closer look. Many of the problem payments went to apps listed in the “Kids” and “Family” categories that are designed and marketed toward children (like Dragon Story, Tiny Zoo Friends, Tap Pet Hotel, Racing Penguin, Flying Free, and many others), and these games do not paint a clear picture of the difference between real money and in-game currencies, which is innately confusing to younger minds. One customer reported that her daughter spent $2,600 in one app alone. Most of the complaints state that the children were completely unaware that they were spending real money at all.

This sends a clear signal to app developers and distributors: design your app with clear language explaining when and where charges will be incurred, and always assume a lack of consent until it’s been given on a per-purchase basis. Make sure you understand your audience and stay away from design and verbiage that may confuse your users as to which costs are real and which are not. Google Play, for example, asks for a user’s password on every purchase unless you specifically opt out of that security feature, but depending on a game’s messaging, you may not know that something has a real-money cost until that screen pops up.
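The per-purchase rule is easy to express in code. A minimal sketch of a consent gate (a hypothetical class, not Apple’s or Google’s actual billing API):

```python
class PurchaseGate:
    """Assume lack of consent by default; one confirmation covers exactly one charge."""

    def __init__(self):
        self._consented = False

    def confirm(self, prompt_showed_price: bool) -> None:
        # "Express, informed consent": the user must have seen the real cost.
        if not prompt_showed_price:
            raise ValueError("cannot record consent without showing the charge")
        self._consented = True

    def charge(self, amount_cents: int) -> bool:
        if not self._consented:
            return False          # no cached-password window, no silent billing
        self._consented = False   # consent is per purchase, not a time window
        return True

gate = PurchaseGate()
assert not gate.charge(499)              # no consent yet: refuse the charge
gate.confirm(prompt_showed_price=True)
assert gate.charge(499)                  # one confirmed charge goes through
assert not gate.charge(499)              # the next one must be confirmed again
```

The key design choice is that consent is consumed by the charge rather than cached for a time window – exactly the difference between this sketch and the 15-minute password cache the FTC complained about.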

The settlement proves that app makers and distributors have a responsibility to their user base to provide a safe, secure product that limits the chance of unauthorized use and does not assume consent.

Snapchat Fumbles the Privacy of 4.6 Million Accounts

It hasn’t been a great start to the new year for Stanford.

Yesterday, its football team lost to Michigan State in the Rose Bowl. At halftime its band formed the silhouette of a ghost with its tongue sticking out – a reference to Snapchat, which was founded by alums. Sadly, those founders had little time to appreciate the shout-out because they’ve likely been busy trying to weather a privacy storm.

If you haven’t yet seen the CNN, USA Today, or TechCrunch headlines, Australian hackers Gibson Security disclosed two exploits in the Snapchat app. Snapchat is the sixth most downloaded free app of 2013 in the Apple App Store, ahead of Instagram, Twitter, and Facebook, and was downloaded more than 10 million times from Google Play alone. So losing 4.6 million usernames and phone numbers represents a significant share of its US user base.

So what drove these hackers to publish Snapchat’s iOS and Android APIs and the two exploits? Apparently Gibson Security originally notified Snapchat in August about the vulnerabilities and grew frustrated when the exploits weren’t patched after four months.

According to an email correspondence with ZDNet, Gibson Security explained:

“[Snapchat could have fixed this] by adding rate limiting; Snapchat can limit the speed someone can do this, but until they rewrite the feature, they’re vulnerable. They’ve had four months, if they can’t rewrite ten lines of code in that time they should fire their development team. This exploit wouldn’t have appeared if they followed best practices and focused on security (which they should be, considering the use cases of the app).”
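The rate limiting Gibson Security describes really is on the order of ten lines in most stacks. A minimal token-bucket sketch (hypothetical names; in practice you would apply one bucket per client to an endpoint like the phone-number lookup):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)   # e.g. 5 lookups/sec per client
results = [bucket.allow() for _ in range(100)]
# A tight burst drains the bucket: roughly the first 10 calls succeed,
# and the rest of the back-to-back requests are refused.
```

Against the Snapchat exploit, a cap like this turns “enumerate millions of phone numbers overnight” into a job that takes months, which is usually enough to make the attack uneconomical.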

The 4.6 million usernames and phone numbers were posted yesterday on a site called SnapchatDB.info (now a suspended domain), which explained to TechCrunch:

“Our motivation behind the release was to raise the public awareness around the issue, and also put public pressure on Snapchat to get this exploit fixed. It is understandable that tech startups have limited resources but security and privacy should not be a secondary goal. Security matters as much as user experience does.”

So how will this impact Snapchat’s user satisfaction levels? Probably not significantly. Snapchat’s Android and iOS apps are currently rated a 43 and 48 in Applause Analytics, respectively. Somewhat predictably, security is its lowest rated attribute with identical 24s across both platforms.

Let’s hope this disclosure drives a change in development and testing practices. As of this morning, Snapchat lists 5 open positions, but none of them are security- or testing-oriented.

8 Biggest Security Threats of 2013

We’ve done a pleasant end-of-year recap, so now it’s time for one that will motivate you for next year through fear. As another year winds down, it’s time for the annual roundup of the worst security issues to spring up in the past 12 months.

This year’s list comes to us from CSO, which spoke to security executives and industry analysts to highlight eight risks you should be extra careful to guard against next year. Some of these are constant but growing threats, some should be common sense, and some might take you by surprise.

More Sophisticated DDoS

“Prior DDoS attacks leveraged the many thousands of personal computers that a typical botnet herd might utilize for their attack engine,” [John South, CSO at Heartland Payment Systems] says. “However, the huge multiplier in the newer efforts were botnets that consisted of compromised server-class equipment with much more capacity and horsepower.”

Where a typical DDoS attack in 2012 might reach 3 or 4 Gbps, South says, the new attacks have bursts of more than 100 Gbps.

Attack of the Botnets

Whereas the phishing attempts several years ago might have been replete with spelling and grammar errors, “the phishermen today have upped their social engineering skills and coupled these with much more credible messaging,” South says. “Their success in compromising computer systems, and in turn accessing personal identity, credit card and bank account data, is illustrated in the increasing number of account takeovers that were seen in 2013.”

Ignored Insider Threats

“Many Web-facing organizations are strictly focused on external threats, which include espionage agents, saboteurs, and cyber criminals,” [Michael Cox, president of SoCal Privacy Consultants] says. “However, businesses are constantly being surprised by breaches caused by workforce members and third-party services providers.”

Since these trusted parties have the greatest access to sensitive information, the average cost of breaches caused by trusted parties is greater than those caused by external threats, Cox says. “The false sense of security organizations have with trusted parties has allowed breaches by these actors to grow more rapidly than those by external threats.”

Continue Reading

Real World iOS App Security Bug

There’s an old saying in the military that “generals always fight the last war, especially if they have won it.” Quite simply, it means that when preparing for the next kind of threat, one should resist looking too closely at past wars fought at different times with different technologies and circumstances. The French were overrun despite the Maginot Line – the perfect defense against further aggression by the Germany of World War I, but practically useless against the German Blitzkrieg of World War II.

So it is with app security. Early in the era of web security, there was still a strong fear of classic desktop vulnerabilities like stack smashing and buffer overflows. Now that we’re in the era of mobile apps, the security world is still stuck in the realm of web security. Despite the fact that it’s been over two years since the OWASP group released the top 10 vulnerabilities for mobile, few mobile developers think to security test their apps.

Last week, researchers revealed yet another vulnerability type for iOS applications. By using a man-in-the-middle approach, an attacker can trick an iOS app into communicating with a web API on a malicious URL, not just once but forever after the fact. Ars Technica has the details of the attack, discovered by the security group Skycure, but the summary is simple.

As it turns out, many iOS apps do not have strong protections against malicious 301 redirects for API calls. A 301 redirect is basically a web command that says, “This thing isn’t here anymore. It’s over there instead.” Webmasters use it when moving content from one place to another so that links work smoothly even when the content address has changed. But if I can interfere with your app’s communications and say “your API is really over here on my malicious server,” then I can theoretically intercept or even modify content used by your app. Because the 301 code signals a permanent redirect, your app will continue to use my malicious server long after I have stopped directly interfering.

The fix is simple: make sure your app uses SSL instead of communicating in plaintext. It’s odd to think that in this day and age anyone would ship an unencrypted API, but here’s the perfect reason not to. By going the extra step to encrypt your API’s traffic, you’re also making sure that your users only see the right content and not something malicious or bogus. SSL certificates are cheap compared to the cost of your app losing its reputation over security problems.
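Beyond encrypting traffic, a client can also refuse to honor suspicious redirects in the first place. A sketch of that check using only the standard library (the helper name and URLs are hypothetical, and a real client would run this on every redirect hop before following or caching it):

```python
from urllib.parse import urlparse

def safe_redirect(original_url: str, location: str) -> str:
    """Validate a redirect target before following (or permanently caching) it."""
    orig, dest = urlparse(original_url), urlparse(location)
    if dest.scheme != "https":
        raise ValueError("refusing redirect to a non-HTTPS URL")
    if dest.hostname != orig.hostname:
        raise ValueError("refusing redirect off the API host")
    return location

# A same-host HTTPS redirect is fine...
safe_redirect("https://api.example.com/v1/feed", "https://api.example.com/v2/feed")

# ...but a "permanent" redirect onto an attacker's server is rejected.
try:
    safe_redirect("https://api.example.com/v1/feed", "http://evil.example.net/v1/feed")
except ValueError:
    pass  # the man-in-the-middle's 301 never takes hold
```

Combined with SSL, this closes both halves of the attack: the attacker can neither read the API call to inject a redirect nor point the app at a host it shouldn’t trust.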