Posted on 05/15/2013 in Security Testing
by Stanton Champion
Are you a Linux system administrator? Are you running a kernel newer than 2.6.37 (or 2.6.32 on CentOS)? Then you might want to pay attention. A newly discovered kernel vulnerability allows local users to escalate their privileges to root. That means that anyone with access to the command line can gain root access on just about any recent Linux system.
Ars Technica outlines some of the major issues, while Reddit has a deeper and more technical explanation. Here’s a quick summary:
Deep inside the kernel, in the performance counters subsystem, is a rather innocuous-looking signed integer variable. In other portions of the code, however, the same value is treated as an unsigned integer. The problem is that a user can hand the kernel a very large unsigned integer, and when that value is reinterpreted as a signed integer, it becomes negative. That means that while I might input BIG_NUMBER, by the time it percolates through the code it has become -DIFFERENT_BIG_NUMBER.
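In Python terms, the bit-level reinterpretation looks like this (a sketch of the general signed/unsigned confusion, not the actual kernel code):

```python
import struct

# A 32-bit value supplied by the user, too big to fit a signed 32-bit int.
user_value = 0xFFFFFFFF - 1023  # "BIG_NUMBER"

# Reinterpret the same bits as a signed 32-bit integer, the way the kernel
# does when the unsigned value reaches the signed variable.
as_signed = struct.unpack('<i', struct.pack('<I', user_value))[0]

print(as_signed)  # -1024: BIG_NUMBER became -DIFFERENT_BIG_NUMBER
```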
If you’re a former C developer, you’re probably starting to see how this goes wrong. This particular integer just so happens to be used as an array index, and C is perfectly happy to read or write wherever an index points, no matter how far outside the array that is. A clever attacker can use the unexpected negative index to write data into memory the kernel never intended to expose, including memory that later gets executed with kernel privileges. Both the bug and the exploit are pretty classic, standard stuff.
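Here's a toy Python model of why a negative index is so dangerous (the variable names are made up for illustration; in C there is no bounds check at all, so the write lands wherever the pointer arithmetic says):

```python
# Flat toy "memory": slot 0 holds a function pointer belonging to the kernel;
# the counter array starts at slot 1.
memory = [0xC0DE, 0, 0, 0]   # memory[0] is kernel state, not array data
ARRAY_BASE = 1

def write_counter(index: int, value: int) -> None:
    # No bounds check, exactly like raw array indexing in C.
    memory[ARRAY_BASE + index] = value

write_counter(-1, 0xBAD)     # the attacker's value went negative...
print(hex(memory[0]))        # ...and clobbered the "function pointer": 0xbad
```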
The fix is remarkably simple: change the signed integer to an unsigned integer. Most people will need to contact their Linux vendors to get an updated version of the kernel, while those who are true diehards can of course compile a kernel themselves. Either way, if you manage a Linux machine of any kind, you should definitely upgrade as soon as possible. An exploit is already floating around in the wild.
More details are also available from Red Hat. This is CVE-2013-2094, for those of you keeping track of such things.
Posted on 03/08/2013 in Security Testing, Testing - Web Apps
by Stanton Champion
A universal truth in software security is that your security can come crashing down with one person’s new discovery. So it was with several different web browsers when a clever researcher discovered a new trick to coerce a browser into filling its hard disk with garbage. All a user needs to do is browse to the wrong site on the web, and bye bye disk space.
How does this amazingly clever attack work? Feross Aboukhadijeh explains it in a recent post on his blog where he also links to a proof of concept site that really will fill up your hard drive. (The blog post link above is safe. What you click after you end up on Feross’s blog is up to you.) Here’s how the whole problem works:
HTML5 allows websites to ask a browser to store information about a user’s session on disk. It’s a pretty nifty feature, expanding the power of websites to store session data beyond the minuscule amount permitted by a cookie. The HTML5 spec is also pretty clear that browsers should set a limit on how much a particular site can store:
User agents should limit the total amount of space allowed for storage areas.
What Aboukhadijeh discovered is that subdomains might not count against the same limit. That means that if my browser permits each site to have 5MB, then 1.example.com, 2.example.com, 3.example.com, etc. would each get 5MB. A clever attacker just needs to create a long list of subdomains and then coerce the visitor’s browser into loading them all at once.
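The flaw in miniature: here's a toy Python model of a quota system that treats every subdomain as a separate origin (the real per-origin limits and storage APIs vary by browser):

```python
# Toy model of a per-origin storage quota that fails to aggregate subdomains.
QUOTA = 5 * 1024 * 1024  # 5 MB per origin

disk = {}  # origin -> bytes stored

def store(origin: str, nbytes: int) -> None:
    used = disk.get(origin, 0)
    if used + nbytes > QUOTA:
        raise RuntimeError("quota exceeded")
    disk[origin] = used + nbytes

# Each subdomain counts as a distinct origin, so the per-site cap is meaningless:
for i in range(1000):
    store(f"{i}.example.com", QUOTA)

print(sum(disk.values()) // 2**20, "MB")  # 5000 MB of disk filled by one "site"
```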
So is this a bug with HTML5 or the browsers? Read more…
Posted on 02/07/2013 in Security Testing, Testing - Web Apps
by Stanton Champion
Over the past several years, the web development community has been enthralled with Ruby on Rails. The combination of the Ruby language with the Rails framework has proven extremely powerful, and many of the web’s top sites are built with the two technologies. Twitter, 500px, Groupon, and others were all built on Rails. Both new and veteran developers have adopted the platform because of its ease of use, rich library of components, and outstanding tools.
Late last month, the gleam of Ruby on Rails dulled considerably as a new class of security attacks emerged targeting the framework. Like many security vulnerabilities, the attacks started out as academic exercises which were quickly spun into automated attack bots designed to knock over Rails servers en masse.
Today, anyone who runs a Ruby on Rails server who hasn’t applied an update is probably already compromised. Think that’s overstating things a bit? Patrick McKenzie sounds the alarm loudly in his blog post titled What The Rails Security Issue Means For Your Startup:
It is imperative that you understand that all Rails applications will eventually be targeted by this and similar attacks, and any vulnerable applications will be owned, regardless of absence of these risk factors.
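The Rails flaw boiled down to deserializing attacker-controlled data (YAML smuggled inside request parameters). The same class of bug is easy to demonstrate with Python's pickle module, which likewise runs code while deserializing (this is an analogy, not the Rails exploit itself):

```python
import pickle

class Evil:
    # pickle calls __reduce__ to learn how to rebuild an object; an attacker
    # can make it return any callable plus arguments of their choosing.
    def __reduce__(self):
        return (print, ("this ran during deserialization",))

payload = pickle.dumps(Evil())   # what an attacker would send over the wire
pickle.loads(payload)            # prints the message: code executed on load
```

The lesson is the same as the Rails fix: never feed untrusted input to a deserializer that can construct arbitrary objects.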
Still think that’s overstating things? Read more…
Becoming a security tester can be tough. It requires deep training and expertise in system architecture, computer engineering, network theory, and human psychology. Learning these skills can take considerable time, and it may take years for a tester to truly become a security master.
If you are learning to be a security tester, here are 10 signs you’re not quite ready for the job:
10. Your password appears on this list.
9. Your concept of social engineering is to throw a really great party and then figure out how each person can have the best possible time.
8. You think 56 bit DES ought to be good enough for anyone.
7. You can’t remember if your doctor gave you a SQL injection with your last set of vaccinations.
6. You think Van Eck phreaking is the title of Armin Van Buuren’s latest album.
5. You start looking for a mop when you hear someone mention a buffer overflow.
4. You think phishing means getting stoned and going to a concert by that band from Vermont.
3. When you hear OWASP, you reach for a can of bug spray.
2. You think that cross-site scripting is a fancy form of calligraphy.
1. You worry that if the private key doesn’t open up a little more, it will never be accepted by its friends and public_key will always be the popular one.
Posted on 03/22/2012 in Security Testing
by Stanton Champion
It’s been almost 16 years since Aleph One published his classic article titled Smashing The Stack For Fun And Profit. In it, Aleph One (whose real name is Elias Levy) laid out a template for executing buffer overflow attacks that any computer-savvy hacker could follow. Back then, developers were far less rigorous about boundary checking, and most applications written in C and C++ had exploitable buffer overflow vulnerabilities. With the growth of connected applications over the Internet (written in C and C++, of course), hackers and worm writers remotely felled software from giants like Microsoft, Oracle, Sun Microsystems, and others. Buffer overflows became the scary monster security vulnerability of the late 90s and early 2000s, and even today a buffer overflow is the grand prize of security exploits, conferring black-belt status on whoever finds one.
Since then, a lot has changed. Both Intel and AMD have made a number of improvements to x86, and modern computer architectures have made it much harder to exploit buffer overflows. In addition, newer compilers and operating systems have added a number of tricks that make exploiting compiled applications more difficult.
One of those techniques is Address Space Layout Randomization, or ASLR. Exploiting a buffer overflow requires knowing the location of certain memory addresses. It used to be that those addresses were predictable for a given application, but newer operating systems can shake them up each time the app loads. It’s like shuffling a deck of cards and then expecting you to figure out which card is the queen of spades on the first try. If you shuffle it the same way every time, I’ll figure it out pretty quick. But if you make your shuffle truly random, then I’m out of luck.
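The card-shuffle analogy can be put in code. Here's a toy Python simulation (the entropy value and addresses are made up; real ASLR implementations randomize far more bits):

```python
import random

PAGE = 0x1000
SLOTS = 256  # toy model: 8 bits of load-address entropy

def load_app(aslr: bool) -> int:
    """Return the (toy) base address the library gets loaded at."""
    return 0x400000 + (random.randrange(SLOTS) * PAGE if aslr else 0)

def attack(aslr: bool, tries: int = 10_000) -> float:
    guess = 0x400000  # attacker hardcodes the address seen on their own machine
    hits = sum(load_app(aslr) == guess for _ in range(tries))
    return hits / tries

# Without ASLR the hardcoded address always works; with it, only ~1/SLOTS of the time.
print(attack(aslr=False), attack(aslr=True))
```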
Microsoft will be improving their implementation of ASLR in Windows 8 to make it much harder to predict the location of addresses for an application as well as all the supporting libraries surrounding the application. That means it will be even harder for an attacker to predict addresses, which makes buffer overflows much harder.
Want to learn more? Ars Technica has a great post about how ASLR will be used to improve the security of IE10. Also, check out this article by Paul Makowski about all the things that have changed with computer security since 1996 that make buffer overflows so much harder to exploit.
Posted on 03/02/2012 in Testing - Mobile Apps, Testing - Web Apps
by Stanton Champion
SSL is the protocol that underlies most of the Internet’s encrypted traffic, and lately many people have begun to realize that SSL is flawed in a pretty obvious and easily exploited way.
SSL relies on certificates to set up a secure connection between computers. Generating a certificate is easy, and it’s possible to create a valid certificate for any address on the Internet. Certificate authorities (or CAs) ensure trust and prevent mayhem by validating that a certificate’s owner is who they claim to be and then signing the certificate to mark it as legitimate.
When you visit a secure website, your browser gets a certificate signed by an authority saying that this website is authentic. The browser compares that signature against its own built-in list of known certificate authorities (and their public keys). How many authorities does your browser know about? Try more than 600!
The SSL certificate authority model works well if you assume the authority treats its super-secret private key like the gold in Fort Knox: the key is only handled by a small group of Internet priests who open the vault in a solemn ritual, remove the key, calculate a signature using nothing but slide rules and chalkboards, and then hastily return their private key to the sacred vault. Obviously, most CAs skip this time consuming and expensive process and trust their computer systems to manage their private key securely in a way that’s resistant to theft by outsiders.
If you think 600 different people can secure their data perfectly, then have we got news for you. I could throw a party for 600 of the smartest people in the world, and chances are good that one of them would forget to wear deodorant. You simply can’t trust 600 different certificate authorities to properly manage their private keys.
And this is the problem. All it takes to compromise SSL is to get access to a single private key from one of the 600 certificate authorities. Once I have that, I can create a certificate claiming to be any site on the web, and your browser will accept it without question.
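The deodorant joke is really a probability argument. If each CA independently has even a tiny chance of a key compromise, the system-wide risk explodes with 600 of them (the per-CA probability below is an illustrative assumption, not a measured figure):

```python
def any_compromised(p: float, n: int = 600) -> float:
    """Chance that at least one of n independent CAs suffers a key compromise,
    given each has probability p of being compromised."""
    return 1 - (1 - p) ** n

# Even at an optimistic 0.1% per CA, the overall risk is enormous:
print(f"{any_compromised(0.001):.0%}")  # roughly 45%
```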
Posted on 12/13/2011 in Security Testing, Testing Trends
by Jamie Saine
Last month there were several reports of cyber attacks on water treatment plants (Houston, TX and Springfield, IL come immediately to mind). The Springfield incident turned out to be a major miscommunication, but the Houston attack is holding strong, and at least three other attacks have been confirmed by the FBI. These attacks were so real, in fact, that Michael Welch, deputy assistant director of the FBI’s Cyber Division, recently announced that the FBI will be increasing its cyber budget by roughly 12%. Here’s a recap from Sophos’ Naked Security blog:
At a recent security conference Michael Welch, the deputy assistant director of the FBI’s Cyber Division, gave a speech where he discussed the issue of SCADA security.
Information Age magazine reported on his speech and quoted Welch as saying:
"We just had a circumstance where we had three cities, one of them a major city within the US, where you had several hackers that had made their way into SCADA systems within the city."
… It’s great that Welch acknowledges the work we have to do in this area and even went so far as to suggest the FBI will double the size of their Cyber division in the next 12 to 18 months.
Sound too good to be true? Then it probably is.
Posted on 10/10/2011 in Testing Trends
by Matt Solar
We recently wrote about the need for security testing on medical equipment, but it looks like an even larger virus threat has come to light – on U.S. Predator and Reaper drone weapons systems.
While an unofficial source said they suspect it’s benign, they also added, “But we just don’t know.” The thought of an attack drone being hacked is chilling, to say the least. Jalopnik has a nice write-up of some of the drones’ historic missions (and the virus), but this seems to reinforce the hypothesis that the United States is entering a “Code War”.
Here’s the crux:
The virus, first detected nearly two weeks ago by the military’s Host-Based Security System, has not prevented pilots at Creech Air Force Base in Nevada from flying their missions overseas. Nor have there been any confirmed incidents of classified information being lost or sent to an outside source. But the virus has resisted multiple efforts to remove it from Creech’s computers, network security specialists say. And the infection underscores the ongoing security risks in what has become the U.S. military’s most important weapons system.
“We keep wiping it off, and it keeps coming back,” says a source familiar with the network infection, one of three that told Danger Room about the virus. “We think it’s benign. But we just don’t know.”
For those interested, we have a new whitepaper on Software Security Testing.
Posted on 06/23/2011 in Testing Trends, uTest Stuff
by Matt Solar
We’re just about halfway through the year but I’m calling it now: 2011 is the year of the hacker. Grim? Maybe. Just about every week there has been a new story about a company being hacked, and it’s costing companies millions of dollars and doing even more damage to their brand reputations.
While only two of these hacks really impacted a company I use heavily, I thought I’d do a quick countdown on the top hacks of 2011 and the associated costs.
7) Dropbox
The file-sharing site opened its doors for four hours this week, allowing anyone with a login to access other users’ accounts. It turns out that it was a self-inflicted wound: Dropbox broke its own authentication system. While the financial impact probably won’t be released, just browse through the 600+ customer comments to see how the issue and their response impacted the brand. It’s a bug, not a hack, but certainly something that could have been avoided with ample testing prior to a full launch.
Cost: A self-reported “much less than 1%” of their more than 25 million users were impacted to an undisclosed extent.
6) MovableType / PBS.org
In pure retaliation, a group of hackers targeted PBS.org in response to a Frontline episode’s portrayal of WikiLeaks leaker Bradley Manning. The hackers gained control of PBS.org and published false information. PBS was not able to immediately regain control and was forced to use its Facebook page as its primary news outlet.
Cost: One of their senior correspondents, Judy Woodruff, wrote a post titled “Calculating the Cost of an Attempt to Silence the Press”. While PBS didn’t disclose any financial costs or specific data loss, it has certainly been a struggle for them to regain control of their site and all of their content.
Posted on 02/01/2011 in Testing Trends
by Stanton Champion
Who wouldn’t like the idea of cracking the lottery? Just figure out the code, and incredible riches can be yours! But the lottery is unbreakable: audited by governments, contractors, corporations, and independent agencies. Or at least that’s what they want you to think.
A professional statistician named Mohan Srivastava managed to discover a flaw in certain kinds of scratch-off lottery games that allows a player to gain a winning edge by doing some simple math. Wired has the whole story, and it’s well worth reading. The summary is this:
Scratch-off lottery tickets aren’t totally random. A computer prints the tickets so that a certain number are guaranteed to win – thus meeting the odds requirements set by the laws of different states. That means that a computer program has to spit out both winning and non-winning scratch-off lottery tickets. The game that Mr. Srivastava cracked had two components – a visible grid of numbers and a scratch-off section with more numbers. You play the game by scratching off the hidden section and looking for tic-tac-toe patterns in the grid.
What Mr. Srivastava realized is that the winning tickets had a slightly different statistical distribution of data in the grid section than non-winning tickets. Knowing this, he could pick out winning tickets with 90% certainty, all without scratching a single lottery ticket.
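The tell, as Wired describes it, was how often each number repeated on the visible boards: numbers appearing exactly once ("singletons") marked winning squares far more often than chance allows. A toy version of the check, on a made-up miniature board (real tickets showed several boards with many more numbers):

```python
from collections import Counter

def singletons(board):
    """Return the positions of numbers that appear exactly once on the board."""
    counts = Counter(n for row in board for n in row)
    return [(r, c) for r, row in enumerate(board)
                   for c, n in enumerate(row) if counts[n] == 1]

# A made-up 3x3 board for illustration.
board = [[12,  7, 12],
         [ 7,  3, 25],
         [12, 25,  7]]

print(singletons(board))  # [(1, 1)] -> the lone "3" is the square to bet on
```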
What are some lessons for testers?