Testing the Limits with Dave Ferguson, Application Security Expert: Part I

Our guest in this installment of Testing the Limits is Dave Ferguson, a former software developer and a specialist in Application Security since 2006. As a consultant, he tested for security holes in countless web applications. Dave also taught developers about security in a formal classroom setting to help them understand how to write secure code. For three years, he held QSA and PA-QSA qualifications from the Payment Card Industry Security Standards Council (PCI-SSC).

Dave currently serves as the Application Security Lead at a multibillion-dollar travel technology company in the USA. You can find him on Twitter or over at his blog.

In the first part of this two-part interview, Dave talks about where organizations’ apps are most vulnerable today, and how he contacted a top-tier streaming media company about a major hole in their security.

uTest: You’re a web application security professional. How and why did you break into this subset of security?

DF: I was an application developer and manager for over a decade, and didn’t give much thought to security at all. In fact, I’m sure I coded my share of vulnerabilities over the years. Eventually I discovered a knack for finding unexpected bugs in our software, such as manipulating a URL to view another person’s data. It wasn’t my job to test for security bugs. It just came from being a curious fellow and wanting to understand how the application would behave if I tried to do “X” as an end user. The QA teams were certainly not finding these types of bugs. In 2004, I decided to pursue a career in the field of Application Security, and by 2006 I had a full-time job doing penetration testing of web applications.
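The kind of bug Dave describes, changing an identifier in a URL to see someone else’s record, is what the industry now calls an insecure direct object reference. Here is a minimal sketch in Python using Flask; the route, data, and field names are hypothetical, not taken from any system Dave worked on:

```python
# Hypothetical Flask route illustrating the URL-manipulation bug Dave
# describes (an insecure direct object reference) and the usual fix.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

# Toy data store; in a real app this would be a database.
ORDERS = {
    101: {"owner": "alice", "total": 49.95},
    102: {"owner": "bob", "total": 12.50},
}

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    order = ORDERS.get(order_id)
    if order is None:
        abort(404)
    # Without this ownership check, any logged-in user could walk the
    # IDs (/orders/101, /orders/102, ...) and read other people's data.
    if order["owner"] != session.get("username"):
        abort(403)
    return order
```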

uTest: As an AppSec lead, you probably do a lot of evangelizing and mentoring. At your current job, you helped roll out a company-wide security awareness initiative known as “Security Tech Talks.” Could you tell us more about what you try to make your peers aware of?

DF: In our Security Tech Talks, we bring in industry experts to describe common security holes in software and how to prevent them. The basic idea is to help developers understand there are 3 “pillars” of successful software: Functionality, Performance, and Security. Development teams are typically really good at the first two. We hope to raise awareness about the Security pillar. The bottom line is that any application, even if it’s high-performance and perfectly functional, can’t be labeled successful if it also allows a hacker to steal database records or lock out users, for example.

uTest: There are many ways someone can exploit software, from taking advantage of weak encryption to SQL injection attacks. Where are organizations’ applications most vulnerable today?

DF: An organization called the Open Web Application Security Project (OWASP) ranks the most critical vulnerabilities in applications. The list is called the OWASP Top 10. It is an industry standard and the best starting place when it comes to understanding application security flaws. SQL injection is the number one vulnerability, followed by broken authentication. Cross-site scripting (aka “XSS”) comes in at number three, but interestingly, it is by far the most common vulnerability in web applications. The damage potential from a SQL injection exploit is typically much higher, however.
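To make the distinction concrete, here is a minimal sketch of a SQL injection flaw and its standard fix, using Python’s built-in sqlite3 module; the table and input are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name = "x' OR '1'='1"  # attacker-controlled input

# Vulnerable: string concatenation lets the input rewrite the query,
# so this returns every row despite the bogus name.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + name + "'").fetchall()
print(rows)  # [('alice', 's3cret')]

# Safe: a parameterized query treats the input strictly as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # []
```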

uTest: And why are these organizations still so vulnerable even after all of the recent high-profile breaches?

DF: First, computer science students are taught very little about security or secure coding principles while in school. Second, it’s the nature of software development today. Everyone is focused on delivering a set of functionality by a certain date, and the end result is imperfect. Inevitably you have functional bugs, flaws in the features you intended to build, and those will be found by the QA team and fixed. However, you also end up with unintended “hidden” functionality that no one is looking for. These are the security holes that allow an attacker to steal your users’ data, corrupt the integrity of the system, deface the site, or take down the application via denial of service.

uTest: A top-tier streaming media company was one such organization you contacted in 2006 about holes in their website. How did this conversation come up and what did you talk to them about?

DF: Back in 2006, I had just learned about a vulnerability known as cross-site request forgery (“CSRF”). I was also a subscriber of their site. One day, while logged in, I noticed that the HTTP requests did not seem to have any protection in place to guard against the CSRF-type attack I had learned about. Sure enough, after a few minutes of testing, I confirmed that CSRF could be exploited to do all sorts of nefarious things, such as changing a user’s address or fully taking over a user’s account by changing their email address and password.
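For readers unfamiliar with the attack: CSRF works because the browser automatically attaches the victim’s session cookie to requests sent to a site, even when a different site triggers them. A hypothetical sketch of the sort of exploit Dave confirmed might look like the following; the victim URL and form fields are invented:

```python
# Sketch of a CSRF attack page: an attacker-controlled page that
# silently submits a state-changing request to the victim site. The
# victim's browser attaches their session cookie automatically, so
# without CSRF protection the change succeeds.
from flask import Flask

app = Flask(__name__)

ATTACK_PAGE = """
<form id="f" action="https://victim.example/account/email" method="POST">
  <input type="hidden" name="email" value="attacker@evil.example">
</form>
<script>document.getElementById("f").submit();</script>
"""

@app.route("/bait")
def bait():
    # Any logged-in subscriber who visits this page unknowingly fires
    # the forged request in their own authenticated session.
    return ATTACK_PAGE
```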

I created a proof-of-concept attack and tried to contact them to show them what could be done. It took about a week to find someone who would listen to me. Basically, that person said “Hmm…OK, thank you,” and hung up. After about a month, I saw their site had changed. A randomized and changing token was added to any HTTP request that performed an action. This was the missing CSRF protection they needed. I like to think I was the one who taught this company that there was such a thing as CSRF!
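The token pattern Dave saw them roll out is what’s now commonly called a synchronizer token. Here is a minimal sketch of the general technique in Flask, not the company’s actual implementation:

```python
# Minimal sketch of anti-CSRF synchronizer tokens: a random per-session
# value is rendered into every form and must accompany every
# state-changing request. Real web frameworks ship this built in.
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

def csrf_token():
    # Created once per session, then embedded as a hidden form field.
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_hex(32)
    return session["csrf_token"]

@app.route("/account/email", methods=["POST"])
def change_email():
    # A forged cross-site request can't read the token, so it fails here.
    submitted = request.form.get("csrf_token", "")
    if not secrets.compare_digest(submitted, session.get("csrf_token", "")):
        abort(403)
    # ...token is valid; apply the email change...
    return "updated"
```

Because the attacker’s page runs on a different origin, it cannot read the token out of the victim’s session, so the forged request is rejected.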

In the end, I was glad they didn’t accuse me of “hacking” them. Other organizations have done exactly that to well-meaning people who notified them about security holes, and it’s been a real problem for the industry. Security researchers perform a valuable service, and it’s not right for them to be accused of illegal hacking when they’re simply trying to help.

Be sure to stay tuned for Part II of our Testing the Limits interview with Dave next Monday.
