We’ve all seen them. We’ve all used them. And we’ve all learned a new word or two in the process. I’m talking, of course, about captchas, which are used to verify that a response comes from a human rather than an automated script. The most popular use case (I think) involves changing passwords.
Anyway, if you’ve ever doubted the effectiveness of captchas, you’re certainly not alone. Here’s a great story from CNet.com on captcha fail:
A team of Stanford University researchers has bad news to report about captchas, those often unreadable, always annoying distorted letters that you’re required to type in at many a Web site to prove that you’re really a human.
Many Captchas don’t work well at all. More precisely, the researchers invented a standard way to decode those irksome letters and numbers found in Captchas on many major Web sites, including Visa’s Authorize.net, Blizzard, eBay, and Wikipedia.
Their decoding technique borrows concepts from the field of machine vision, which has developed techniques to control robots by removing noise from images and detecting shapes. The Stanford tool, called Decaptcha, uses these algorithms to clean up the image so it can be split into more readily recognized letters and numbers.
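Decaptcha itself isn’t public, but the two-step idea the article describes — clean up the noise, then split the image into individual glyphs — is easy to illustrate. Here’s a minimal sketch in Python using only NumPy, with a toy 10×20 “captcha” of two solid blobs standing in for real letters (the image, the noise positions, and both helper functions are my own illustrative inventions, not the Stanford tool):

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter: suppresses isolated speckle noise (the 'denoise' step)."""
    padded = np.pad(img, 1, mode="edge")
    stacked = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                        for r in range(3) for c in range(3)])
    return np.median(stacked, axis=0)

def segment_columns(binary):
    """Split a binarized image into character slices at blank-column gaps
    (the 'segment into letters' step)."""
    ink = binary.any(axis=0)              # True for columns containing ink
    segments, start = [], None
    for x, has_ink in enumerate(ink):
        if has_ink and start is None:
            start = x                     # a character begins
        elif not has_ink and start is not None:
            segments.append(binary[:, start:x])  # a character ends
            start = None
    if start is not None:
        segments.append(binary[:, start:])
    return segments

# Toy "captcha": two solid blobs with a blank gap between them.
img = np.zeros((10, 20))
img[2:8, 2:7] = 1.0       # first "character"
img[2:8, 12:18] = 1.0     # second "character"

# Sprinkle a few stray noise pixels, as a distorted captcha might contain.
noisy = img.copy()
for r, c in [(0, 9), (5, 10), (9, 8), (1, 15), (4, 0)]:
    noisy[r, c] = 1.0

cleaned = median_filter3(noisy)   # denoise
binary = cleaned > 0.5            # binarize
chars = segment_columns(binary)   # split into glyphs
print(len(chars))                 # -> 2: both "characters" recovered
```

A real solver would then feed each segment to a character classifier; the point the researchers make is that once segmentation succeeds, recognition is the easy part.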
What do you suppose could be the main reason why captchas have been less than effective? You guessed it: