Fraud Prevention

Automation: The Cost of Being Human

June 13, 2019 · 3 min read

Cybersecurity is exploding. Beyond technical innovation and breaches that have changed the course of history, the mainstream is listening. Now more than ever, people are as mindful of their online security as they are of the locks on their front doors. And brands are paying attention. In the fallout from catastrophic incidents like the WannaCry ransomware attack and the Equifax data breach, brands have scrambled to meet our growing demands for protection, privacy, and integrity. We demand as much as we know to ask for. And yet, we are losing the battle over what it means to be human online.

In September, I took FunCaptcha to the floor at AppSec USA. No one really expected to see a CAPTCHA at a respected conference for software security leaders and researchers; for most, the very notion of CAPTCHA security is an oxymoron. Lured in by our Aussie swag, attendees left with more than just a Tim Tam. We were met with incisive questions and healthy scrutiny of a technology that has failed them for 20 years. That kind of inquiry gave us an opportunity to settle the score on CAPTCHA, and to move past two decades of second-hand frustration, before we even mentioned FunCaptcha. As we drilled into longstanding beliefs about CAPTCHA, the question that always remained was "What difference does it make to me?" Distil that question further, and it demands an answer that puts a value on what it means to be human in online spaces.

The following month, I was invited to present a lightning talk for Ignite at the O'Reilly Security Conference. I set out to answer those enduring questions by showing an effect CAPTCHA has on all of us, one that has never been spoken about publicly, and a vulnerability that exists without us even realizing it: the Ghost Queue, a verification loophole where bots manipulate what humans can buy and how much they pay. I gave one example where cheap airfares are reserved indefinitely by bots that never transact. These nefarious bots work for competing airlines and airfare aggregators, forcing us to buy more profitable fares. It struck a chord in the room. How could this kind of automated abuse exist without people knowing? And more importantly, why was it still happening? I had answers for those questions too, and showed how bots exploit the same vulnerability across a range of online spaces. What began as a 5-minute presentation turned into two days of conference chatter.

How could this kind of automated abuse exist without people knowing?

Consequently, attendees approached our booth with a new perspective. For them, we had reframed the discourse on automated abuse from the intangible cost of bots to the cost of simply being human. No longer was the value of online verification lost in the abstract; CAPTCHA was relevant and tethered to their wallets. They simply could not compete with bots, whose capabilities far exceed those of humans in online spaces. These truths opened the floor to a new kind of dialogue about CAPTCHA. Attendees wanted to understand the flaws in online verification that have failed to stop bots and perpetuated distrust in CAPTCHA. They also held us to account, demanding more than just our word on how FunCaptcha solves this complex problem. In fact, we remain the only anti-bot service in the world to meet that demand, offering a guaranteed SLA against automated abuse.

This problem isn’t going away, and neither are we.