Bot-driven fraud is perpetrated by automated software agents capable of interacting with online businesses in a convincingly human-like way. Attackers grow more cunning as they learn the procedures and defenses businesses have implemented to prevent bot-driven fraud attacks, and they have devised increasingly stealthy techniques to enter, and sometimes corrupt, systems by mimicking human behavior.
Many bots are effective through sheer volume; only a fraction of them need to succeed for the fraudster to make money. These attacks are generally carried out by simple, unsophisticated programs. Arkose Labs detected a 70% rise in bot-driven attacks attempting new account registrations over a single quarter at the end of 2019.
Why do attackers use bots?
More advanced bots can mimic human behavior with a high degree of accuracy. Whatever the skill level, bots are used in the fraud ecosystem for three primary reasons:
- Preparatory activity for downstream attacks, such as credential testing
- The primary avenue for an attack, e.g. credential stuffing
- To evade known anti-fraud defenses at scale
Different bot attack types have their own distinct paths to monetization. Much of the low-value, high-volume activity has very low success rates and depends on executing at sufficient scale to ultimately turn a profit. These bot-driven fraud attacks may include sending spam messages at scale, where only a few of the hundreds of malicious links sent need to be clicked to make the attack profitable for the fraudster.
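The economics of these high-volume, low-success campaigns come down to simple expected-value arithmetic. The sketch below uses entirely hypothetical figures (click rate, payout, and sending cost are illustrative assumptions, not measured campaign data):

```python
# Illustrative break-even math for a high-volume spam campaign.
# Every figure here is a hypothetical assumption, not real data.

messages_sent = 1_000_000
click_rate = 0.0005          # assume 0.05% of recipients click the link
revenue_per_click = 2.00     # assumed payout per successful click (USD)
cost_per_message = 0.0001    # assumed sending cost per message (USD)

expected_revenue = messages_sent * click_rate * revenue_per_click
total_cost = messages_sent * cost_per_message
profit = expected_revenue - total_cost

print(f"Expected revenue: ${expected_revenue:,.2f}")  # $1,000.00
print(f"Total cost:       ${total_cost:,.2f}")        # $100.00
print(f"Profit:           ${profit:,.2f}")            # $900.00
```

Even with a success rate of one in two thousand, the campaign clears a profit, which is why defenders cannot rely on low per-attempt success alone.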
Bots are also used for indirect monetization: attacks that don’t themselves cause financial losses but lay the groundwork for future monetization by the fraudster. Attackers target customer touchpoints within platforms and online gaming systems, beyond the typical attack points of new account origination, account login, and payments. Fraudsters can monetize these touchpoints in many ways, such as creating fake reviews, upvoting or downvoting videos, or abusing in-platform economies in online games.
Recommended Solution Brief: API Abuse: Protect APIs From Bots Impersonating Legitimate Traffic
New advancements in fraud attacks
Attackers have done their homework; they know the processes and defenses that businesses have in place to prevent fraud attacks and how to overcome them. Bots can also be combined with human-based activity to launch attacks that can be extremely difficult for businesses to detect, let alone stop.
According to new data, US eCommerce transactions have increased by 49% since April 2020, compared to the baseline period in early March. Consumers are transacting through a number of different channels — desktops, laptops, mobile devices, and gaming consoles — providing a number of different entry points for fraudsters to target. The traffic that powers the modern digital economy flows through APIs, creating additional attack surfaces, as these can be directly targeted by bots mimicking traffic from a legitimate source.
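One simple (and easily evaded) signal for spotting scripted traffic against an API is the regularity of request timing: basic bots often fire requests at near-constant intervals, while human activity is bursty. A minimal sketch, assuming per-client request timestamps are already being collected (the function name and threshold are illustrative, not a real product API):

```python
import statistics

def looks_scripted(timestamps, min_requests=10, cv_threshold=0.1):
    """Flag a client whose inter-request gaps are suspiciously uniform.

    timestamps: sorted request times (seconds) for one client.
    cv_threshold: coefficient of variation below which timing is
    considered too regular for a human (hypothetical tuning value).
    """
    if len(timestamps) < min_requests:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True  # instantaneous bursts: clearly automated
    cv = statistics.stdev(gaps) / mean_gap
    return cv < cv_threshold

# A script polling every 2.0s vs. a human browsing irregularly:
bot_times = [i * 2.0 for i in range(20)]
human_times = [0, 1.2, 5.9, 6.3, 14.0, 15.1, 22.8, 30.5, 31.0, 44.2, 50.1]
print(looks_scripted(bot_times))    # True
print(looks_scripted(human_times))  # False
```

In practice a single heuristic like this is trivially defeated by randomized delays, which is why layered, real-time detection matters.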
The ever-changing attack surface
Not long after fraudsters attack external-facing forms to successfully perform ATO attacks or set up fake new accounts, they begin carrying out automated abuse within applications or platforms once they are signed in. For example, they send automated spam from a cloud email account or run automated sessions within online games to accumulate in-game assets that can be sold off.
Businesses must ensure that all customer touchpoints across these potential attack surfaces are secured. Real-time assessments combined with targeted enforcement challenges are the only way to build confidence that bots are not successfully attacking your business.
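The "real-time assessment plus targeted challenge" model can be illustrated with a toy triage function. The signal names, weights, and thresholds below are hypothetical, invented for illustration; a production system would combine many more signals:

```python
def decide_action(signals):
    """Toy risk triage: allow, challenge, or block a session.

    signals: dict of risk indicators gathered in real time.
    All signal names and weights here are illustrative only.
    """
    score = 0.0
    if signals.get("headless_browser"):
        score += 0.5
    if signals.get("ip_reputation_bad"):
        score += 0.3
    score += 0.4 * signals.get("velocity_anomaly", 0.0)  # scaled 0.0-1.0

    if score >= 0.7:
        return "block"
    if score >= 0.3:
        return "challenge"  # targeted enforcement, e.g. an interactive puzzle
    return "allow"

print(decide_action({}))                           # allow
print(decide_action({"ip_reputation_bad": True}))  # challenge
print(decide_action({"headless_browser": True,
                     "velocity_anomaly": 0.8}))    # block
```

The key design point is the middle tier: rather than blocking outright on moderate risk, a challenge lets legitimate users through while imposing cost on automated traffic.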
Download our Ultimate Guide to Bot Prevention to explore this topic further, and to learn how Arkose Labs can help combat this increasing trend.