
The Economics of Fraud Have Changed. Here’s Why.


Fraud has always been a business. Attackers invest in tools, infrastructure and labor because the return justifies the investment. For years, the security industry’s answer to fraud was detection, and that worked well enough, not because it was perfect, but because it took attackers real time, real tools and real effort to build new ways around it. 

Agentic AI has rewritten this calculation entirely.

Deloitte’s Center for Financial Services projects that AI-facilitated fraud losses in the US will reach $40 billion by 2027, up from $12.3 billion in 2023. That is more than a tripling in four years, and it is almost certainly an underestimate. Furthermore:

  • FraudGPT is available for $1,700 per year.
  • Open-source agent frameworks such as Browser-Use (more than 85,500 GitHub stars) explicitly market “anti-detect, CAPTCHA solving, 195+ country proxies, zero config” as features.

A motivated attacker no longer needs a sophisticated operation. They need a credit card and a weekend.

Agentic AI has done to fraud what cloud computing did to software: it commoditizes entry-level attacks and dramatically improves the economics of sophisticated ones. The attackers who were already capable are now faster and cheaper to operate. A whole new class of attacker previously priced out of the market has been handed the keys.

This is not an incremental change in the threat landscape. It is an architectural shift in the economics of fraud.

The Cost Asymmetry Is the Problem and the Solution

Fraud does not persist because attackers are unstoppable. It persists because attacking is cheap and the consequences of being caught are rarely severe enough to change the math.

When the cost to run an attack is lower than the value of a successful breach, attackers will keep trying. This is true whether the attacker is a human fraud farm operator, an automated bot network or an autonomous AI agent running twenty parallel sessions against your registration flow.

The only durable answer is to change the cost structure. Not to build a higher wall; walls get climbed. To make the act of attacking so expensive, so time-consuming and so consistently unrewarding that the business case for targeting your platform collapses.

This is the founding principle Arkose Labs was built on. Not detection alone. Not blocking alone. Economic deterrence in fraud prevention means making the cumulative cost of attacking a protected target – in time, compute and attacker resources – consistently exceed the value of a successful attack, so attackers abandon the target entirely. In practice: waste attacker time, exhaust attacker resources, and make the ROI of attacking a protected target decisively negative, whether the attacker is a human operator, a scripted bot or an autonomous AI agent.

The math must work against them. That is the goal. Everything else is a means to that end.
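
To make that math concrete, here is a minimal sketch of attacker ROI under a deterrence posture. All parameter names and numbers below are illustrative assumptions for the sake of the example, not measured attacker economics or anything from Arkose's product:

```python
# Hypothetical model of attacker ROI under economic deterrence.
# Every number here is an illustrative assumption, not real data.

def attack_roi(
    attempts: int,
    cost_per_attempt: float,   # proxies, compute, solver fees, labor
    success_rate: float,       # fraction of attempts that succeed
    value_per_success: float,  # payout of one successful breach
) -> float:
    """Expected profit of a campaign: revenue minus total cost."""
    revenue = attempts * success_rate * value_per_success
    cost = attempts * cost_per_attempt
    return revenue - cost

# An undefended target: attempts are nearly free, so even a tiny
# success rate leaves the campaign profitable.
print(attack_roi(100_000, cost_per_attempt=0.001,
                 success_rate=0.002, value_per_success=25.0))   # +4900.0

# A deterrence posture raises the per-attempt cost (wasted time,
# burned compute and resources) while suppressing the success rate;
# ROI goes negative and the rational move is to abandon the target.
print(attack_roi(100_000, cost_per_attempt=0.05,
                 success_rate=0.0001, value_per_success=25.0))  # -4750.0
```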

The New Dual-Use Technology Security Teams Cannot Ignore

There is a dimension of the agentic AI challenge that makes it categorically different from anything the security industry has previously faced and that most vendors are not yet equipped to address.

Legitimate AI agents are now a meaningful part of internet traffic, and that share is growing. Consumers use AI assistants to book travel, manage finances, compare prices, and complete forms. Vision-impaired and deaf/blind users rely on AI agents to navigate the web in ways otherwise inaccessible to them. Enterprises deploy AI agents to automate workflows, process data, and interact with third-party platforms on behalf of their employees and customers.

These agents are not attackers. They are legitimate users of the web, or they act on behalf of legitimate users. And at the network layer, they are functionally indistinguishable from the autonomous agents being used to execute credential stuffing campaigns, inventory hoarding attacks and fake account registrations at scale.

A security posture that blocks all automation will harm the legitimate users it is meant to protect. A posture that allows all automation will be trivially exploited. The binary choice – bot or not, block or pass – is no longer a viable framework.

The operating principle that replaces it: not all agentic traffic is a threat. The goal is classification and control, not blanket blocking. This requires what Arkose Labs calls the three-tier agent classification framework, a model for the three distinct types of agentic traffic on any platform:

  • Self-disclosing good agents: self-identifying, operating within authorized parameters, acting on behalf of legitimate users, including those relying on AI for accessibility
  • Malicious agents: masquerading as legitimate, running fraud at machine speed across account creation, login, payment and API flows
  • Non-disclosing good agents: helpful to end users but not necessarily to the platform – zero-click search bots, gray-area scrapers, agents whose intent is ambiguous without deeper behavioral signal

Visibility across all three tiers is the prerequisite for intelligent policy. Without it, you are either blocking indiscriminately or flying blind.
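
A minimal sketch of how those three tiers might map to policy. The tier names follow the framework described above; the enforcement actions are illustrative assumptions, not Arkose's actual product behavior:

```python
# Hypothetical encoding of the three-tier agent classification
# framework. Tier names follow the post; the policy actions are
# illustrative assumptions, not Arkose's implementation.
from enum import Enum, auto

class AgentTier(Enum):
    SELF_DISCLOSING_GOOD = auto()  # self-identifies, authorized, incl. accessibility agents
    MALICIOUS = auto()             # masquerades as legitimate, attacks at machine speed
    NON_DISCLOSING_GOOD = auto()   # useful to end users, ambiguous to the platform

def policy_for(tier: AgentTier) -> str:
    """Map a classified tier to an enforcement posture.

    The point is classification and control, not blanket blocking:
    each tier gets a different treatment instead of a binary
    block-or-pass decision.
    """
    return {
        AgentTier.SELF_DISCLOSING_GOOD: "allow; hold to its declared scope",
        AgentTier.MALICIOUS: "impose cost: challenge, throttle, waste resources",
        AgentTier.NON_DISCLOSING_GOOD: "observe; gather behavioral signal before deciding",
    }[tier]

print(policy_for(AgentTier.NON_DISCLOSING_GOOD))
```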

Gartner predicts that by 2028, 25% of enterprise breaches will trace back to AI agent abuse. The organizations that survive this transition will not be the ones that build the highest walls against automation. They will be the ones that learn to distinguish between authorized, intent-aligned agents and those operating outside acceptable parameters, and build their enforcement economics accordingly.

What an Effective Response Actually Looks Like

Addressing the agentic AI threat requires a different kind of platform, one built on three capabilities that most point solutions do not provide together.

First, visibility across the full traffic picture. You cannot govern what you cannot see. Security teams need the ability to inventory every type of traffic – human, bot and AI agent – and understand not just that it is present but what it is attempting to do and whether that intent is aligned with legitimate use. This requires signal collection that goes beyond IP reputation and fingerprinting to capture behavioral context across the full session.
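
As an illustration of what "beyond IP reputation and fingerprinting" could look like as a data structure, here is a hypothetical session-level record. The field names are assumptions about the kinds of whole-session behavioral context such a system might capture, not Arkose's schema:

```python
# Hypothetical session-level signal record. Field names are
# illustrative assumptions; this is not Arkose's schema.
from dataclasses import dataclass, field

@dataclass
class SessionSignals:
    # Classic network-layer signals: necessary but no longer sufficient.
    ip_reputation: float            # 0.0 (clean) .. 1.0 (known-bad)
    device_fingerprint: str

    # Behavioral context across the whole session.
    actions: list[str] = field(default_factory=list)         # ordered events
    inter_event_ms: list[int] = field(default_factory=list)  # timing between events
    declared_agent: str | None = None  # self-disclosed agent identity, if any
    target_flow: str = "unknown"       # registration, login, payment, API

session = SessionSignals(
    ip_reputation=0.1,
    device_fingerprint="fp_abc123",
    actions=["load_login", "fill_username", "fill_password", "submit"],
    inter_event_ms=[12, 9, 11],        # machine-fast, uniform cadence
    declared_agent=None,
    target_flow="login",
)

# Uniform, millisecond-scale cadence with no declared identity is the
# kind of whole-session context that IP reputation alone cannot see.
print(session.target_flow, min(session.inter_event_ms))
```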

Second, intelligence that compounds. Static rules decay. The attacker who is blocked today writes a new script tomorrow. What makes agentic AI particularly dangerous is its ability to adapt, iterating on bypass strategies autonomously and continuously. The only effective response is a detection model that learns as fast as attackers evolve, drawing on global session data to identify new patterns before they become widespread. Intelligence that only sees your own traffic is intelligence that will always be one step behind.

Third, enforcement economics, not just enforcement actions. Blocking is a tactic. Disrupting fraud economics is a strategy. Every attacker is running a cost-benefit calculation. The goal is not to make any single attack fail. It is to make the cumulative cost of attacking a protected target exceed the cumulative value of a successful breach. When that threshold is crossed, attackers move to easier targets. They always do.

This is the architecture behind Arkose Titan: a unified platform that brings detection, challenge enforcement, device intelligence and economic deterrence together through a single API call. Not point solutions that leave gaps between them. Not static rules that require manual updates to keep pace with evolving threats. A system where every session, regardless of outcome, generates intelligence that makes the next session harder to exploit.

The Path Forward

The security industry spent a decade optimizing against a threat that was scripted, predictable, and constrained by human operators. That era is over. Agentic AI has given attackers speed, adaptability and scale that no human-operated fraud farm could match, and made entry into the fraud market cheaper than ever.

The response to that shift is not a better CAPTCHA. It is a fundamentally different model of enforcement, one where the economics of deterrence are designed to make attacking a protected target cost more than it’s worth, at every layer of the stack, for every class of attacker.

That is what Arkose Labs was built to deliver. The rest of this series explains exactly how.


This post is the first in a seven-part series examining how Arkose Labs has engineered a response to the agentic AI threat. Upcoming posts:

Blog 2: We Are Not a CAPTCHA — Why the Turing test model is obsolete

Blog 3: What Attackers Taught Us — Proprietary attacker data that shaped MatchKey

Blog 4: Inside MatchKey — Architecture designed to make attacks economically irrational

Blog 5: The Audio Challenge — The only audio challenge that is both accessible and secure

Blog 6: The Engineering Beneath the Challenge — How CAPI v4 closes the vectors agentic AI enables

Blog 7: Enforcing Intent in a World of AI Agents — Not “is this a bot?” but “is this agent authorized?”