The Financial Cost of Agentic AI Fraud
The $40 Billion Threat That’s Just Getting Started
In April 2025, a coordinated cyber attack hit Australia’s largest pension funds. Cybercriminals used stolen credentials to access retirement accounts at AustralianSuper, Rest, Hostplus and others — draining AUD $500,000 from four members in a matter of hours.
The attack wasn’t sophisticated. It was credential stuffing: automated scripts testing leaked username-password combinations against login pages. No zero-day exploits, no advanced malware. Just industrialized password reuse.
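Defenders can blunt credential stuffing cheaply by refusing passwords that already circulate in breach corpora. As a minimal sketch, the Pwned Passwords k-anonymity range API (a real, free service) lets a login or registration flow do this without ever sending the full password hash off-network; the rejection policy shown is an illustrative choice, not a prescription.

```python
# Minimal sketch: reject passwords already exposed in known breaches,
# via the Pwned Passwords k-anonymity range API (api.pwnedpasswords.com).
import hashlib
import requests

def breach_count(password: str) -> int:
    """Return how many times this password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the first 5 hex chars of the hash leave our network (k-anonymity).
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=5)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    if breach_count("P@ssw0rd!") > 0:
        print("Reject and force a reset: this password is in breach corpora.")
```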
And this illustrates precisely what should concern you. If basic automation can steal half a million dollars from regulated financial institutions in a few hours, imagine what happens when those scripts become autonomous agents: systems that reason, adapt and make decisions without human intervention or oversight.
The economics of fraud have fundamentally changed, and this evolution is only likely to gather momentum. What once required sophisticated criminal networks and significant technical expertise can now be accomplished by anyone with $1,400 and a subscription to cybercrime-as-a-service (CaaS) tools like FraudGPT.
Cyber fraud has always been a rapidly evolving, sophisticated industry, but with the wider adoption of generative AI across businesses and consumers, the landscape is changing faster than ever. We are now entering the era of agentic AI fraud, where autonomous systems don't just assist attackers, they are the attackers. That era is not fully here yet, but everything in the threat landscape suggests it is months away, not years.
What Makes Agentic AI Different
The Old Model
Traditional fraud automation is blunt—bots hammering login pages, scripts testing stolen credentials. Defenders could spot them by mechanical patterns and inhuman velocity.
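That velocity heuristic is simple enough to sketch. The window and threshold below are illustrative values rather than recommendations, and keying on source IP alone is the crudest possible approach; real deployments also key on device fingerprints and behavioral signals.

```python
# Minimal sketch of the "inhuman velocity" heuristic: flag any source that
# attempts more logins inside a short window than a human plausibly could.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_ATTEMPTS = 10  # illustrative threshold, not a production value

_attempts: dict[str, deque] = defaultdict(deque)

def looks_automated(source_ip: str) -> bool:
    """Record one login attempt and report whether the source looks scripted."""
    now = time.monotonic()
    window = _attempts[source_ip]
    window.append(now)
    # Drop attempts that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_ATTEMPTS
```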
The New Model
Traditional automation is still a prominent problem that many businesses and users are fighting, but agentic AI changes everything. These systems make autonomous decisions, adapt to defenses in real time, and orchestrate complex multi-step attacks that previously required skilled human operators exercising judgment.
Arkose Labs’ research quantifies this shift. Its Quarterly Threat Intelligence Report, released in Q3 2025, found that malicious traffic surged nearly 20% from Q1 to Q2 2025 as cybercriminals intensified efforts across industries. That surge is a clear signal that attackers are scaling operations faster than ever before.
The shift is also toward precision over volume. Arkose Labs’ Quarterly Threat Actor Patterns report, released in Q4 2025, revealed that the gig economy sector experienced 51% fewer attacks but 49% more malicious traffic—roughly a threefold increase in average attack size. Attackers are abandoning scatter-shot approaches for concentrated, high-impact strikes against high-value targets.
This should serve as a clear warning: fraudsters are not just getting smarter, they are getting bolder, and this is bound to get worse with agentic AI attacks.
The appetite has always been there, but the necessary tools are now readily available and getting cheaper by the week. Programs like FraudGPT provide AI-assisted attack capabilities to anyone willing to subscribe. The barrier to entry keeps falling while the potential reward for malicious actors is arguably higher than ever.
The Numbers That Should Keep You Up at Night
Let’s start with the damage already done. According to the Federal Trade Commission’s data, consumers lost over $12.5 billion to fraud in 2024. But there is a deeper reason for concern: financial losses jumped 25% even as the number of fraud reports held steady at 2.6 million. Attacks aren’t just growing; they are becoming more effective and more precise.
And it feels like this is just the beginning. Deloitte’s Center for Financial Services projects that generative AI-enabled fraud could be the main driver behind an anticipated surge in fraud losses in the US – from roughly $12 billion in 2023 to $40 billion by 2027—a compound annual growth rate of 32%. That projection was made before agentic AI capabilities became widely accessible. It may prove conservative.
The Australian pension fund attack. The Snowflake breaches of April–June 2024. The 26 billion monthly credential stuffing attempts. None of these were driven by agentic AI — they are merely the warm-up act. They reveal the attack surface: the weak points and the gaps in authentication and monitoring that autonomous systems will exploit with ruthless efficiency.
And this is widely acknowledged across the business world. Experian reports that nearly 60% of companies saw their fraud losses increase from 2024 to 2025, and 72% of business leaders now view AI-enabled fraud as a top operational challenge.
The Rapidly Multiplying Billion-Dollar Threat in Numbers:
- Consumers lost $12.5B to fraud in 2024 (FTC)
- U.S. fraud losses projected to reach $40B by 2027, driven by generative AI (Deloitte)
- 60% of companies reported increased fraud losses from 2024 to 2025 (Experian)
The Confidence Gap: Prepared for Yesterday’s Threat
AI and GenAI sit at the top of the agenda for security teams across nearly every industry. Many of those companies think they are ready. They are not.
Arkose Labs’ November 2025 study, “AI Maturity in Cybersecurity Report,” exposed a dangerous disconnect:
- 8 in 10 enterprises report improved cybersecurity posture from AI adoption
- 44% feel “very well prepared” for AI-powered volumetric attacks
- Enterprises dedicate one-third of cybersecurity budgets to AI
- Yet most organizations cannot distinguish between legitimate and malicious AI agents
The problem with agentic AI fraud isn’t awareness — it is capability. Automated tools, human-driven fraud, and early forms of agentic AI now appear in similar proportions in attack traffic as malicious actors expand their arsenal of attack vectors. Attackers can pivot between methods the moment any single control becomes effective.
Our CEO and Founder Kevin Gosschalk put it directly: “The rapid evolution of agentic AI has exposed a critical gap in enterprise readiness. Most organizations lack the tools to distinguish between legitimate and malicious automation.”
This sentiment is echoed by Kathleen Peters, Experian North America’s Chief Innovation Officer for Fraud and Identity: “It’s not enough anymore to say that it’s a bot, so we need to stop this traffic. Now, we need to say, ‘Is it a good bot or is it a malicious bot?’” Experian’s 2026 Future of Fraud Forecast labels this top-threat phenomenon “machine-to-machine mayhem” — bad bots blending seamlessly with good bots.
Know Your Agent
Here is why it is much more complicated than blocking automated traffic altogether: legitimate agentic AI traffic is only going to grow in volume.
A prime example of this challenge is agentic commerce, where consumers delegate purchasing to AI agents, which is creating new fraud vectors. PayPal launched agentic AI commerce services in October 2025. Amazon is testing and expanding its “Buy for Me” AI assistant. Mastercard and Visa are rolling out agent-enabled offerings.
This presents a novel security challenge: if legitimate AI agents can transact on behalf of users, adversarial agents can too. The transaction may be technically valid even when the initiating logic is malicious.
This is why Arkose Labs is leading the push for ‘Know Your Agent’ (KYA) frameworks — an emerging security standard, analogous to Know Your Customer (KYC), for verifying whether automated traffic is authorized or adversarial. Just as KYC established trust protocols for human identities in financial services, KYA applies that same logic to AI agents: confirming their origin, ownership and whether they are operating with legitimate user authorization.
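No KYA standard has been finalized, so any concrete check today is necessarily speculative. The sketch below assumes agents present a signed credential, here an RS256 JWT verified with the PyJWT library, carrying invented claim names (agent_id, owner, delegated_by) purely to show the shape such a verification step could take.

```python
# Hypothetical KYA-style check. There is no settled KYA standard yet, so the
# claim names below are invented for illustration. Assumes the agent presents
# an RS256-signed credential and we hold the issuer's public key.
import jwt  # pip install PyJWT

REQUIRED_CLAIMS = ("agent_id", "owner", "delegated_by")  # hypothetical claims

def verify_agent(token: str, issuer_public_key: str) -> dict:
    """Return the agent's verified identity claims, or raise on failure."""
    claims = jwt.decode(
        token,
        issuer_public_key,
        algorithms=["RS256"],
        audience="https://api.example.com",  # placeholder audience
    )
    missing = [c for c in REQUIRED_CLAIMS if c not in claims]
    if missing:
        raise ValueError(f"Agent credential missing claims: {missing}")
    # Origin, ownership and user authorization are now attributable.
    return claims
```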
Traditional signals that distinguish bots from humans don’t apply when automation is legitimate. Security teams now face questions about permission, identity, provenance and scale.
The Detection Gap
According to Interpol, the financial industry detects only 2% of global financial crime flows, despite spending that has risen by up to 10% annually in some advanced markets over the last decade.
Many security teams across industries have found that AI-powered detection helps. The U.S. Treasury’s machine learning systems prevented and recovered $4 billion in fraud in FY2024, up from $652.7 million in FY2023. But defenders are often playing catch-up in a race where attackers set the pace, and the risk is that significant volumes of fraudulent activity go undetected, and therefore unmitigated.
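The Treasury has not published its models, so the following is illustrative of the technique class only: an unsupervised anomaly detector (scikit-learn’s IsolationForest) trained on normal payment features, with outliers routed to human analysts. The features and values are made up for the example.

```python
# Illustrative only: shows the general shape of ML fraud triage with an
# unsupervised anomaly detector. Features and data are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Per-payment features: log(amount), hour of day, payee account age in days.
normal = rng.normal(loc=[4.0, 13.0, 900.0], scale=[0.5, 3.0, 200.0], size=(5000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large 3 a.m. payment to a brand-new payee account.
suspicious = np.array([[7.5, 3.0, 2.0]])
print(model.predict(suspicious))  # -1 means anomalous: route to an analyst
```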
Regulatory Pressure Is Building
Regulators aren’t waiting for the first major agentic AI fraud cases; they are moving to pre-empt them.
United States
In the U.S., the Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) issued its Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions in November 2024. The guidance (FIN-2024-Alert004) instructs financial institutions to flag synthetic media fraud with the key term “FIN-2024-DEEPFAKEFRAUD” in SAR filings. Red flags include identity document inconsistencies, device/geographic mismatches and suspicious transaction patterns.
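For teams operationalizing the alert, here is a hedged sketch of how its red flags might feed a SAR workflow. The event field names are invented; only the key term itself comes from the alert.

```python
# Hedged sketch mapping the alert's red flags to a simple triage rule.
# Field names are hypothetical; FIN-2024-DEEPFAKEFRAUD is the key term
# FinCEN asks institutions to reference in SAR filings.
SAR_KEY_TERM = "FIN-2024-DEEPFAKEFRAUD"

def deepfake_red_flags(event: dict) -> list[str]:
    """Collect which of the alert's red-flag categories an event trips."""
    flags = []
    if event.get("id_document_inconsistent"):    # e.g., photo/metadata mismatch
        flags.append("identity document inconsistency")
    if event.get("device_geo_mismatch"):         # device locale vs. claimed address
        flags.append("device/geographic mismatch")
    if event.get("unusual_transaction_pattern"): # velocity, structuring, etc.
        flags.append("suspicious transaction pattern")
    return flags

def sar_narrative(flags: list[str]) -> str:
    """Prefix the SAR narrative with the key term, as the alert requests."""
    return f"{SAR_KEY_TERM}: " + "; ".join(flags)
```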
European Union
The EU AI Act entered into force on 1 August 2024 and is being implemented in phases, with the full Act becoming applicable on 2 August 2026. Under the Act, high-risk AI systems in financial services — including credit scoring and insurance pricing — must comply with requirements around risk management, data governance and human oversight, and AI-generated content must be clearly labelled.
The Path Forward
Addressing this threat requires action on multiple fronts.
Build “Know Your Agent” Capabilities Now
Our own research makes it clear this is the defining challenge: verifying whether automated traffic is authorized or adversarial before it causes damage. This is also the area most businesses are ill-prepared for.
Shift from Point-in-Time Detection to Continuous Prevention
Point-in-time checks fail when AI agents maintain persistent sessions and adapt continuously. Simply blocking malicious traffic isn't enough when adversaries can retry at next to no cost. The goal is to undermine attacker ROI by introducing dynamic friction that makes persistence economically unsustainable. This is the approach Arkose Labs has pioneered with Arkose Titan, its unified fraud deterrence platform, trusted by some of the biggest brands in the world.
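The sketch below is illustrative only, not Arkose Labs' actual decision logic. It shows one way escalating friction with accumulated risk makes each retry costlier than the last; the thresholds and escalation factor are arbitrary.

```python
# Illustrative dynamic-friction ladder (not Arkose Labs' actual logic):
# each failed attempt compounds the risk score, so retries get costlier
# for the attacker instead of being free.
from enum import Enum

class Action(Enum):
    ALLOW = "allow"                # no friction for low-risk traffic
    INVISIBLE_CHECK = "invisible"  # silent device/behavior verification
    CHALLENGE = "challenge"        # interactive challenge raising attacker cost
    BLOCK = "block"

def next_action(risk_score: float, prior_failures: int) -> Action:
    effective = risk_score * (1.5 ** prior_failures)  # arbitrary escalation factor
    if effective < 0.2:
        return Action.ALLOW
    if effective < 0.5:
        return Action.INVISIBLE_CHECK
    if effective < 0.9:
        return Action.CHALLENGE
    return Action.BLOCK
```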
Cover the Basics
The Snowflake breaches happened because customers didn't enforce MFA. The Australian pension funds were hit because they made MFA optional. These are just two examples of institutions failing to get the basics of cybersecurity right. Agentic AI will be far less forgiving of such lapses.
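As a minimal sketch of making MFA a hard requirement rather than an option, the following uses the pyotp library for TOTP codes; enrollment and error handling are simplified for illustration.

```python
# Minimal sketch: TOTP-based MFA as a mandatory login step (pip install pyotp).
import pyotp

def enroll() -> str:
    """Generate a per-user TOTP secret to load into an authenticator app."""
    return pyotp.random_base32()

def login(password_ok: bool, totp_secret: str | None, code: str | None) -> bool:
    if not password_ok:
        return False
    if totp_secret is None:
        # The lesson of the breaches above: never let this branch grant access.
        raise PermissionError("MFA enrollment required before first login")
    return pyotp.TOTP(totp_secret).verify(code or "")
```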
Prepare for Regulatory Scrutiny
Document AI systems, implement human oversight, and ensure your processes can withstand examination under the EU AI Act, FinCEN guidance and the legislation still to come.
The Stakes Have Never Been Higher
The fraud industry has reached an inflection point. Today’s automated attacks are already causing billions in losses. Tomorrow’s agentic systems will operate faster, smarter, and more autonomously than anything we’ve seen so far.
The enterprises that act now — building defenses designed for autonomous adversaries, not just scripted bots — will define who leads and who becomes a cautionary tale. Arkose Labs’ unified fraud prevention platform, Arkose Titan, helps companies defend against the next generation of automated threats, before they become tomorrow’s headlines. Learn more about Arkose Titan.