
“It’s Not a Replay Attack. It’s a Reasoning Attack.” – Paul Rockwell

Trust and safety leader Paul Rockwell on why the bot-vs-human binary is collapsing — and what security teams need to build before it does.

Agentic AI is breaking the foundational assumption of bot detection: that automation is the signal worth flagging. Paul Rockwell has spent his career on the front lines of trust and safety at some of the world’s largest consumer platforms, including LinkedIn and Pinterest, and he now advises organizations navigating the security implications of agentic AI. We spoke with him about why the human-versus-bot binary is collapsing, what most security teams are getting wrong, and what to build before the window closes. The conversation has been edited for clarity.

The traditional model of trust and safety is built around detecting bots. How does agentic AI change that?

The core shift is that the human-versus-bot binary, which has anchored trust and safety for about 20 years, is becoming irrelevant. When a legitimate AI agent books a flight, negotiates a price or files a support ticket on behalf of a consumer, it looks exactly like automation. It looks like the thing we’ve spent our careers building systems to block.

So the playbook has to move from “detect and block all non-human traffic” to “understand the intent and authorization of this automated traffic.” Who authorized the agent? What’s its scope? Is it operating within the boundaries the consumer set? That’s a fundamentally different problem.

Your detection stack needs an identity layer for agents, not just consumers. And your policies need to define what agents are allowed to do, not just what humans aren’t.
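To make that concrete, here is a minimal sketch of what an agent-level authorization check could look like. Everything in it (the AgentGrant record, the scope strings, the field names) is invented for illustration; it is not an Arkose Labs API or an established standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: instead of a binary is_bot check, each
# automated request carries a grant recording who authorized the agent
# and what it may do. All names here are hypothetical.

@dataclass
class AgentGrant:
    agent_id: str                                  # stable identifier for the agent
    principal: str                                 # the consumer who authorized it
    scopes: set[str] = field(default_factory=set)  # e.g. {"flights:book"}
    expires_at: datetime | None = None             # boundary set by the consumer

def authorize(grant: AgentGrant, action: str) -> bool:
    """Allow an action only if it falls inside the consumer-set scope."""
    if grant.expires_at and datetime.now(timezone.utc) > grant.expires_at:
        return False
    return action in grant.scopes

grant = AgentGrant("agent-7", "user-123", scopes={"flights:book"})
print(authorize(grant, "flights:book"))   # True: the consumer granted this
print(authorize(grant, "support:file"))   # False: outside the agent's scope
```

The point is the question being asked: not "is this traffic automated?" but "does this automation hold a grant that covers this action?"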

What threats are you watching that aren’t getting enough attention right now?

Three things stand out.

The first is agent-to-agent interaction. We’re heading into a world where my AI agent negotiates with your AI agent. There’s no human in the loop on either side. The abuse patterns there are completely uncharted. What does fraud look like when both sides are machines? We don’t have frameworks for that yet.

The second is the collapse of the cost curve for sophisticated attacks. Things that used to require well-funded operations, like crafting a convincing persona, generating realistic documents and running multi-step social engineering schemes, are now accessible to anyone with an API key. This is happening now. It democratizes the attacker side in a way most defenses aren’t priced to handle.

The third is model poisoning and prompt injection as an attack surface for detection tooling itself. Most trust and safety tools are running LLMs. Adversaries are going to target those models. Your defenses are becoming the attack surface, and that recursive risk is deeply underappreciated.

A lot of security teams feel like they’ve invested heavily in fraud prevention. Are those investments still relevant?

The misconception is that those defenses still fit the threat. Existing fraud prevention was built for a world where attacks are repetitive. Bots run the same script a million times from the same infrastructure. You pattern match and you win. This is the model we’ve all spent 20 years building against.

Agentic AI breaks that model because each attack can now be unique. An agent can read your error messages and adapt in real time. It tries different paths and different approaches, all within a single session. It’s not a replay attack. It’s a reasoning attack. The rules engine sees it once, and that’s not enough to establish a pattern.
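A toy example of the gap (events, thresholds and field names all invented for illustration): a repetition rule that waits for the same signature to recur never fires when every attempt is unique, while a session-level signal can still notice that one session is probing many distinct paths.

```python
from collections import Counter, defaultdict

# Toy illustration: three attempts in one session, each one unique
# because the agent adapted after every error. All data is invented.
events = [
    {"session": "s1", "path": "/login",          "payload_sig": "a1"},
    {"session": "s1", "path": "/password-reset", "payload_sig": "b7"},
    {"session": "s1", "path": "/api/v2/token",   "payload_sig": "c3"},
]

# Classic replay rule: flag any payload signature seen 3+ times.
# It never fires here, because no signature repeats.
sig_counts = Counter(e["payload_sig"] for e in events)
replay_hits = [sig for sig, n in sig_counts.items() if n >= 3]

# Session-level adaptivity signal: one session probing 3+ distinct
# paths is suspicious even though every individual attempt is novel.
paths_by_session: defaultdict[str, set] = defaultdict(set)
for e in events:
    paths_by_session[e["session"]].add(e["path"])
adaptive_hits = [s for s, p in paths_by_session.items() if len(p) >= 3]

print(replay_hits)    # []     -- the pattern matcher saw each attempt once
print(adaptive_hits)  # ['s1'] -- the behavior repeats even when the payload doesn't
```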

Multi-step, multi-platform attacks also become trivial to orchestrate. An agent can create an account on one platform, build credibility over days, extract data and deploy it on a second platform, all autonomously. No fraud tool watching a single interaction on a single platform is going to catch that.

The problem isn’t that organizations haven’t invested. It’s that the shape of the problem is changing underneath them despite the investment.

What’s your advice for a CISO or trust and safety executive planning the next 12 months?

Build your agent identity framework now. Before you need it.

Every organization I advise is in one of two modes: deploying agentic AI internally and asking how to govern it, or seeing agentic traffic hit their platform and asking how to distinguish legitimate from malicious. Some are dealing with both.

The connective tissue across all of it is identity. You need to know which agents are operating in your environment, who authorized them, what permissions they hold and what they’ve done. That’s your audit trail, your abuse detection foundation and your liability framework, all in one.
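A minimal sketch of what that connective tissue could record, assuming a simple append-only log (the schema and field names are illustrative assumptions, not a standard):

```python
from datetime import datetime, timezone

# Hypothetical append-only ledger: one entry per agent action, recording
# which agent acted, who authorized it and under what permission. The
# same log serves audit, abuse detection and liability questions.
LEDGER: list[dict] = []

def record_agent_action(agent_id: str, authorized_by: str,
                        permission: str, action: str) -> None:
    """Append one immutable entry to the agent activity ledger."""
    LEDGER.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "authorized_by": authorized_by,
        "permission": permission,
        "action": action,
    })

record_agent_action("agent-7", "user-123", "flights:book", "booked LAX to JFK")

# The liability question ("who let this agent do that?") becomes a lookup:
trail = [e for e in LEDGER if e["agent_id"] == "agent-7"]
print(trail[0]["authorized_by"])  # 'user-123'
```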

If you wait until agentic traffic is 20, 30 or 50% of your interactions to start building that, you’re behind. Organizations that get this right in the next 12 months will have a structural advantage, not just in security, but in their ability to support the legitimate agentic use cases their customers are going to start demanding.

Don’t just defend against agentic AI. Build the infrastructure to trust it where trust is deserved.