DataDome platform enhancements put businesses in control of AI agents

DataDome announced major advancements to its platform and partner ecosystem that put businesses back in control of how AI agents access and interact with their digital assets.

These innovations come at a pivotal moment, as enterprises grapple with the rapid rise of LLMs, AI crawlers, and automated agents that are reshaping the internet economy. With expanded intent-based AI models, LLM detection, and new AI agent response policies, DataDome’s AI engine identifies, categorizes, adapts to, and responds to traffic in less than 2 milliseconds.

With the increased adoption of LLMs and agentic AI, legitimate users and fraudsters alike are leveraging AI tools. Traditional fraud prevention methods that focus solely on identity and Turing tests, blocking bots and all other automated traffic, are no longer sufficient: they risk blocking valid traffic or letting fraudsters through.

DataDome’s AI engine is built to detect intent, not just identity. The latest enhancements provide customers with deeper control over user intent, enabling them to distinguish between legitimate AI-driven use and malicious automation.

LLM detection with policy intelligence

DataDome now automatically groups all LLM crawlers and AI agents into a dedicated category, offering visibility into which models are accessing digital assets, how often, and for what purpose.

This visibility is paired with intelligent policy recommendations, helping security teams quickly respond based on bot identity, behavior, and trustworthiness.
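
DataDome has not published the policy interface itself, but a minimal sketch of how an identity- and trust-based recommendation could work is shown below. The field names, categories, and thresholds are illustrative assumptions, not DataDome’s actual schema.

```python
from dataclasses import dataclass

# Hypothetical traffic record; field names are illustrative, not DataDome's schema.
@dataclass
class BotTraffic:
    category: str        # e.g. "llm_crawler", "ai_agent", "search_engine"
    identity: str        # declared identity, e.g. "GPTBot"
    verified: bool       # whether the declared identity could be verified
    trust_score: float   # 0.0 (untrusted) to 1.0 (trusted)

def recommend_policy(traffic: BotTraffic) -> str:
    """Return a policy recommendation based on identity, behavior, and trustworthiness."""
    if traffic.category in {"llm_crawler", "ai_agent"}:
        if not traffic.verified:
            return "block"            # spoofed or unverifiable agents are blocked
        if traffic.trust_score >= 0.8:
            return "allow"            # trusted, verified agents pass through
        return "challenge"            # uncertain cases get a low-friction challenge
    return "allow"                    # non-AI categories follow their own policies

print(recommend_policy(BotTraffic("llm_crawler", "GPTBot", True, 0.9)))  # -> allow
```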

New AI models enhance multi-layered AI detection engine

DataDome’s AI-first architecture is purpose-built to adapt to ever-changing threats; all models auto-adapt and scale non-linearly, drawing on massive, continuous streams of signals and feedback loops to trigger real-time updates. DataDome has now added new AI models that strengthen this multi-layered foundation and improve detection of malicious intent.

One such newly deployed model—built to detect sudden traffic spikes from unique user agents, IP-based network identifiers, and header patterns—has already proven effective, blocking over 1.2 million malicious requests in the last 48 hours. It complements existing AI models that mitigate large-scale distributed attacks in real time.
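
The model’s internals are not disclosed, but the core idea of flagging sudden spikes keyed on a traffic fingerprint can be sketched as follows. The window size and threshold are assumptions for illustration only.

```python
import time
from collections import defaultdict, deque

# Minimal sketch of spike detection keyed on a traffic fingerprint
# (user agent + network identifier + header pattern). Thresholds are illustrative.
WINDOW_SECONDS = 60
SPIKE_THRESHOLD = 500   # requests per window that counts as a "sudden spike"

_windows: dict[tuple, deque] = defaultdict(deque)

def is_spike(user_agent: str, network_id: str, header_hash: str, now: float | None = None) -> bool:
    """Record one request for this fingerprint and report whether the window threshold is exceeded."""
    now = time.time() if now is None else now
    window = _windows[(user_agent, network_id, header_hash)]
    window.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > SPIKE_THRESHOLD
```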

The platform runs hundreds of foundational AI models, as well as over 85,000 customer-specific and use-case-specific AI models tailored to unique traffic patterns, intent-based behavioral analysis, and threat profiles for endpoints like login, password reset, add-to-cart, and payment flows. For customers facing the most sophisticated threats, custom models are built from scratch or adapted from base models to defend specific web, app, and API environments.
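
As a rough illustration of what endpoint-specific tailoring means in practice, the sketch below routes a request to a per-endpoint model configuration with its own blocking threshold. The model names, paths, and thresholds are hypothetical, not DataDome’s internals.

```python
# Hypothetical registry mapping sensitive endpoints to endpoint-specific models.
ENDPOINT_MODELS = {
    "/login":          {"model": "login_intent_v3",    "block_above": 0.70},
    "/password-reset": {"model": "reset_abuse_v2",     "block_above": 0.60},
    "/cart/add":       {"model": "scalping_detect_v5", "block_above": 0.80},
    "/checkout/pay":   {"model": "payment_fraud_v4",   "block_above": 0.50},
}
DEFAULT = {"model": "foundational_intent_v7", "block_above": 0.90}

def route_request(path: str, risk_score: float) -> str:
    """Pick the endpoint-specific model config and decide whether to block."""
    config = ENDPOINT_MODELS.get(path, DEFAULT)
    return "block" if risk_score > config["block_above"] else "allow"
```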

Built on AI from the start

“AI isn’t a feature—it’s the foundation of everything we do,” said Benjamin Fabre, CEO of DataDome. “It’s what powers our ability to stop bots and fraud in real time, with precision and scale. But more than that, it’s what gives our customers the visibility and control they need to stay ahead as AI agents reshape the internet economy.”

Empowering control over AI agents using KYA Identity and Payments

DataDome is putting control of who, or what, can access a site, app, or API back in the hands of businesses with a first-of-its-kind partner program. The initial partner in this program is Skyfire. Skyfire’s platform, enabled seamlessly through DataDome’s security layer, empowers businesses to verify the identity of AI agent traffic and then block it, or allow and monetize it, turning AI agent traffic into a revenue stream on their own terms.

“AI agents are fast becoming the internet’s most active users, and they need infrastructure that moves as quickly as they do. DataDome’s partnership is a key step forward—it gives businesses the control to identify, manage, and monetize AI agent access in real time,” said Amir Sarhangi, CEO and co-founder of Skyfire. “At Skyfire, we complement that by powering the payments and KYA (Know Your Agent) identity layers, enabling website owners to identify an agent, permission access, and allow them to buy and sell data, services, and goods instantly, without human involvement.”

With these controls, organizations can enforce LLM access licensing agreements, enable authenticated access for AI agents, or even allow AI agents to remit or accept payments.
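
Neither DataDome nor Skyfire has published an API for this flow here; the sketch below only illustrates the allow, block, or monetize decision for a verified agent. The request fields, status codes, and the payment step are assumptions for illustration.

```python
from dataclasses import dataclass

# Illustrative allow / block / monetize decision for an incoming AI agent request.
# Field names and the charging flow are assumptions, not Skyfire's or DataDome's API.
@dataclass
class AgentRequest:
    kya_verified: bool          # did the KYA (Know Your Agent) identity check pass?
    agent_id: str               # verified agent identity
    has_license: bool           # does the operator hold an LLM access licensing agreement?
    payment_token: str | None   # prepaid payment credential, if any

def decide(req: AgentRequest) -> tuple[int, str]:
    """Return an HTTP status code and action for an incoming AI agent request."""
    if not req.kya_verified:
        return 403, "block"                 # unverifiable agents are denied
    if req.has_license:
        return 200, "allow"                 # licensed agents get authenticated access
    if req.payment_token is not None:
        return 200, "allow_and_charge"      # monetize: settle via the payment layer
    return 402, "payment_required"          # invite the agent to pay for access

print(decide(AgentRequest(True, "agent-123", False, None)))  # -> (402, 'payment_required')
```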
