How agentic AI handles the speed and volume of modern threats
In this Help Net Security interview, Lior Div, CEO at Seven AI, discusses the concept of agentic AI and its application in cybersecurity. He explains how it differs from traditional automated security systems by offering greater autonomy and decision-making capabilities.
Div emphasizes that agentic AI is particularly well-suited to combat modern AI-driven threats, such as AI-generated phishing or malware, by processing vast volumes of alerts in real time.
How would you differentiate agentic security from traditional automated cybersecurity solutions? What makes it distinct in terms of autonomy and decision-making? What gaps in conventional security approaches is this paradigm aiming to fill?
Traditional automation is typically based on predefined “if-then” rules, where the person writing the code must anticipate all possible outcomes and decisions in advance. This works in a static environment, but cybersecurity is anything but static. For example, if an email contains a URL, text, and a file attachment, automated systems can be programmed to investigate those elements. This level of automation can work well for initial analysis and enrichment.
However, the challenge comes when a deeper investigation is required. If the system identifies a malicious file, it can only follow the paths that were predefined. Traditional automation stops here, leaving the next steps to a human analyst.
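To make that contrast concrete, here is a minimal sketch of rule-based triage in Python. The reputation and sandbox checks are hypothetical stubs, not any particular product's API; the point is the shape of the logic and where it stops.

```python
# Minimal sketch of traditional "if-then" email triage. The reputation
# and sandbox checks are hypothetical stubs standing in for real
# enrichment services.

def check_url_reputation(url: str) -> str:
    return "malicious" if "evil.example" in url else "clean"  # stub

def sandbox_detonate(filename: str) -> str:
    return "malicious" if filename.endswith(".exe") else "clean"  # stub

def triage_email(urls: list[str], attachments: list[str]) -> list[str]:
    findings = []
    # Rule 1: check every URL against a reputation feed.
    findings += [f"malicious URL: {u}" for u in urls
                 if check_url_reputation(u) == "malicious"]
    # Rule 2: detonate every attachment in a sandbox.
    findings += [f"malicious file: {f}" for f in attachments
                 if sandbox_detonate(f) == "malicious"]
    # The predefined paths end here. Deeper questions (did the file run
    # on an endpoint? did the user click the link?) were never scripted,
    # so the case is handed to a human analyst.
    return findings

print(triage_email(["http://evil.example/login"], ["invoice.exe"]))
```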
Agentic AI, on the other hand, goes beyond predefined paths. If the AI detects a malicious file, it doesn’t just stop; it can dynamically initiate further actions—such as launching an endpoint detection and response (EDR) investigation—without the need for pre-scripted instructions. It has the ability to make real-time decisions based on the evolving situation, much like a human analyst would, but without the limits of static code.
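A rough sketch of that decision loop is below, under the assumption of a planner function standing in for a reasoning model. The tool names and planning logic are illustrative, not Seven AI's implementation.

```python
from dataclasses import dataclass

# Sketch of an agentic investigation loop (an illustration of the
# pattern, not any vendor's implementation). The agent picks its next
# tool at runtime from the evidence gathered so far.

@dataclass
class Step:
    action: str                  # a tool name, or "conclude"
    argument: str = ""
    verdict: str = ""

def sandbox_detonate(f: str) -> str:   # hypothetical tool stubs
    return f"{f}: malicious"

def edr_investigate(f: str) -> str:
    return f"{f}: executed on host WS-042"

TOOLS = {"sandbox_detonate": sandbox_detonate,
         "edr_investigate": edr_investigate}

def plan_next_step(evidence: list[str]) -> Step:
    # Stand-in for a reasoning model: escalate from sandbox to EDR,
    # then conclude. A real planner would weigh all of the evidence.
    if len(evidence) == 1:
        return Step("sandbox_detonate", evidence[0])
    if "malicious" in evidence[-1] and "executed" not in evidence[-1]:
        return Step("edr_investigate", evidence[0])
    return Step("conclude", verdict="contain affected host")

def investigate(alert: str, max_steps: int = 10):
    evidence = [alert]
    for _ in range(max_steps):
        step = plan_next_step(evidence)
        if step.action == "conclude":
            return step.verdict, evidence
        evidence.append(TOOLS[step.action](step.argument))  # feed result back
    return "escalate to human", evidence

print(investigate("attachment invoice.exe"))
```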
I often compare it to teaching a self-driving car how to drive using “if-then” rules. In a controlled environment, you might manage to write code that works. But once the car is out on a busy street, with constantly changing variables, such a system will fail. The same applies to traditional cybersecurity automation—it’s unable to adapt to the complexity of real-world cyber threats. Agentic AI, however, can respond dynamically to unexpected situations, making it far more capable of addressing today’s complex cyber landscape.
With the rise of AI-generated attacks, such as AI-crafted phishing or malware, how can agentic security effectively counter these threats? Are there specific examples of it outperforming traditional defenses?
One of the main challenges we face today is the sheer volume of attacks. Hackers can use AI to generate countless phishing emails or malware variations, far beyond what human analysts can manually handle. Agentic AI can handle this kind of scale because it can review every alert and every potential threat in real time, without fatigue or lapses in attention.
Another critical factor is speed. As the volume of attacks increases, the time we have to respond decreases. AI systems, unlike human analysts, can review every alert as if it’s the most important investigation. While humans might become overwhelmed by the volume and make mistakes, AI systems remain consistent and fast, processing information much more quickly than any human could.
In traditional security approaches, we often attempt to shrink the number of alerts, focusing only on the highest-priority ones. Agentic AI changes the game: it can treat every alert, every email, as significant, investigating all possibilities in a fraction of the time. That disproportionate capacity means we can investigate potential threats thoroughly, without compromise, leading to much more comprehensive security coverage.
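As a rough illustration of what "treat every alert as significant" looks like in code, the sketch below fans out one investigation task per alert; investigate_alert is a hypothetical placeholder for a full investigation loop like the one sketched earlier.

```python
import asyncio

# Illustrative sketch: investigate every alert concurrently instead of
# triaging a shrunken, prioritized subset. investigate_alert is a
# placeholder for an async version of a full investigation loop.

async def investigate_alert(alert: str) -> str:
    await asyncio.sleep(0)  # placeholder for real async tool calls
    return f"{alert}: investigated in depth"

async def handle_burst(alerts: list[str]) -> list[str]:
    # One task per alert: a burst of thousands of alerts becomes
    # thousands of concurrent investigations, not a backlog.
    return await asyncio.gather(*(investigate_alert(a) for a in alerts))

print(asyncio.run(handle_burst([f"alert-{i}" for i in range(5)])))
```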
How does agentic security handle the scale and speed of modern cyber threats? What role does machine speed play in its ability to manage vast volumes of alerts?
Agentic AI excels in parallel processing—it can handle multiple alerts simultaneously, analyzing and investigating each one in depth. But there’s another layer to this: contextual awareness. AI doesn’t just brute force its way through alerts. Over time, it learns the nuances of the specific environment it’s protecting, understanding the unique context of the organization.
For example, if the AI encounters an IP address that was previously flagged in an internal database as part of routine network scans, it can correlate that information and dismiss the alert as benign. A human analyst would struggle to remember such details across large volumes of alerts. AI, however, can handle this context effortlessly, allowing it to focus on real threats.
This ability to correlate information, learn from past data, and adapt to the specific environment gives agentic AI a significant advantage when managing vast alert volumes. Additionally, AI doesn't forget, and what it learns handling one situation for one customer can be applied to every other customer.
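A hedged sketch of that kind of context lookup follows; the store and its entries are hypothetical, but they show how remembered, environment-specific context can close an alert before it ever reaches a person.

```python
# Illustrative context store: environment-specific facts the system has
# learned over time. The entries and field names are hypothetical.
KNOWN_BENIGN = {
    "10.4.2.17": "internal vulnerability scanner, routine network scans",
}

def enrich_and_screen(source_ip: str) -> dict:
    note = KNOWN_BENIGN.get(source_ip)
    if note:
        # Remembered context dismisses the alert without human review.
        return {"verdict": "benign", "reason": note}
    # No prior context: route to the full investigation loop instead.
    return {"verdict": "investigate", "reason": "no prior context"}

print(enrich_and_screen("10.4.2.17"))
print(enrich_and_screen("203.0.113.9"))
```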
Can you share a real-world case where agentic security significantly reduced the time to respond to a complex cyber threat? What lessons were learned from that instance?
We’re already seeing agentic AI outperform human analysts in terms of speed and thoroughness. Every investigation done by the system is significantly faster—by orders of magnitude—compared to a human analyst. It’s not just about speed, though; it’s also about the depth of the investigation. Agentic AI can follow leads and analyze data far more thoroughly than manual methods allow.
One of the key takeaways is that while people are still hesitant to trust fully autonomous systems for critical responses, agentic AI can provide a step-by-step explanation of what it’s doing. This allows human analysts to review and approve actions before they’re taken, creating a hybrid model where AI does the heavy lifting and humans retain oversight.
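One way to picture that hybrid model is an approval gate like the sketch below; the structure is an assumption for illustration, not an actual product interface.

```python
from dataclasses import dataclass, field

# Illustrative approval gate: the agent proposes an action with its
# step-by-step reasoning attached, and nothing executes until a human
# approves it.

@dataclass
class Proposal:
    action: str                                   # e.g., "isolate_host"
    target: str                                   # e.g., a hostname
    reasoning: list[str] = field(default_factory=list)  # the agent's steps

def review(proposal: Proposal, approver: str | None) -> str:
    if approver is None:
        return "pending human review"             # sits in the analyst queue
    # Approved: the response action would execute here.
    return f"{proposal.action} on {proposal.target}, approved by {approver}"

p = Proposal("isolate_host", "WS-042",
             ["sandbox verdict: malicious", "EDR: file executed on WS-042"])
print(review(p, None))
print(review(p, "analyst.j.doe"))
```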
As agentic AI gains more autonomy, what ethical considerations should be taken into account, especially regarding decisions made without human oversight?
The critical point here is that agentic AI systems in cybersecurity are not general-purpose AIs. These aren’t systems like Skynet from science fiction—they are highly specialized agents designed to perform specific tasks within a cybersecurity context. By design, they can’t make decisions outside of the scope of their role.
In fact, agentic AI can actually enhance privacy protections because it doesn’t require a human to review sensitive data, like browsing history or email content. The AI focuses only on determining whether something poses a threat, without concern for the personal information involved. In some ways, this can lead to greater privacy compared to traditional methods, where analysts might unintentionally access personal data.
Another important consideration is transparency. Our agentic AI systems are not a black box—they provide a clear audit trail, showing what tools were used, how decisions were made, and what actions were taken. This level of auditability ensures that humans can review the system’s actions, understand its decisions, and retain control.
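A minimal sketch of what such an audit record might contain, with field names that are assumptions rather than any specific vendor's schema:

```python
import json, time

# Illustrative audit record, appended after every tool call so a human
# can replay what was used, why, and what came back.

def audit(log_path: str, tool: str, rationale: str, result: str) -> None:
    record = {
        "timestamp": time.time(),
        "tool": tool,            # which capability was invoked
        "rationale": rationale,  # the agent's stated reason for this step
        "result": result,        # what the tool returned
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only JSONL trail

audit("audit.jsonl", "edr_investigate", "sandbox flagged attachment",
      "file executed on WS-042")
```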
As AI continues to advance, the potential for agentic AI to revolutionize cybersecurity is immense. The combination of speed, scale, and contextual understanding makes it far superior to traditional automation. But while the technology is powerful, we’re committed to ensuring it operates with full transparency and ethical oversight, so organizations can trust the systems they rely on to protect them.