How AI will shape the next generation of cyber threats

In this Help Net Security interview, Buzz Hillestad, CISO at Prismatic, discusses how AI’s advancement reshapes cybercriminal skillsets and lowers entry barriers for potential attackers.

Hillestad highlights that, as AI tools become more accessible, organizations must adapt their defenses to anticipate evolving threats.

AI-driven attacks

How might the development of AI technology impact the skillsets required for cybercriminals? Will AI lower the barrier to entry for potential attackers?

The development of AI technology will dramatically shift the skillsets required for cybercriminals, effectively lowering the barrier to entry. Traditionally, sophisticated cyberattacks required significant technical expertise — attackers needed to understand coding, malware engineering, and advanced exploitation techniques. With the rise of AI, these barriers are diminishing.

AI-powered attack tools are becoming increasingly accessible, and many will be packaged as easy-to-use products available on the dark web. This democratization means that individuals or groups who previously lacked the skills to execute complex attacks will gain access to powerful, AI-enhanced systems. These pre-built AI attack kits could allow anyone, from rogue insiders to disgruntled employees and activist groups, to launch sophisticated cyberattacks.

In essence, AI turns advanced attack strategies into point-and-click operations, removing the need for deep technical knowledge. Attackers won’t need to write custom code or conduct in-depth research to exploit vulnerabilities. Instead, AI systems will analyze target environments, find weaknesses and even adapt attack patterns in real time without requiring much input from the user.

This shift greatly widens the pool of potential attackers. Organizations that have traditionally focused on defending against nation-state actors and professional hacker groups will now have to contend with a much broader range of threats. Eventually, AI will empower individuals with limited tech knowledge to execute attacks rivaling those of today’s most advanced adversaries.

To stay ahead, defenders must match this acceleration with AI-powered defenses that can predict, detect and neutralize threats before they escalate. In this new environment, success will depend not just on reacting to attacks but on anticipating them.

Organizations will need to adopt predictive AI capabilities that can evolve alongside the rapidly shifting threat landscape, staying one step ahead of attackers who now have unprecedented power at their fingertips.

How should cybersecurity professionals distinguish between AI-generated threats and human-led attacks?

Distinguishing between AI-generated threats and human-led attacks will require cybersecurity professionals to focus on patterns, speed, and complexity. AI-generated attacks behave differently from human-led efforts, particularly in how they adapt, scale and execute over time. Below are some key strategies to help professionals differentiate between these two types of attacks:

1. Look for speed and scale beyond human capabilities

AI-generated threats operate at speeds far beyond human capability. They can launch multiple attacks simultaneously, scan networks in milliseconds and adjust tactics in real time. Human-led attacks, by contrast, tend to move more slowly, with noticeable pauses between reconnaissance, exploitation and attack phases. If an attack shows signs of instant adaptation or overwhelming volume, it’s likely AI-driven.
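As a minimal illustration, the sketch below (in Python, with purely hypothetical thresholds) shows the kind of per-source rate check defenders might apply: activity sustained far beyond plausible human speed is flagged as likely automated.

```python
from collections import deque
import time

# Minimal sketch: flag activity whose event rate exceeds what a human operator
# could plausibly produce. Threshold and window size are illustrative
# assumptions, not recommended values.
HUMAN_MAX_EVENTS_PER_SEC = 5   # hypothetical ceiling for manual activity
WINDOW_SECONDS = 10

class BurstDetector:
    def __init__(self):
        self.events = deque()  # timestamps of recent events from one source

    def record(self, timestamp: float) -> bool:
        """Record an event; return True if the source looks automated."""
        self.events.append(timestamp)
        # Drop events that fall outside the sliding window.
        while self.events and timestamp - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        rate = len(self.events) / WINDOW_SECONDS
        return rate > HUMAN_MAX_EVENTS_PER_SEC

detector = BurstDetector()
now = time.time()
# Simulate 200 requests arriving within one second -- far beyond human speed.
flags = [detector.record(now + i * 0.005) for i in range(200)]
print("automated source suspected:", any(flags))
```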

2. Identify complex, nonlinear attack patterns

AI-based attacks often exhibit nonlinear or unconventional patterns — combining several different attack methods in a way that appears erratic to human analysts but is part of a complex algorithm. In contrast, human attackers often follow more predictable steps, such as phishing campaigns leading to credential theft or ransomware. If the threat behaves unpredictably but remains highly effective, it is a strong indicator of AI involvement.

3. Use AI-powered defenses to detect anomalous behaviors

AI systems are best equipped to detect AI-generated threats. These systems can identify subtle anomalies and evolving tactics that may be difficult for human analysts to spot. Behavioral analytics tools can help identify unusual activity that matches the fingerprint of an AI-driven attack, such as automated lateral movement across systems or hyper-adaptive malware.
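As an illustrative sketch of this kind of behavioral analytics, the example below trains an off-the-shelf anomaly detector on baseline activity and flags a hypothetical burst of lateral movement; the features and numbers are invented for demonstration, not drawn from a real deployment.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative sketch of behavioral anomaly detection. The feature vector
# (logins per hour, hosts touched, MB transferred) is a made-up example;
# real deployments would derive features from their own telemetry.
rng = np.random.default_rng(0)

# Baseline behavior: modest login counts, few hosts, moderate data volume.
baseline = rng.normal(loc=[5, 2, 50], scale=[2, 1, 10], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Hypothetical burst of automated lateral movement: many hosts, heavy traffic.
suspicious = np.array([[40, 25, 900]])
print(model.predict(suspicious))  # -1 indicates an outlier relative to baseline
```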

4. Assess the target’s strategic value and intent

AI-driven attacks often target systems where real-time exploitation offers a high payoff, such as financial systems, supply chains and critical infrastructure. Human-led attacks, however, may reflect more deliberate actions, such as espionage or protest-related hacks. Understanding the intent and target selection can help professionals infer whether they are dealing with a sophisticated AI tool or a human-driven operation.

5. Monitor for misuse of commoditized AI tools

Since AI-based attack tools are increasingly bought and sold on the black market, professionals should monitor for signs of their use. A sudden rise in the effectiveness of an attack from previously unsophisticated actors could indicate that they are leveraging commoditized AI tools. Indicators such as polished phishing emails, automated credential stuffing or dynamic malware deployment may point toward AI involvement.
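One such indicator can be checked with very little machinery. The hedged sketch below flags source IPs that generate many failed logins spread across many distinct accounts — a common credential-stuffing signature; the thresholds are illustrative assumptions.

```python
from collections import defaultdict

# Minimal sketch of one commoditized-attack indicator: a single source IP
# producing failed logins across many distinct accounts in a short window.
# Thresholds are illustrative assumptions.
FAILED_LOGIN_THRESHOLD = 20
DISTINCT_ACCOUNT_THRESHOLD = 10

def credential_stuffing_suspects(failed_logins):
    """failed_logins: iterable of (source_ip, account) tuples from recent logs."""
    attempts = defaultdict(list)
    for source_ip, account in failed_logins:
        attempts[source_ip].append(account)
    suspects = []
    for source_ip, accounts in attempts.items():
        if len(accounts) >= FAILED_LOGIN_THRESHOLD and len(set(accounts)) >= DISTINCT_ACCOUNT_THRESHOLD:
            suspects.append(source_ip)
    return suspects

# Example: one IP spraying 30 different accounts vs. a user mistyping a password.
sample = [("203.0.113.7", f"user{i}") for i in range(30)] + [("198.51.100.4", "alice")] * 3
print(credential_stuffing_suspects(sample))  # ['203.0.113.7']
```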

6. Pay attention to response and adaptation tactics

Human-led attacks often leave room for error. Attackers may react slowly to defenses or fail to adapt once detected. AI-powered threats, on the other hand, can rapidly switch tactics, pivoting to new vulnerabilities the moment defenses change. If a threat demonstrates continuous adaptation without delays, it likely indicates AI-based orchestration.

What role do regulatory bodies have in managing the risks associated with AI-driven cyberattacks? Are existing frameworks sufficient, or is there a need for new policies?

Regulatory bodies play a crucial role in managing the risks associated with AI-driven cyberattacks by establishing standards, ensuring compliance, and encouraging transparency across industries. However, as AI-based threats evolve rapidly, existing frameworks may not be sufficient to address the full scope of risks. To stay ahead, regulators will need to adapt current policies and introduce new ones that are more attuned to AI’s unique challenges.

Existing frameworks: Strengths and gaps

Current regulatory frameworks, such as the General Data Protection Regulation (GDPR) and the NIST Cybersecurity Framework, provide a solid foundation for data protection, incident response and risk management. However, these policies were primarily designed for human-driven threats, and they may lack the agility to deal with the speed and complexity of AI-driven attacks. For instance:

  • Real-time adaptive attacks by AI are difficult to address through static compliance checklists.
  • AI accountability remains a gray area. There are no clear rules on who is responsible when an AI system goes rogue, especially when AI is being used both by defenders and attackers.

Additionally, cross-border cyberattacks and AI models trained on global data make it challenging for national regulatory bodies to manage AI risks effectively. Regulatory efforts must move toward international coordination to establish a cohesive response.

Are there ethical concerns when deploying AI for cybersecurity defense, especially in data privacy or automated response measures?

Yes, there are several ethical concerns when deploying AI for cybersecurity defense, particularly in data privacy, bias, transparency, accountability and automated responses. As AI becomes more integrated into security systems, organizations must carefully balance the need for protection with the ethical implications of their practices.

1. Data privacy and surveillance risks

AI-based cybersecurity solutions often rely on large datasets to identify patterns and detect threats. These datasets can include sensitive personal information, user behavior logs or communications data. Collecting and analyzing this information, even for defensive purposes, raises concerns about:

  • User consent: Are users aware that their data is being monitored by AI systems?
  • Overreach in data collection: AI-powered tools may collect more data than necessary, violating privacy regulations like GDPR or CCPA.
  • Data misuse: There is always a risk that the data collected for cybersecurity could be repurposed for other business or government surveillance activities.

Organizations must ensure compliance with privacy laws and adopt transparent data-handling practices. They should use data minimization techniques (only collecting the data necessary for the AI to function) and apply differential privacy where possible to protect individuals’ identities.
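For readers unfamiliar with the technique, the short sketch below shows the core of the Laplace mechanism commonly used for differential privacy: noise calibrated to a privacy parameter epsilon is added to an aggregate count before it is shared. The epsilon value and the metric are illustrative only.

```python
import numpy as np

# Minimal sketch of differential privacy applied to an aggregate metric before
# it leaves the security pipeline. Epsilon and the metric are illustrative;
# a real deployment needs a proper privacy budget and threat model.
def laplace_noisy_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
    """Return a count perturbed with Laplace noise calibrated to sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# Example: report how many users triggered a detection rule without revealing
# whether any specific individual is in the count.
print(round(laplace_noisy_count(42, epsilon=0.5), 1))
```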

2. Bias in AI algorithms and decision-making

AI algorithms can inherit biases from the data they are trained on, leading to unfair or discriminatory outcomes. For example:

  • False positives: An AI system might disproportionately flag certain users, regions, or behaviors as suspicious, leading to unjustified investigations or restrictions.
  • Bias in anomaly detection: If the AI is trained primarily on historical data, it may overlook new types of attacks or disproportionately target uncommon but legitimate activities.

These biases can have serious implications for fairness and accountability, particularly if AI tools are used to restrict access, terminate sessions or impose penalties on users. Organizations need diverse training datasets and rigorous bias detection frameworks to reduce these risks.
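A basic bias check can be as simple as comparing error rates across groups. The sketch below computes the false-positive rate of an alerting model per user group from hypothetical records; a large gap between groups would warrant investigation.

```python
from collections import defaultdict

# Minimal sketch of one bias check: comparing false-positive rates of an
# alerting model across user groups (e.g., region or department).
# Group labels and records are hypothetical.
def false_positive_rate_by_group(records):
    """records: iterable of (group, flagged: bool, actually_malicious: bool)."""
    fp = defaultdict(int)         # benign activity that was flagged
    negatives = defaultdict(int)  # all benign activity
    for group, flagged, malicious in records:
        if not malicious:
            negatives[group] += 1
            if flagged:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives if negatives[g]}

sample = [("region_a", True, False)] * 8 + [("region_a", False, False)] * 92 \
       + [("region_b", True, False)] * 1 + [("region_b", False, False)] * 99
print(false_positive_rate_by_group(sample))  # region_a flagged ~8x more often
```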

3. Accountability and transparency challenges

One of the biggest ethical concerns with AI in cybersecurity is the “black box” nature of many AI systems. These systems can make decisions that are difficult to understand or explain, which creates issues around:

  • Accountability: Who is responsible if the AI makes a mistake or fails to detect an attack?
  • Transparency: Can the organization or user understand how the AI reached its conclusion, especially if it leads to significant consequences like blocked access or flagged transactions?

To address these challenges, organizations should deploy explainable AI (XAI) systems that make their decisions more transparent to users and auditors. Transparency is essential for maintaining trust in automated cybersecurity defenses.
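Explainability does not always require heavyweight tooling. For a linear alerting model, the sketch below surfaces each feature's contribution to a decision (coefficient times feature value) so an analyst can see why an event was flagged; the feature names and data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal explainability sketch: for a linear alerting model, each feature's
# contribution to a decision is coefficient * feature value, which can be
# surfaced to an analyst alongside the alert. Names and data are illustrative.
feature_names = ["failed_logins", "new_device", "data_exfil_mb"]
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 2 * X[:, 2] > 1).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)

event = np.array([[3.0, 1.0, 5.0]])  # hypothetical flagged event
contributions = model.coef_[0] * event[0]
for name, value in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name}: {value:+.2f}")
print("alert probability:", round(model.predict_proba(event)[0, 1], 3))
```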

4. Risks of automated response measures

AI-powered cybersecurity systems often include automated response capabilities, such as blocking suspicious IP addresses, terminating user sessions, or quarantining files. While automation improves response speed, it introduces new risks:

  • False positives: An automated system might block legitimate traffic, disrupt critical operations or deny services to authorized users.
  • Overreaction: Automated tools may escalate responses unnecessarily, such as taking down an entire network segment due to a suspected threat, resulting in collateral damage.
  • Lack of human oversight: In certain cases, automated systems may react too quickly, without allowing human analysts time to review the situation or override the response.

Organizations must carefully balance automation with human oversight, especially for high-stakes decisions. Hybrid models combining AI-driven automation with human review can help mitigate the risks of false positives and overreactions.
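A hybrid policy can be expressed very compactly. The sketch below, with illustrative thresholds and action names, auto-blocks only high-confidence detections that would not disrupt business-critical services and routes everything else to an analyst queue.

```python
from dataclasses import dataclass

# Minimal sketch of a hybrid response policy: only high-confidence, low-impact
# detections are handled automatically; everything else waits for an analyst.
# The thresholds and action names are illustrative assumptions.
AUTO_BLOCK_CONFIDENCE = 0.95

@dataclass
class Detection:
    source_ip: str
    confidence: float        # model score between 0 and 1
    business_critical: bool  # would an automatic block disrupt key services?

def triage(detection: Detection) -> str:
    if detection.confidence >= AUTO_BLOCK_CONFIDENCE and not detection.business_critical:
        return "auto_block"         # safe to act without waiting for a human
    if detection.confidence >= 0.5:
        return "queue_for_analyst"  # likely a threat, but a human confirms first
    return "log_only"

print(triage(Detection("203.0.113.7", 0.98, business_critical=False)))  # auto_block
print(triage(Detection("10.0.0.12", 0.97, business_critical=True)))     # queue_for_analyst
```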

5. Ethical boundaries in offensive AI measures

While AI is primarily used for defense, some organizations may explore offensive AI tactics, such as deploying AI bots to disable malicious infrastructure or disrupt threat actors. These actions raise complex ethical and legal questions:

  • Legality: Offensive cyber operations may violate national or international laws, especially if they target infrastructure in another country.
  • Escalation risks: Offensive AI measures could provoke retaliation, escalating conflicts between organizations or nations.
  • Collateral damage: There is always the risk that offensive AI tools could inadvertently disrupt innocent users or systems.

Organizations must establish clear ethical guidelines for AI usage in both defensive and offensive contexts. Collaboration with legal teams and regulators is essential to ensure compliance with laws and avoid unintended consequences.

Conclusion: Striking an ethical balance

Deploying AI for cybersecurity defense brings both significant benefits and ethical challenges. Organizations must carefully navigate the risks related to data privacy, bias, transparency, automated responses and accountability. While automation can enhance security, it is critical to maintain human oversight to prevent errors and overreach.

By adopting explainable AI systems, reducing bias and aligning practices with privacy laws, organizations can ensure that their AI-driven defenses remain ethical and effective. As AI becomes an essential component of cybersecurity, organizations must balance innovation with responsibility, ensuring that security measures protect not only systems but also the rights and privacy of individuals.

How will AI-driven cyber threats evolve over the next five to ten years? Will there be entirely new types of attacks?

Over the next five to ten years, AI-driven cyber threats will evolve significantly, resulting in entirely new types of attacks that are faster, more adaptive, and more difficult to predict or mitigate. As AI technology advances, we can expect attackers to leverage it in increasingly creative ways, expanding the threat landscape beyond anything we currently experience. Below are some ways these threats will evolve and the novel attack methods likely to emerge.

1. Autonomous hacking systems and real-time adaptation

In the near future, we’ll see autonomous AI-powered hacking systems that can operate with minimal or no human intervention. These systems will use machine learning models to continuously adapt their tactics during an attack, switching strategies in real time based on the defenses they encounter. Unlike current attacks, which require human oversight, these AI systems will dynamically alter their approach to exploit new vulnerabilities instantly.

Example: An AI system could attempt multiple exploits simultaneously across a network. If one attack vector is blocked, the system will automatically try another without requiring input from a human operator.

2. AI-powered social engineering attacks

Social engineering attacks like phishing will become far more sophisticated, thanks to AI systems trained on vast datasets of personal information. Attackers will use AI to generate personalized phishing emails, phone calls or messages that are almost indistinguishable from legitimate communications. These attacks will manipulate individuals with highly contextual, emotionally targeted messages, increasing their success rates.

Example: Deepfake-powered phishing could involve AI-generated voice messages or video calls impersonating executives or colleagues, convincing employees to transfer funds or reveal sensitive information.

3. AI vs. AI: Counterattack systems and AI poisoning

As defenders increasingly adopt AI-driven security systems, attackers will develop techniques to target these AI models directly. One emerging threat is model poisoning, where attackers manipulate the training data or algorithms of security AI systems to make them ineffective. Additionally, we may see AI vs. AI warfare, where attackers deploy AI systems specifically designed to disrupt or outmaneuver defensive AI tools.

Example: An attacker could introduce misleading data into a company’s threat detection system, causing it to ignore real threats or trigger false positives, overwhelming security teams.

4. Weaponization of AI for physical systems and IoT devices

The rapid expansion of IoT devices and smart infrastructure will provide new targets for AI-driven attacks. In the coming years, we’ll likely see attackers using AI to manipulate physical systems, such as smart grids, transportation networks or healthcare devices, causing real-world disruptions. Attacks on connected devices will become more intricate, using AI to create self-propagating malware that spreads across networks autonomously.

Example: An AI-powered malware strain could target connected medical devices in a hospital, disabling life-saving equipment and demanding a ransom for restoration.

5. AI-enabled “swarm” attacks

Another novel threat will come in the form of swarm attacks, where multiple AI-powered agents operate in coordination to overwhelm a system. Each agent in the swarm could perform a distinct role — some conducting reconnaissance, others launching attacks — making the attack highly effective and difficult to counter. These swarm attacks will act like digital flash mobs, overwhelming defenses through sheer volume and speed.

Example: A coordinated swarm attack could target a financial institution, with some AI agents conducting denial-of-service attacks while others simultaneously infiltrate systems to steal data.

6. Democratization of advanced attack tools

AI-based attack systems will become increasingly commoditized and available on underground marketplaces. As AI technologies become cheaper and more accessible, even unsophisticated actors will be able to launch complex attacks. This democratization of cyberattacks will widen the pool of potential attackers, from rogue insiders to politically motivated groups, creating a far more unpredictable threat landscape.

Example: A disgruntled employee could use pre-built AI attack tools purchased from the dark web to disrupt operations or steal data from their former employer.

7. Predictive ransomware and AI-augmented extortion

Ransomware attacks will also evolve with the help of AI. In the near future, attackers will use predictive analytics to identify the most valuable targets and determine the optimal time to launch an attack. AI-powered ransomware will automatically adjust ransom demands based on the victim’s financial status or insurance coverage, increasing the likelihood of payment.

Example: An AI system could analyze a company’s financial data and insurance policy before launching a ransomware attack, setting a ransom amount that the company is likely to pay quickly to avoid operational disruption.

A major shift toward API attacks

In the coming years, we will see a paradigm shift in the focus of organized cybercrime. While ransomware currently dominates, attackers will increasingly turn their attention toward API vulnerabilities. With cloud adoption continuing to accelerate, APIs have become the lifeblood of digital ecosystems, but many of them remain poorly secured.

It’s estimated that 80% of APIs are exposed to varying degrees, presenting lucrative opportunities for attackers. This shift from ransomware to API exploitation will have profound implications for cloud providers, forcing them to rethink their security strategies.

Conclusion: A future of constant evolution and escalation

In the next five to ten years, AI-driven cyber threats will become more advanced, adaptive, and difficult to counter. New attack methods — such as autonomous hacking systems, AI-powered social engineering, swarm attacks, and model poisoning — will reshape the cybersecurity landscape. The sheer speed and sophistication of these attacks will outpace traditional security measures, requiring organizations to adopt equally advanced AI-powered defenses.

This evolving threat landscape will demand predictive capabilities, cross-disciplinary expertise and constant vigilance from cybersecurity professionals. The future will not just be a battle between attackers and defenders; it will be a contest between competing AI systems, with the winners being those who can adapt the fastest. Organizations that fail to evolve alongside these new threats risk being left defenseless against an unpredictable and rapidly changing cyber battlefield.
