The risks of autonomous AI in machine-to-machine interactions
In this Help Net Security interview, Oded Hareven, CEO of Akeyless Security, discusses how enterprises should adapt their cybersecurity strategies to address the growing need for machine-to-machine (M2M) security. According to Hareven, machine identities must be secured and governed similarly to human identities, with a focus on automation and policy-as-code.
How should enterprises reframe their cybersecurity strategies to account for machine-to-machine interactions?
Enterprises need to recognize that machine-to-machine interactions have fundamentally different identity requirements than human-to-system interactions. Traditional identity frameworks prioritize human authentication factors such as usernames, passwords, and multi-factor authentication (MFA). However, machines require a different approach, focusing on ownership, credentials (such as certificates, keys, and secrets), automation, and policy-as-code.
To reframe cybersecurity strategies for M2M interactions, organizations should:
1. Adopt a machine identity management strategy – Just as human identities require proofing and authentication, machines need secure credentials, automated discovery, and lifecycle management of keys and certificates.
2. Shift from user experience to developer experience – Machines interact programmatically, so policies and security measures should align with DevOps and DevSecOps workflows, ensuring security is embedded in automation processes.
3. Enforce policy-as-code – Machines rely on predefined security policies rather than user-driven permissions. Organizations should implement automated, low-code/no-code policy enforcement to govern machine interactions efficiently.
4. Prioritize automation and secure orchestration – Machine identities must be managed at scale, requiring automated provisioning, credential rotation, and revocation to prevent security gaps.
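As a concrete illustration of points 3 and 4, machine-access rules can be expressed as data and evaluated in code, with deny-by-default behavior. The sketch below uses a hypothetical policy schema and identity names (`ci-runner`, `billing-svc`); real deployments would use a dedicated policy engine or a secrets manager's built-in policy language rather than hand-rolled evaluation.

```python
from fnmatch import fnmatch

# Hypothetical policy-as-code: each rule grants a machine identity access
# to a secret path for a limited credential lifetime (TTL).
POLICIES = [
    {"identity": "ci-runner",   "path": "secrets/ci/*",       "action": "read", "ttl_seconds": 300},
    {"identity": "billing-svc", "path": "secrets/db/billing", "action": "read", "ttl_seconds": 600},
]

def authorize(identity: str, path: str, action: str):
    """Return the credential TTL if a rule matches; otherwise None (deny by default)."""
    for rule in POLICIES:
        if (rule["identity"] == identity
                and rule["action"] == action
                and fnmatch(path, rule["path"])):
            return rule["ttl_seconds"]
    return None

print(authorize("ci-runner", "secrets/ci/deploy-key", "read"))  # 300
print(authorize("ci-runner", "secrets/db/billing", "read"))     # None
```

Because the policy is plain data, it can be versioned, reviewed, and tested like any other code artifact, which is the core benefit of the policy-as-code approach.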
By treating machine identities with the same level of security and governance as human identities—but adapting strategies to their unique needs—organizations can mitigate risks and enable secure M2M communication.
How do adversarial AI attacks, such as model poisoning or data manipulation, impact M2M security?
Adversarial AI attacks, such as model poisoning and data manipulation, threaten M2M security by compromising automated authentication and processes. These attacks exploit vulnerabilities in how machine learning models exchange data and authenticate within M2M environments.
Model poisoning involves injecting malicious data or manipulating updates, undermining AI decision-making and potentially introducing backdoors. If AI systems accept compromised credentials or updates, security degrades, particularly in autonomous M2M systems, leading to cascading failures.
Data manipulation alters datasets, either by modifying stored data or intercepting data in transit. Without proper authentication, attackers can inject false data, disrupting operations in critical M2M environments like industrial automation, IoT, and cloud workloads.
To mitigate these risks, enterprises should:
- Enforce cryptographic integrity through signed updates and model authenticity verification.
- Secure model credentials (certificates, API keys, machine authentication tokens) to restrict unauthorized access.
- Continuously monitor for anomalies in model behavior, authentication attempts, and certificate usage.
- Apply zero trust principles, requiring authentication for all AI-driven M2M interactions, even internal ones.
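The first mitigation, cryptographic integrity for model updates, can be sketched in a few lines. The example below verifies an update against an HMAC signature using a shared demo key purely for illustration; production systems would typically use asymmetric signatures (e.g., Ed25519) and a managed key store.

```python
import hashlib
import hmac

# Demo-only shared signing key (illustrative assumption, not a real scheme).
SIGNING_KEY = b"demo-signing-key"

def sign_update(payload: bytes) -> str:
    """Produce an HMAC-SHA256 signature over a serialized model update."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_update(payload: bytes, signature: str) -> bool:
    """Accept an update only if its signature matches (constant-time compare)."""
    return hmac.compare_digest(sign_update(payload), signature)

update = b'{"model": "fraud-detector", "version": 7}'
sig = sign_update(update)
print(verify_update(update, sig))                         # True
print(verify_update(update + b"tampered-weights", sig))   # False
```

The point is that a poisoned or manipulated update fails verification before it ever reaches the model, rather than being detected after the fact.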
When you say machines, are we referring to IoT devices and industrial systems, or is it broader than that? Where are the risks of compromised secrets and machine identities most prevalent? Why?
When we talk about machine identities, we’re referring to a much broader scope than just IoT and industrial systems. Machines include both workloads and devices, as defined by Gartner. Workloads cover containers, virtual machines (VMs), applications, services, robotic process automation (RPA), and scripts, while devices include mobile, desktop, and IoT/OT systems.
The biggest security risks emerge in environments where machine-to-machine interactions happen at scale, particularly in cloud, DevOps, and automation-heavy infrastructures. Some of the most vulnerable areas include:
- Cloud and Kubernetes environments – Secrets like API keys, SSH certificates, and credentials are essential for securing workloads, but if mismanaged, they can lead to massive breaches.
- API gateways and automated services – API keys are frequently targeted, and when compromised, they can give attackers direct access to critical systems.
- DevOps pipelines – With automation driving infrastructure deployments, secrets embedded in CI/CD workflows are prime targets for attackers, increasing the risk of supply chain attacks.
- Databases and virtual machines – Exposed database credentials or SSH keys can lead to data exfiltration or ransomware attacks.
- IoT and OT systems – These are especially vulnerable due to weak security configurations and long-lived credentials that are rarely updated.
The fundamental challenge is that machines don’t authenticate like humans—they rely entirely on secrets. Without proper management, attackers can exploit exposed credentials to move laterally, access sensitive data, or disrupt critical operations. Since automation allows these attacks to scale rapidly, enterprises need centralized secret management, automated rotation, and strict access controls to protect machine identities effectively.
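The automated-rotation idea can be sketched with a toy in-memory store; all names are invented, and a real deployment would use a centralized secrets manager rather than application code like this.

```python
import secrets
import time

class SecretStore:
    """Toy central store: each secret carries an issue time and a maximum age,
    and is rotated automatically the first time it is fetched after expiry."""

    def __init__(self, max_age_seconds: float):
        self.max_age = max_age_seconds
        self._store = {}  # name -> (value, issued_at)

    def issue(self, name: str) -> str:
        value = secrets.token_urlsafe(32)
        self._store[name] = (value, time.monotonic())
        return value

    def get(self, name: str) -> str:
        value, issued = self._store[name]
        if time.monotonic() - issued > self.max_age:
            return self.issue(name)  # expired: rotate and hand out a fresh value
        return value

store = SecretStore(max_age_seconds=0.05)
old = store.issue("db-password")
time.sleep(0.1)
print(store.get("db-password") != old)  # True: the credential was rotated
```

Short maximum ages shrink the window in which a leaked credential is useful, which is why rotation is paired with centralized management and strict access controls.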
How do we address the security risks of autonomous decision-making in AI-driven M2M systems?
The key is implementing zero standing privileges (ZSP) to prevent AI-driven systems from having persistent, unnecessary access to sensitive resources. Instead of long-lived credentials, access is granted just-in-time (JIT) with just-enough privileges, based on real-time verification.
ZSP minimizes risk by enforcing ephemeral credentials, policy-based access control, continuous authorization, and automated revocation if anomalies are detected. This ensures that even if an AI system is compromised, attackers can’t exploit standing privileges to move laterally.
With AI making autonomous decisions, security must be dynamic. By eliminating unnecessary privileges and enforcing strict, real-time access controls, organizations can secure AI-driven machine-to-machine interactions while maintaining agility and automation.
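The ZSP/JIT pattern described above can be sketched as follows. The token format, scope strings, and revocation set are illustrative assumptions, not any particular product's API; the essentials are a short TTL, just-enough scope, and immediate revocation when anomalies are detected.

```python
import secrets
import time

REVOKED: set = set()  # tokens revoked when anomalous behavior is detected

def grant_jit(scope: str, ttl_seconds: float = 60.0) -> dict:
    """Issue a short-lived, scope-limited credential (no standing privilege).
    In a real system this step would first verify identity and policy."""
    return {
        "token": secrets.token_urlsafe(16),
        "scope": scope,
        "expires_at": time.monotonic() + ttl_seconds,
    }

def is_authorized(cred: dict, scope: str) -> bool:
    """A credential is valid only if unrevoked, unexpired, and scope-matched."""
    return (cred["token"] not in REVOKED
            and cred["scope"] == scope
            and time.monotonic() < cred["expires_at"])

cred = grant_jit("db:read", ttl_seconds=60)
print(is_authorized(cred, "db:read"))   # True
print(is_authorized(cred, "db:write"))  # False: just-enough privilege
REVOKED.add(cred["token"])
print(is_authorized(cred, "db:read"))   # False: revoked after an anomaly
```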
If you could advise CISOs and security teams on crucial steps to take today to prepare for M2M security challenges, what would they be?
The rapid rise of machine identities—which now outnumber human identities by 45 to 1—has made traditional, human-centric security approaches ineffective. To address M2M security challenges, CISOs and security teams should focus on three key areas:
1. Understand the differences between human and machine identities
Machines—such as containers, VMs, APIs, databases, and automated processes—authenticate differently than humans. Instead of usernames and passwords, they rely on secrets, certificates, and keys, making them prime targets for attackers. Security teams must shift their identity strategies accordingly.
2. Eliminate silos and adopt a unified platform
Current security tools for secrets, certificates, and identity governance are often disconnected and complex, driving up costs and increasing security gaps. Instead, organizations should consolidate into a single, cloud-native platform that manages all non-human identities and secrets centrally. This reduces misconfigurations, improves visibility, and enhances security.
3. Evolve organizational structure to own non-human identity management (NHIM)
M2M security isn’t just a technology problem—it requires a cross-functional approach across cloud security, DevOps, IAM, and risk management. CISOs should establish a dedicated NHIM program to ensure ongoing governance, enforcement, and adaptation to evolving machine identity threats.