Review: Adversarial AI Attacks, Mitigations, and Defense Strategies
Adversarial AI Attacks, Mitigations, and Defense Strategies shows how AI systems can be attacked and how defenders can prepare. It’s essentially a walkthrough of offensive and defensive approaches to AI security.
About the author
John Sotiropoulos is the Head Of AI Security at Kainos. A co-lead of the OWASP Top 10 for LLM Applications and OWASP AI Exchange, John leads alignment with other standards organizations and national cybersecurity agencies. He is also the OWASP lead at the US AI Safety Institute Consortium.
Inside the book
The book opens with a primer on machine learning. While many executives will not build models themselves, the early chapters provide a foundation for understanding how these systems are constructed and where their weak points lie. Concepts like supervised learning, model training, and neural networks are explained in plain language before the author shifts to the security dimension. That grounding is valuable for CISOs who need to evaluate vendor claims or understand the limits of what their teams are deploying.
The next sections take a hands-on turn, walking through how to set up an environment, build simple models, and then target them with adversarial techniques. Examples include poisoning data, inserting backdoors, and tampering with model code. These scenarios are technical, but their inclusion shows how easily vulnerabilities can be introduced into the machine learning pipeline.
Where the book becomes most useful for security leaders is in its coverage of defense. The author outlines mitigation strategies for each class of attack, from anomaly detection and adversarial training to supply chain safeguards and model provenance. Later chapters move into enterprise themes such as MLSecOps, threat modeling for AI systems, and secure by design approaches. These chapters make the case that AI security cannot be bolted on later. Instead, it needs to be embedded into development and operations, with governance and testing that mirror other mature security practices.
The book also covers how generative adversarial networks can be weaponized for deepfakes and misinformation, as well as how large language models are vulnerable to prompt injection and poisoning. These discussions are timely. Many organizations are experimenting with generative AI, and CISOs will find the examples useful when explaining risks to boards and business units.
Who is it for?
Overall, Adversarial AI Attacks, Mitigations, and Defense Strategies is a serious reference for those charged with securing AI systems. It offers practical demonstrations and strategic frameworks, giving security leaders the context they need to ask questions and guide their organizations toward safer AI adoption.