OpenGuardrails: A new open-source model aims to make AI safer for real-world use
When you ask a large language model to summarize a policy or write code, you probably assume it will behave safely. But what happens when someone tries to trick it into …
Google uncovers malware using LLMs to operate and evade detection
PromptLock, the AI-powered proof-of-concept ransomware developed by researchers at NYU Tandon and initially mistaken for an active threat by ESET, is no longer an isolated …
PortGPT: How researchers taught an AI to backport security patches automatically
Keeping older software versions secure often means backporting patches from newer releases. It is a routine but tedious job, especially for large open-source projects such as …
Shadow AI: New ideas emerge to tackle an old problem in new form
Shadow AI is the second-most prevalent form of shadow IT in corporate environments, 1Password’s latest annual report has revealed. Based on a survey of over 5,000 …
AI chatbots are sliding toward a privacy crisis
AI chat tools are taking over offices, but at what cost to privacy? People often feel anonymous in chat interfaces and may share personal data without realizing the risks. …
Faster LLM tool routing comes with new security considerations
Large language models depend on outside tools to perform real-world tasks, but connecting them to those tools often slows them down or causes failures. A new study from the …
Companies want the benefits of AI without the cyber blowback
51% of European IT and cybersecurity professionals said they expect AI-driven cyber threats and deepfakes to keep them up at night in 2026, according to ISACA. AI takes centre …
AI’s split personality: Solving crimes while helping conceal them
What happens when investigators and cybercriminals start using the same technology? AI is now doing both, helping law enforcement trace attacks while also being tested for its …
Most AI privacy research looks the wrong way
Most research on LLM privacy has focused on the wrong problem, according to a new paper by researchers from Carnegie Mellon University and Northeastern University. The authors …
When trusted AI connections turn hostile
Researchers have revealed a new security blind spot in how LLM applications connect to external systems. Their study shows that malicious Model Context Protocol (MCP) servers …
GPT needs to be rewired for security
LLMs and agentic systems already shine at everyday productivity, including transcribing and summarizing meetings, extracting action items, prioritizing critical emails, and …
A2AS framework targets prompt injection and agentic AI security risks
AI systems are now deeply embedded in business operations, and this introduces new security risks that traditional controls are not built to handle. The newly released A2AS …
Don't miss
- Henkel CISO on the messy truth of monitoring factories built across decades
- The hidden dynamics shaping who produces influential cybersecurity research
- UTMStack: Open-source unified threat management platform
- LLMs are everywhere in your stack and every layer brings new risk
- Building SOX compliance through smarter training and stronger password practices