LLM privacy policies keep getting longer, denser, and nearly impossible to decode
People expect privacy policies to explain what happens to their data. What users get instead is a growing wall of text that feels harder to read each year. In a new study, …
LLM vulnerability patching skills remain limited
Security teams are wondering whether LLMs can help speed up patching. A new study tests that idea and shows where the tools hold up and where they fall short. The researchers …
LLMs are everywhere in your stack and every layer brings new risk
LLMs are moving deeper into enterprise products and workflows, and that shift is creating new pressure on security leaders. A new guide from DryRun Security outlines how these …
AI agents break rules in unexpected ways
AI agents are starting to take on tasks that used to be handled by people. These systems plan steps, call tools, and carry out actions without a person approving every move. …
NVIDIA research shows how agentic AI fails under attack
Enterprises are rushing to deploy agentic systems that plan, use tools, and make decisions with less human guidance than earlier AI models. This new class of systems also …
AI vs. you: Who’s better at permission decisions?
A single tap on a permission prompt can decide how far an app reaches into a user’s personal data. Most of these calls happen during installation. The number of prompts keeps …
Social data puts user passwords at risk in unexpected ways
Many CISOs already assume that social media creates new openings for password guessing, but new research helps show what that risk looks like in practice. The findings reveal …
Small language models step into the fight against phishing sites
Phishing sites keep rising, and security teams are searching for ways to sort suspicious pages at speed. A recent study explores whether small language models (SLMs) can scan …
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an …
BlueCodeAgent helps developers secure AI-generated code
When AI models generate code, they deliver both power and risk for security teams. That tension is at the heart of the new tool called BlueCodeAgent, designed to …
The long conversations that reveal how scammers work
Online scammers often take weeks to build trust before making a move, which makes their work hard to study. A research team from UC San Diego built a system that does the …
Metis: Open-source, AI-driven tool for deep security code review
Metis is an open-source tool that uses AI to help engineers run deep security reviews on code. Arm’s product security team built Metis to spot subtle flaws that are often …
Don't miss
- Sensitive data of Eurail, Interrail travelers compromised in data breach
- PoC exploit for critical FortiSIEM vulnerability released (CVE-2025-64155)
- Microsoft shuts down RedVDS cybercrime subscription service tied to millions in fraud losses
- LinkedIn wants to make verification a portable trust signal
- QR codes are getting colorful, fancy, and dangerous