AI might be the answer for better phishing resilience
Phishing is still a go-to tactic for attackers, which is why even small gains in user training are worth noticing. A recent research project from the University of Bari looked …
LLM privacy policies keep getting longer, denser, and nearly impossible to decode
People expect privacy policies to explain what happens to their data. What users get instead is a growing wall of text that feels harder to read each year. In a new study, …
LLM vulnerability patching skills remain limited
Security teams are wondering whether LLMs can help speed up patching. A new study tests that idea and shows where the tools hold up and where they fall short. The researchers …
LLMs are everywhere in your stack and every layer brings new risk
LLMs are moving deeper into enterprise products and workflows, and that shift is creating new pressure on security leaders. A new guide from DryRun Security outlines how these …
AI agents break rules in unexpected ways
AI agents are starting to take on tasks that used to be handled by people. These systems plan steps, call tools, and carry out actions without a person approving every move. …
NVIDIA research shows how agentic AI fails under attack
Enterprises are rushing to deploy agentic systems that plan, use tools, and make decisions with less human guidance than earlier AI models. This new class of systems also …
AI vs. you: Who’s better at permission decisions?
A single tap on a permission prompt can decide how far an app reaches into a user’s personal data. Most of these calls happen during installation. The number of prompts keeps …
Social data puts user passwords at risk in unexpected ways
Many CISOs already assume that social media creates new openings for password guessing, but new research helps show what that risk looks like in practice. The findings reveal …
Small language models step into the fight against phishing sites
Phishing sites keep rising, and security teams are searching for ways to sort suspicious pages at speed. A recent study explores whether small language models (SLMs) can scan …
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an …
BlueCodeAgent helps developers secure AI-generated code
When AI models generate code, they deliver both power and risk for security teams. That tension is at the heart of the new tool called BlueCodeAgent, designed to …
The long conversations that reveal how scammers work
Online scammers often take weeks to build trust before making a move, which makes their work hard to study. A research team from UC San Diego built a system that does the …
Don't miss
- Okta users under attack: Modern phishing kits are turbocharging vishing attacks
- One-time SMS links that never expire can expose personal data for years
- More employees get AI tools, fewer rely on them at work
- Energy sector orgs targeted with AiTM phishing campaign
- Exposed training apps are showing up in active cloud attacks