GitHub Copilot CLI gets a second-opinion feature built on cross-model review
Coding agents make decisions in sequence: a plan is drafted, implemented, then tested. Any error introduced early compounds as subsequent steps build on the same flawed …
Google study finds LLMs are embedded at every stage of abuse detection
Online platforms are running large language models at every stage of content moderation, from generating training data to auditing their own systems for bias. Researchers …
IT talent looks the other way as wireless security incidents pile up
Enterprise wireless networks are supporting a growing mix of devices and applications, increasing operational demand and security exposure. The 2026 Cisco State of Wireless …
CISOs grapple with AI demands within flat budgets
Security spending continues to edge upward across large organizations, though the changes remain gradual and tightly managed. The 2026 RH-ISAC CISO Benchmark reflects a steady …
Click, wait, repeat: Digital trust erodes one login at a time
Sign-up forms that drag on, login steps that repeat, and access requests that take longer than expected have become a normal part of using digital services. These moments …
Financial groups lay out a plan to fight AI identity attacks
Generative AI tools have brought the cost of deepfake production low enough that criminals and state-sponsored actors now use them routinely against financial institutions. A …
Amazon sends AI agents into pen testing and DevOps
Amazon’s latest AI capabilities bring on-demand penetration testing through the AWS Security Agent, alongside the AWS DevOps Agent. “These agents are changing the way we …
Breaking out: Can AI agents escape their sandboxes?
Container sandboxes are part of routine AI agent testing and deployment. Agents use them to run code, edit files, and interact with system resources without direct access to …
AI frenzy feeds credential chaos: secrets leak through code, tools, and infrastructure
Code keeps moving through pipelines, and credentials continue to surface alongside it. GitGuardian’s State of Secrets Sprawl 2026 puts the count at 28.65 million new hardcoded …
Make OpenAI’s models misbehave and earn a reward
OpenAI’s public Safety Bug Bounty program focuses on AI abuse and safety risks across its products. The goal is to support safe and secure systems and reduce the risk of …
GitHub jumps on the bandwagon and will use your data to train AI
GitHub updated how it uses data to improve AI-powered coding assistance. Starting April 24, interaction data from Copilot Free, Pro, and Pro+ users may be used to train and …
AI SOC vendors are selling a future that production deployments haven’t reached yet
Vendors selling AI-powered security operations platforms have built their pitches around a consistent set of promises: autonomous threat investigation, dramatic reductions in …
Don't miss
- With AI’s help, North Korean hackers stumbled into a near-undetectable attack
- Apple fixes iPhone bug that let FBI retrieve deleted Signal messages (CVE-2026-28950)
- GopherWhisper APT group hides command and control traffic in Slack and Discord
- A year in, Zoom’s CISO reflects on balancing security and business
- Scenario: Open-source framework for automated AI app red-teaming