Microsoft
Microsoft: “Hack” this LLM-powered service and get paid

Microsoft, in collaboration with the Institute of Science and Technology Austria (ISTA) and ETH Zurich, has announced the LLMail-Inject Challenge, a competition to test and improve …

AI
Assessing AI risks before implementation

In this Help Net Security video, Frank Kim, SANS Institute Fellow, explains why more enterprises must consider many challenges before implementing advanced technology in their …

large language models
Could APIs be the undoing of AI?

Application programming interfaces (APIs) are essential to how generative AI (GenAI) functions with agents (e.g., calling upon them for data). But the combination of API and …

large language models
AI cybersecurity needs to be as multi-layered as the system it’s protecting

Cybercriminals are beginning to take advantage of the new malicious options that large language models (LLMs) offer them. LLMs make it possible to upload documents with hidden …

AI
ITSM concerns when integrating new AI services

Let’s talk about a couple of recent horror stories. Late last year, a Chevrolet dealership deployed a chatbot powered by a large language model (LLM) on their homepage. This …

Jake King
How companies increase risk exposure with rushed LLM deployments

In this Help Net Security interview, Jake King, Head of Threat & Security Intelligence at Elastic, discusses companies’ exposure to new security risks and …

Monocle
Monocle: Open-source LLM for binary analysis search

Monocle is open-source tooling backed by a large language model (LLM) for performing natural language searches against compiled target binaries. Monocle can be provided with a …

AI
Immediate AI risks and tomorrow’s dangers

“At the most basic level, AI has given malicious attackers superpowers,” Mackenzie Jackson, developer and security advocate at GitGuardian, told the audience last …

Meta
Meta plans to prevent disinformation and AI-generated content from influencing voters

Meta, the company that owns some of the biggest social networks in use today, has explained how it means to tackle disinformation related to the upcoming EU Parliament …

AI
How are state-sponsored threat actors leveraging AI?

Microsoft and OpenAI have identified attempts by various state-affiliated threat actors to use large language models (LLMs) to enhance their cyber operations. Threat actors …

email
Protecting against AI-enhanced email threats

Generative AI based on large language models (LLMs) has become a valuable tool for individuals and businesses, but also cybercriminals. Its ability to process large amounts of …
