How companies increase risk exposure with rushed LLM deployments
In this Help Net Security interview, Jake King, Head of Threat & Security Intelligence at Elastic, discusses companies’ exposure to new security risks and vulnerabilities as they rush to deploy LLMs.
King explains how LLMs pose significant risks to data privacy and outlines strategies for mitigating these security risks.
What are some of the primary vulnerabilities associated with LLMs that you have encountered in your research?
While many companies are jumping on the generative AI bandwagon and rushing to deploy LLMs as quickly as they can, this has increased their exposure to new risks and vulnerabilities.
The OWASP Top 10 for LLM security and safety highlights a number of discovery areas that Elastic has observed in both direct observations and security testing. These include capabilities such as prompt injection, where threat actors manipulate the LLM input to control the produced output, and sensitive data exposure. It is important to note that many vulnerabilities are associated with the usage of LLMs at this stage, and less frequently frameworks and toolchains – although this is an emerging area of concern for threat researchers.
How do LLMs pose risks to data privacy, and what specific threats should organizations be aware of?
Given their broad set of use cases, from content creation to translations to chatbots, LLMs collect high volumes of personal and corporate information. If any of this data is leaked, there could be significant breaches of privacy and security. It’s critical to understand that sensitive data exposure can range from credential exposure, document and strategy sharing, all the way to source code exposure, and beyond. Companies must approve and monitor the use of LLM technologies among their staff, as well as oversee customer usage of any LLM solutions released by their organization.
What are the most effective strategies for mitigating security risks in LLM deployments?
Continuous and frequent monitoring of systems deployed in both development and production environments remains critical to ensuring safe and secure operations. Much like many emerging technologies, the comprehensiveness of logging and monitoring of LLMs is limited. As such, each solution should be considered for risks, advantages and disadvantages.
This should be coupled with effective LLM supply chain management, where vendors are properly vetted and demonstrate strong security hygiene. Standardized system hardening to reduce an organization’s attack surface and LLM security best practices can also allow those looking to ship LLM technology into their production environments to maintain a low risk. In the case of prompt injection, for instance, some mitigation best practices include tuning the LLM to identify and prevent suspicious inputs or deploying mechanisms to validate and clean input prompts.
How important is governance in managing the security of LLMs, and what frameworks or guidelines would you recommend?
Strong governance is crucial to ensuring LLMs are used responsibly, fairly and safely. NIST and OWASP have released leading publications and continue to update and provide relevant context into the development, usage and integration of LLM technologies in the enterprise. These standards, while recent, represent a key resource for those looking to accelerate the secure usage of LLMs in their organization.
It is important to consider that governance and security frameworks will assist in the commercial adoption of LLM technologies, but likely not hinder adversarial groups taking advantage of mandated system controls. Controls that are required by LLM creators may be circumvented as we’ve seen in the past, and likely will continue to be circumvented.
How can industry collaboration improve the overall security of LLMs?
Transparency and knowledge sharing are key for enhancing industry collaboration on LLM security. Organizations and researchers should lead with openness around research and findings to ensure we can raise all ships together. Given the rapid development of LLM technology and the nature of adversarial targeting for these systems, it is imperative to release findings and research as a community rapidly and openly.