Akto provides security assessments for GenAI models
About 77% of organizations have adopted or are exploring AI in some capacity, driven by the push for more efficient and automated workflows.
With the increasing reliance on GenAI models and LLMs like ChatGPT, the need for robust security measures has become paramount, prompting Akto to launch its GenAI Security Testing solution.
“Akto has a new capability to scan APIs that leverage AI technology and this is fundamental for the future of application security. I invested early in building application security education for AI and I’m thrilled to see other security companies do the same for security assessment around AI technologies,” said Jim Manico, Former OWASP Global Board Member, Secure Coding Educator.
On average, an organization uses 10 GenAI models, and most LLMs in production receive data indirectly via APIs. As a result, vast amounts of sensitive data flow through LLM APIs, and securing these APIs is crucial to protecting user privacy and preventing data leaks. There are several ways in which LLMs can be abused today, leading to sensitive data leaks:
- Prompt injection vulnerabilities – The risk of unauthorized prompt injections, where malicious inputs can manipulate the LLM’s output, has become a major concern.
- Denial of Service (DoS) threats – LLMs are also susceptible to DoS attacks, where the system is overloaded with requests, leading to service disruptions. There’s been a rise in reported DoS incidents targeting LLM APIs in the last year.
- Overreliance on LLM outputs – Relying on LLMs without adequate verification mechanisms has led to data inaccuracies and leaks. Organizations are encouraged to implement robust validation processes, as the industry sees an increase in data leak incidents caused by overreliance on LLMs.
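The overreliance risk above comes down to trusting raw model output. A minimal sketch of the mitigation idea, with hypothetical function and field names (not Akto's implementation), validates an LLM's JSON response against an expected schema and redacts obvious sensitive patterns before anything downstream consumes it:

```python
import json
import re

# Hypothetical illustration of the "overreliance" mitigation: never trust
# raw LLM output. Enforce an expected structure and scrub sensitive-looking
# strings (here, email addresses) before downstream use.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def validate_llm_output(raw: str, required_keys: set) -> dict:
    """Parse LLM output as JSON, enforce required keys, and redact emails."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError("LLM output is not valid JSON") from exc
    if not required_keys.issubset(data):
        raise ValueError(f"missing keys: {required_keys - set(data)}")
    # Redact anything email-shaped in string fields.
    return {
        k: EMAIL_RE.sub("[REDACTED]", v) if isinstance(v, str) else v
        for k, v in data.items()
    }

result = validate_llm_output(
    '{"summary": "Contact alice@example.com for details", "score": 5}',
    {"summary", "score"},
)
```

A real pipeline would go further (schema libraries, PII detectors, allow-lists), but the principle is the same: treat model output as untrusted input.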
“Securing GenAI systems requires a multifaceted approach with the need to protect not only the AI from external inputs but also external systems that depend on their outputs,” said a core team member of the OWASP Top 10 for LLM AI Applications project.
On March 20, 2023, OpenAI’s ChatGPT suffered an outage caused by a vulnerability in an open-source library, which may have exposed payment-related information of some customers. More recently, on January 25, 2024, a critical vulnerability was discovered in AnythingLLM (8,000 GitHub stars), an open-source application that turns any document or piece of content into context an LLM can use during chat.
An unauthenticated API route (file export) could allow attackers to crash the server, resulting in a denial of service. These are only a few examples of security incidents involving LLM models.
Akto’s GenAI Security Testing solution addresses these challenges head-on. By leveraging advanced testing methodologies and algorithms, Akto provides comprehensive security assessments for GenAI models, including LLMs.
The solution incorporates a wide range of innovative features, including over 60 meticulously designed test cases that cover various aspects of GenAI vulnerabilities such as prompt injection, overreliance on specific data sources, and more. These test cases have been developed by Akto’s team of experts in GenAI security, ensuring the highest level of protection for organizations deploying GenAI models.
Currently, security teams manually test all the LLM APIs for flaws before release. Due to the time sensitivity of product releases, teams can only test for a few vulnerabilities. As hackers continue to find more creative ways to exploit LLMs, security teams need to find an automated way to secure LLMs at scale.
“Often input to an LLM comes from an end-user, or the output is shown to the end-user, or both. The tests try to exploit LLM vulnerabilities through different encoding methods, separators and markers. This specifically detects weak security practices where developers encode the input or put special markers around the input,” said Ankush Jain, CTO at Akto.io.
AI security testing identifies vulnerabilities in the security measures for sanitizing output of LLMs. It aims to detect attempts to inject malicious code for remote execution, cross-site scripting (XSS), and other attacks that could allow attackers to extract session tokens and system information. In addition, Akto also tests whether the LLMs are susceptible to generating false or irrelevant reports.
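The output-side checks described above can be sketched as scanning an LLM response for content that, if rendered unescaped, could execute in a browser or exfiltrate session data. The patterns below are a small illustrative subset, not a complete scanner:

```python
import re

# Hypothetical sketch of output-sanitization testing: flag LLM responses
# containing markup that could enable XSS or session-token theft if
# rendered without escaping.
SUSPICIOUS_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),         # inline script tags
    re.compile(r"\bjavascript:", re.IGNORECASE),     # javascript: URLs
    re.compile(r"on\w+\s*=", re.IGNORECASE),         # inline event handlers
    re.compile(r"document\.cookie", re.IGNORECASE),  # session-token access
]

def flag_unsafe_output(response: str) -> list:
    """Return the patterns an LLM response matches; an empty list means clean."""
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(response)]

clean = flag_unsafe_output("The capital of France is Paris.")
dirty = flag_unsafe_output('<script>fetch("/steal?c=" + document.cookie)</script>')
```

In practice such checks would sit alongside proper context-aware output encoding; pattern matching alone only surfaces candidates for review.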
“From Prompt Injection (LLM01) to Overreliance (LLM09), new vulnerabilities and breaches emerge every day. To build systems that are secure by default, it is critical to test them early for these ever-evolving threats. I’m excited to see what Akto has in store for my LLM projects,” said a core team member of the OWASP Top 10 for LLM AI Applications project.
To further emphasize the importance of GenAI security, a September 2023 Gartner survey revealed that 34% of organizations are either already using or implementing AI application security tools to mitigate the accompanying risks of GenAI, and another 56% of respondents said they are exploring such solutions, highlighting the critical need for robust security testing solutions like Akto’s.
As organizations strive to harness the power of AI, Akto stands at the forefront of ensuring the security and integrity of these transformative technologies. The launch of their GenAI Security Testing solution reinforces their commitment to innovation and their dedication to enabling organizations to embrace GenAI with confidence.