The most urgent security risks for GenAI users are all data-related

Regulated data (data that organizations have a legal duty to protect) makes up more than a third of the sensitive data being shared with GenAI applications, putting businesses at risk of costly data breaches, according to Netskope.

GenAI data security risks

The new Netskope Threat Labs research reveals that three-quarters of businesses surveyed now completely block at least one GenAI app, reflecting enterprise technology leaders' desire to limit the risk of sensitive data exfiltration.

However, with fewer than half of organizations applying data-centric controls to prevent sensitive information from being shared in input queries, most are behind in adopting the advanced data loss prevention (DLP) solutions needed to safely enable GenAI.
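To make the idea of a data-centric control concrete, here is a minimal, hypothetical sketch of the kind of check a DLP tool might run on a prompt before it reaches a GenAI app. The pattern names and the block/allow policy are illustrative assumptions, not Netskope's implementation.

```python
import re

# Illustrative patterns for regulated data; a real DLP engine would use
# far richer detection (classifiers, exact-data matching, fingerprinting).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{20,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block the prompt if any regulated-data pattern matches."""
    return not scan_prompt(prompt)
```

For example, `allow_prompt("Summarize this meeting")` would pass, while a prompt containing a Social Security number would be flagged and blocked.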

Enterprises embrace GenAI

Using global data sets, the researchers found that 96% of businesses are now using GenAI, a number that has tripled over the past 12 months. On average, enterprises now use nearly 10 GenAI apps, up from three last year, with the top 1% of adopters now using an average of 80 apps, up significantly from 14.

With the increased use, enterprises have experienced a surge in proprietary source code sharing within GenAI apps, accounting for 46% of all documented data policy violations. These shifting dynamics complicate how enterprises control risk, prompting the need for a more robust DLP effort.

There are positive signs of proactive risk management in the nuance of security and data loss controls organizations are applying: for example, 65% of enterprises now implement real-time user coaching to help guide user interactions with GenAI apps. According to the research, effective user coaching has played a crucial role in mitigating data risks, prompting 57% of users to alter their actions after receiving coaching alerts.
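The real-time coaching described above can be sketched as a simple interception step: rather than silently blocking a risky prompt, the tool warns the user and lets them reconsider. This is a hypothetical illustration of the concept; the `confirm` callback stands in for whatever dialog a real product would show.

```python
def coach_user(prompt: str, findings: list[str], confirm) -> bool:
    """Return True if the prompt may proceed, after coaching the user.

    `findings` is a list of sensitive-data categories detected in the
    prompt; `confirm` is a callable that shows a warning and returns the
    user's decision (True to proceed anyway).
    """
    if not findings:
        return True  # nothing risky detected; no coaching needed
    warning = (
        f"This prompt appears to contain: {', '.join(findings)}. "
        "Sharing regulated data with GenAI apps may violate policy. Proceed?"
    )
    return confirm(warning)
```

The research's 57% figure suggests that, in practice, a majority of users change course at exactly this decision point.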

“Securing GenAI needs further investment and greater attention as its use permeates through enterprises with no signs that it will slow down soon,” said James Robinson, CISO, Netskope. “Enterprises must recognize that GenAI outputs can inadvertently expose sensitive information, propagate misinformation or even introduce malicious content. It demands a robust risk management approach to safeguard data, reputation, and business continuity.”

It’s clear that GenAI will be the driver of AI investment in the short term and will introduce the broadest risk and impact to enterprise users. It is or will be bundled by default on major application, search, and device platforms, with use cases such as search, copy-editing, style/tone adjustment, and content creation.

The primary risk stems from the data users send to the apps, including data loss, unintentional sharing of confidential information, and inappropriate use of information (legal rights) from GenAI services. Currently, text-based apps (LLMs) see the most use, given their broader range of use cases, although GenAI apps for video, images, and other media are also a factor.

Data risks dominate GenAI security concerns

ChatGPT retains its dominance in popularity, with 80% of organizations using it, while Microsoft Copilot, which became generally available in January 2024, is third with 57% of organizations using it. Grammarly and Google Gemini (formerly Bard) retain high rankings.

Not only are organizations using more GenAI apps, but the amount of user activity within those apps is also increasing. While the overall percentage of users engaging with GenAI apps is still relatively low, the rate of increase is significant, going from 1.7% in June 2023 to over 5% in June 2024, nearly tripling in 12 months for the average organization.

Data is still the most critical asset to protect when GenAI apps are in use. Users are still the key actors in both causing and preventing data risk, and today the most urgent security risks for GenAI users are all data-related.

Unsurprisingly, DLP controls are growing in popularity as a GenAI data risk control. The share of organizations using DLP to control the types of data sent to GenAI apps rose from 24% in June 2023 to over 42% in June 2024, more than 75% year-over-year growth.

The increase in DLP controls reflects an understanding across organizations about how to effectively mitigate data risk amidst the larger, broad trend of increasing GenAI app use.

GenAI, with its ability to autonomously generate content, poses unique challenges. Enterprises must recognize that GenAI-generated outputs can inadvertently expose sensitive information, propagate misinformation, or even introduce malicious content. As such, it becomes crucial to assess and mitigate these risks comprehensively.

The focus should be on data risk from GenAI app usage, as data is at the core of GenAI systems.
