GenAI turning employees into unintentional insider threats

The amount of data being shared by businesses with GenAI apps has exploded, increasing 30x in one year, according to Netskope. The average organization now shares more than 7.7GB of data with AI tools per month, a massive jump from just 250MB a year ago.

GenAI apps risks

This includes sensitive data such as source code, regulated data, passwords and keys, and intellectual property, significantly increasing the risk of costly breaches, compliance violations, and intellectual property theft. 75% of enterprise users are accessing applications with GenAI features, creating a bigger issue security teams must address: the unintentional insider threat.

GenAI apps present a growing cybersecurity risk

90% of organizations have users directly accessing GenAI apps like ChatGPT, Google Gemini, and GitHub Copilot. 98% of organizations have users accessing apps that provide GenAI-powered features, like Gladly, Insider, Lattice, LinkedIn, and Moveworks.

GenAI adoption is rising in the enterprise according to many different measures. Still, none are as important from a data security perspective as the amount of data sent to GenAI apps: Every post or upload is an opportunity for data exposure.

“Despite earnest efforts by organizations to implement company-managed GenAI tools, our research shows that shadow IT has turned into shadow AI, with nearly three-quarters of users still accessing GenAI apps through personal accounts,” said James Robinson, CISO, Netskope. “This ongoing trend, when combined with the data in which it is being shared, underscores the need for advanced data security capabilities so that security and risk management teams can regain governance, visibility, and acceptable use over GenAI usage within their organizations.”

Companies lack control over GenAI data

Many organizations lack full or even partial visibility into how data is being processed, stored, and leveraged within indirect GenAI usage. Oftentimes, they’re choosing to apply a “block first and ask questions later” policy by explicitly allowing certain apps and blocking all others. Yet, security leaders must look to pursue a safe enablement strategy as employees seek efficiency and productivity benefits from these tools.

An example is DeepSeek, which Netskope found that 91% of enterprises had users trying to access DeepSeek AI within weeks of its launch in January 2025. At the time, most businesses had no security policy in place for DeepSeek, exposing them to unknown risks. Additionally, employees could be unknowingly feeding AI proprietary business data – exposing source code, intellectual property, regulated data, and even passwords into generative AI apps.

“Our latest data shows GenAI is no longer a niche technology; it’s everywhere,” said Ray Canzanese, Director of Netskope Threat Labs. “It is becoming increasingly integrated into everything from dedicated apps to backend integrations. This ubiquity presents a growing cybersecurity challenge, demanding organizations adopt a comprehensive approach to risk management or risk having their sensitive data exposed to third parties who may use it to train new AI models, creating opportunities for even more widespread data exposures.”

Over the past year, Netskope also observed the number of organizations running GenAI infrastructure locally has increased dramatically, going from less than 1% to 54% and this trend is expected to continue. Despite reducing risks of unwanted data exposure to third-party apps in the cloud, the shift to local infrastructure introduces new types of data security risks from supply chains, data leakage, and improper data output handling to prompt injection, jailbreaks, and meta prompt extraction. As a result, many organizations are adding locally-hosted GenAI infrastructure on top of cloud-based GenAI apps already in use.

The rise of shadow AI

While most organizations use GenAI, a small but continually growing percentage of users are actively using GenAI Apps. The number of people using GenAI apps in the enterprise has nearly doubled over the past year, with an average of 4.9% of people in each organization using GenAI apps.

GenAI app adoption in the enterprise has followed the typical pattern of new cloud services: individual users using personal accounts to access the app. The result is that the majority of GenAI app use in the enterprise can be classified as shadow IT, a term used to describe solutions being used without the knowledge or approval of the IT department.

A newer term, shadow AI, was coined specifically for the special case of AI solutions. The term “shadow“ in shadow IT and shadow AI is meant to evoke the idea that the apps are hidden, unofficial, and operating outside of standard processes. Even today, more than two years after the release of ChatGPT kicked off the GenAI craze, 72% of GenAI users are still using personal accounts to access ChatGPT, Google Gemini, Grammarly, and other popular GenAI apps at work.

“AI isn’t just reshaping perimeter and platform security—it’s rewriting the rules,” said Ari Giguere, VP of Security and Intelligence Operations at Netskope.

99% of organizations are enforcing policies to reduce the risks associated with GenAI apps. These policies include blocking all or most GenAI apps for all users, controlling which specific user populations can use GenAI apps, and controlling the data allowed into GenAI apps. The following sections break down the particular policies that are most popular.

Don't miss