ExtraHop protects organizations from accidental misuse of AI tools
ExtraHop released a new capability that gives organizations visibility into employees’ use of AI as a Service (AIaaS) and generative AI tools, such as OpenAI’s ChatGPT.
Organizations can now better understand their risk exposure and whether these tools are being used in accordance with their AI policies.
As generative AI and AIaaS are increasingly adopted within enterprise settings, C-level executives are concerned that proprietary data and other sensitive information are being shared with these services. While AIaaS offers productivity improvements across a range of industries, organizations must be able to audit employee use – and potential misuse – of these tools to protect against the accidental exposure of confidential data.
“Organizations using AIaaS solutions run the risk of employees sharing proprietary data, leading to the loss of IP and customer data,” said Chris Kissel, Research VP of Security Products, IDC. “ExtraHop is addressing this risk to the enterprise by giving customers a mechanism to audit compliance and help avoid the loss of IP. With its strong and rich background in network intelligence, ExtraHop can provide unparalleled visibility into the flow of data related to generative AI.”
To help determine whether sensitive data may be at risk, ExtraHop offers customers visibility into devices and users on their networks that are connecting to external AIaaS domains, the amount of data employees are sharing with these services, and in some cases, the type of data and individual files that are being shared.
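To illustrate the general idea behind this kind of visibility, here is a minimal sketch in Python that flags network flows destined for known AIaaS domains and tallies outbound bytes per device. The domain watchlist, record fields, and function names are hypothetical assumptions for illustration only; this is not ExtraHop’s implementation.

```python
# Hypothetical illustration: flag traffic to known AIaaS domains in flow
# records and tally outbound bytes per (device, domain). The watchlist and
# record schema are assumptions, not ExtraHop's product logic.
from collections import defaultdict

AIAAS_DOMAINS = {"api.openai.com", "chat.openai.com"}  # assumed watchlist

def audit_aiaas_traffic(flow_records):
    """Sum outbound bytes per (device, host) for flows to watched domains.

    Each record is assumed to look like:
    {"device": "10.0.1.5", "host": "api.openai.com", "bytes_out": 2048}
    """
    totals = defaultdict(int)
    for rec in flow_records:
        host = rec.get("host", "")
        # Match the watched domain itself or any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in AIAAS_DOMAINS):
            totals[(rec["device"], host)] += rec.get("bytes_out", 0)
    return dict(totals)

# Example: two flows from one workstation to an OpenAI API endpoint.
flows = [
    {"device": "10.0.1.5", "host": "api.openai.com", "bytes_out": 2048},
    {"device": "10.0.1.5", "host": "api.openai.com", "bytes_out": 512},
    {"device": "10.0.2.9", "host": "example.com", "bytes_out": 9000},
]
print(audit_aiaas_traffic(flows))
# {('10.0.1.5', 'api.openai.com'): 2560}
```

A report like this answers the first two questions the article raises, which devices are talking to AIaaS services and how much data they are sending; identifying the type of data or individual files shared would require deeper payload inspection than flow metadata provides.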
“Customers have expressed a real concern about employees sending proprietary data and other sensitive information into AI services, and until today, there has been no good way to assess the scope of this problem,” said Patrick Dennis, CEO, ExtraHop.
“Amid the proliferation of AIaaS, it’s extremely important that we give customers the tools they need to see what is happening across the network, what data is being shared, and what could be at risk. With this new capability, our goal is to ensure that they can reap the wide-ranging benefits of generative AI while still maintaining data protections,” added Dennis.