Enterprises walk a tightrope between AI innovation and security
AI/ML tool usage surged globally in 2024, with enterprises integrating AI into operations and employees embedding it in daily workflows, according to Zscaler.
The report reveals a 3,000+% year-over-year growth in enterprise use of AI/ML tools, highlighting the rapid adoption of AI technologies across industries to unlock new levels of productivity, efficiency, and innovation. Findings are based on analysis of 536.5 billion total AI and ML transactions in the Zscaler cloud from February 2024 to December 2024.
Enterprises are sending significant volumes of data to AI tools, totaling 3,624 TB, underscoring the extent to which these technologies are integrated into operations. However, this surge in adoption also brings heightened security concerns. Enterprises blocked 59.9% of all AI/ML transactions, signaling awareness of the potential risks associated with AI/ML tools, including data leakage, unauthorized access, and compliance violations.
Threat actors are also increasingly leveraging AI to amplify the sophistication, speed, and impact of attacks—forcing enterprises to rethink their security strategies.
“As AI transforms industries, it also creates new and unforeseen security challenges. Data is the gold for AI innovation, but it must be handled securely,” said Deepen Desai, Chief Security Officer at Zscaler.
ChatGPT dominates AI/ML transactions
ChatGPT emerged as the most widely used AI/ML application, driving 45.2% of identified global AI/ML transactions. Yet it was also the most-blocked tool, due to enterprises’ growing concerns over sensitive data exposure and unsanctioned use. Other frequently blocked applications included Grammarly, Microsoft Copilot, QuillBot, and Wordtune, pointing to broad use of AI for content creation and productivity gains.
Enterprises are walking an increasingly narrow tightrope between AI innovation and security. As AI adoption keeps growing, organizations will have to tighten the reins on risks while still harnessing the power of AI/ML to stay competitive.
AI is amplifying cyber risks, with usage of agentic AI and China’s open-source DeepSeek enabling threat actors to scale attacks. So far in 2025, we’ve seen DeepSeek challenge American giants like OpenAI, Anthropic, and Meta, disrupting AI development with strong performance, open access, and low costs. However, such advancements also introduce significant security risks.
Historically, the development of frontier AI models was restricted to a small group of elite “builders”—companies like OpenAI and Meta that poured billions of dollars into training massive foundational models. These base models were then leveraged by “enhancers” who built applications and AI agents on top of them, before reaching a broader audience of “adopters” or end users.
DeepSeek has disrupted this structure by dramatically lowering the cost of training and deploying base LLMs, making it possible for a much larger pool of players to enter the AI space. Meanwhile, alongside the release of its Grok 3 model, xAI has announced that Grok 2 will become open source, meaning that, together with the likes of Mistral’s Small 3 model, users have even more choice when it comes to open-source AI.
Industries are bumping up efforts to secure AI/ML transactions
The United States and India generated the highest AI/ML transaction volumes, reflecting the global shift toward AI-driven innovation. However, these changes aren’t occurring in a vacuum, and organizations in these and other geographies are grappling with challenges such as stringent compliance requirements, high implementation costs, and a shortage of skilled talent.
The finance and insurance sector accounted for 28.4% of all enterprise AI/ML activity, reflecting both widespread adoption and the critical functions the industry supports, such as fraud detection, risk modeling, and customer service automation. Manufacturing was second, accounting for 21.6% of transactions, likely driven by innovations in supply chain optimization and robotics automation.
Additional sectors, including services (18.5%), technology (10.1%), and healthcare (9.6%), are also increasing their reliance on AI, though each faces unique security and regulatory challenges that pose new risks and may slow the overall rate of adoption.
Industries are also bumping up efforts to secure AI/ML transactions, but the volume of blocked AI/ML activity varies. Finance and insurance blocks 39.5% of AI transactions. This trend aligns with the industry’s stringent compliance landscape and the need to safeguard financial and personal data.
Manufacturing blocks 19.2% of AI transactions, suggesting a strategic approach where AI is widely used but monitored closely for security risks, whereas services takes a more balanced approach, blocking 15% of AI transactions. On the other hand, healthcare blocks only 10.8% of AI transactions. Despite handling vast amounts of health data and PII, healthcare organizations are still lagging in securing AI tools, with security teams catching up to rapid innovation.
Deepfakes will become a massive fraud vector across industries
As enterprises integrate AI into their workflows, they must also confront the risks of shadow AI—the unauthorized use of AI tools that can lead to data leaks and security blind spots. Without proper controls, sensitive business information could be exposed, retained by third-party AI models, or even used to train external systems.
GenAI will elevate social engineering attacks to new levels in 2025 and beyond, particularly in voice and video phishing. With the rise of GenAI-based tooling, initial access broker groups will increasingly use AI-generated voices and video in combination with traditional channels. As cybercriminals adopt localized languages, accents, and dialects to increase their credibility and success rates, it will become harder for victims to identify fraudulent communication.
As enterprises and end users rapidly adopt AI, threat actors will increasingly capitalize on AI trust and interest through fake services and tools designed to facilitate malware, steal credentials, and exploit sensitive data.
Deepfake technology will fuel a new wave of fraud, extending beyond manipulated public figure videos to more sophisticated scams. Fraudsters are already using AI-generated content to create fake ID cards, fabricate accident images for fraudulent insurance claims, and even produce counterfeit X-rays to exploit healthcare systems.
As deepfake tools become more advanced and accessible—and their outputs more convincing—fraud will be harder to detect, undermining identity verification and trust in communications.
A strategic, phased approach is essential to securely adopting AI applications. The safest starting point is to block all AI applications to mitigate potential data leakage. Then, progressively integrate vetted AI tools with strict access controls and security measures to maintain full oversight of enterprise data.
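To make that phased model concrete, here is a minimal sketch in Python of a default-deny AI application policy with a progressively expanded allow-list. The application names, user groups, and policy fields are illustrative assumptions for this sketch, not Zscaler's product or any specific vendor's API.

```python
# Minimal sketch of a phased AI application access policy.
# Assumptions: app names, user groups, and fields are hypothetical,
# not any vendor's API or the report's recommended implementation.

from dataclasses import dataclass, field

@dataclass
class AIAppPolicy:
    # Phase 1: block everything by default to mitigate data leakage.
    default_action: str = "block"
    # Phase 2: progressively allow vetted tools, scoped to specific user groups.
    allowed_apps: dict[str, set[str]] = field(default_factory=dict)

    def allow(self, app: str, groups: set[str]) -> None:
        """Register a vetted AI app for the given user groups."""
        self.allowed_apps[app] = groups

    def decide(self, app: str, user_group: str) -> str:
        """Allow only vetted apps for authorized groups; otherwise default-deny."""
        if app in self.allowed_apps and user_group in self.allowed_apps[app]:
            return "allow"
        return self.default_action

policy = AIAppPolicy()
policy.allow("chatgpt-enterprise", {"engineering", "marketing"})

print(policy.decide("chatgpt-enterprise", "engineering"))  # allow
print(policy.decide("unvetted-ai-notetaker", "finance"))   # block
```

In practice, the allow decision would also trigger the security measures the report calls for, such as data loss prevention inspection and logging, so that vetted tools remain under full oversight rather than simply being exempted from blocking.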