11 GenAI cybersecurity surveys you should read
Generative AI stands at the forefront of technological innovation, reshaping industries and unlocking new possibilities across many domains. As integration of these technologies continues, however, a vigilant approach to ethical considerations and regulatory compliance is essential to ensure the benefits of generative AI in cybersecurity are realized responsibly and sustainably.
In this article, you will find excerpts from generative AI surveys we covered in 2023. These findings can help with future cybersecurity strategies.
Smaller businesses embrace GenAI, overlook security measures
In a survey of more than 900 global IT decision-makers, 89% of organizations consider GenAI tools like ChatGPT a potential security risk, yet 95% are already using them in some guise within their businesses.
ChatGPT’s popularity triggers global generative AI investment surge
While AI is not a new technology – companies have been investing heavily in predictive and interpretive AI for years – the announcement of the GPT-3.5 series from OpenAI in late 2022 captured the world’s attention and triggered a surge of investment in generative AI, according to IDC.
Tech leaders struggle to keep up with AI advances
New data reveals artificial intelligence is challenging organizations in significant ways, with only 15% of global tech leaders reporting they are prepared for the demands of generative AI and 88% saying stronger regulation of AI is essential, according to Harvey Nash.
Companies have good reasons to be concerned about generative AI
Despite the risks, optimism about the benefits of generative AI for business remains strong: 82% of respondents reported high confidence that GenAI gives them a competitive advantage.
Security leaders have good reasons to fear AI-generated attacks
Generative AI is likely behind the increases in both the volume and sophistication of email attacks that organizations have experienced in the past few months, and it’s still early days, according to Abnormal Security.
Only a fraction of risk leaders are prepared for GenAI threats
Companies’ top generative AI concerns include data privacy and cyber issues (65%), employees making decisions based on inaccurate information (60%), employee misuse and ethical risks (55%), and copyright and intellectual property risks (34%).
GenAI investments surge, anticipated to hit $143 billion by 2027
The GenAI software segments will see the fastest growth over the 2023-2027 forecast period, with GenAI platforms/models delivering a CAGR of 96.4%, followed by GenAI application development & deployment (AD&D) and applications software with an 82.7% CAGR.
Building GenAI competence for business growth
New competencies will be required to build and use GenAI models, such as prompt engineers who write and test prompts for GenAI systems. Every organization must create a new skills map covering core AI technologies and business capabilities to deploy GenAI at scale. Organizations should also build personalized training programs for key roles.
Companies still don’t know how to handle generative AI risks
One of the biggest concerns about generative AI is inaccurate output. When generative AI cannot generate a correct answer to a question, it may invent one, a phenomenon known as "artificial intelligence hallucination."
Generative AI lures DevOps and SecOps into risky territory
45% of SecOps leaders have already implemented generative AI into the software development process, compared to 31% for DevOps. SecOps leaders see greater time savings than their DevOps counterparts, with 57% saying generative AI saves them at least 6 hours a week compared to only 31% of DevOps respondents.
IT leaders alarmed by generative AI’s SaaS security implications
IT leaders must now factor the effects of generative AI, such as ChatGPT, into their overall SaaS security approach. 23% of respondents said generative AI applications are the most concerning SaaS security issue.