Six steps for security and compliance in AI-enabled low-code/no-code development
AI is quickly transforming how individuals create their own apps, copilots, and automations. This is enabling organizations to improve output and increase efficiency—all without adding to the burden of IT and the help desk.
But while this transformation makes software development accessible to all, it can also introduce serious cybersecurity risks. Fortunately, it doesn’t have to be an either/or situation; it’s possible to develop AI applications and remain secure and compliant. The key is for security leaders to grasp AI development’s inherent risks and create a strategy for overcoming them.
Low-code/no-code embraces AI
Low-code and no-code platforms, now augmented by AI, have become the standard way to make development accessible to everyone. This approach to building apps, automations, and copilots gives any member of an organization the power of a developer, regardless of coding ability. Facing constant pressure to innovate, tight deadlines, and limited internal resources, organizations have grown increasingly reliant on these technologies to boost productivity and efficiency.
According to Gartner, more than 70% of all new applications will be created with low-code/no-code technologies by 2025. Gartner also forecasts that by 2026, more than 80% of organizations will have used generative AI (GenAI) application programming interfaces (APIs) or models, or deployed GenAI-enabled applications in production environments, up from less than 5% in 2023.
This new group of “citizen developers” builds apps, automations, data flows, and more by interacting with copilots, the GenAI conversational interfaces embedded in the SaaS platforms where they work. They can even create their own copilots and share them through the development platforms’ stores.
Security risks of low code/no code
This new development scenario creates two primary risks. The first is scale: production environments are no longer home to dozens or hundreds of apps but to tens or even hundreds of thousands of apps, automations, and connections, built by users of varying technical backgrounds. The threat landscape expands accordingly, with account impersonation and data leakage chief among the threats discussed below.
The second risk comes from the platforms themselves. To make it easy for anyone to build an app, they ship with permissive default settings, and builders of any skill level can make mistakes during development, mistakes that create more work and worry for security teams. These include over-permissioning apps so that everyone in the organization can access them, exposing sensitive data in plaintext, and hard-coding secrets into apps.
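To make that last mistake concrete, here is a minimal sketch, in Python rather than a low-code flow, of the anti-pattern and a safer alternative. The key names and values are hypothetical; inside a low-code tool, the equivalent of the environment variable would be a platform-native secret store or key-vault connector.

```python
import os

# Anti-pattern: a credential pasted directly into the app or flow definition.
# Anyone who can view, export, or clone the app can read it.
API_KEY = "sk-example-123456"  # hypothetical hard-coded secret

# Safer: resolve the secret at runtime from a managed location.
# This fails loudly (KeyError) if the secret isn't provisioned,
# instead of silently shipping a plaintext default.
api_key = os.environ["CRM_API_KEY"]  # hypothetical variable name
```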
Organizations with security and compliance programs typically focus on what their professional developers are doing. Today, though, AI means anyone can be a developer, and IT no longer has full visibility into the apps and automations employees create. That’s a major problem for security teams, who can’t protect what they can’t see.
These risks affect almost all organizations, but especially companies in heavily regulated industries like healthcare and finance. As more individuals create more apps with the help of AI, more sensitive data is accessed by more systems, without complete visibility into who’s creating what. The security team doesn’t know which apps are accessing truly sensitive data, a blind spot that can lead to fines and greater regulatory scrutiny.
Six steps to regaining control while increasing productivity
These risks tempt organizations to forbid workers and third-party users from using these development tools, but that approach won’t fly in the real world. Once people find a helpful tool, they are reluctant to give it up, and forcing them to do so could reduce productivity, efficiency, and innovation. What’s more, security leaders are now expected to enable business strategy rather than merely act as gatekeepers.
The goal, then, isn’t to prohibit these tools but to make using them safer. As noted earlier, visibility is paramount: the security team needs to know what tools people are using and what they’re building with them, and it needs a full picture of each app’s business impact.
Six elements are necessary for security teams to achieve this level of visibility and act on the insights it yields:
- Make security the priority. Create rules, and meet with both professional and citizen developers to ensure they follow your company’s guidelines when using GenAI to build apps, automations, and bots.
- Find every app that was built with and/or contains AI, then determine the business context for each resource: who uses it and why, what data it accesses, and so on (see the inventory sketch after this list).
- Make sure that apps and automations requiring access to sensitive data carry the proper data-sensitivity tags, and that they’re covered by appropriate access controls, identity and anomaly-detection tooling, and authentication rules (a tagging check is sketched after this list).
- Assess every resource for threats so that the security team understands how to prioritize alerts, violations and more.
- Use consistent vulnerability scanning to spot insecure and/or misconfigured apps while they are being built (see the definition-scanning sketch after this list).
- Use the “least privilege” approach to ensure that only the appropriate people have access to each app. This counters the default permissions on many development platforms, which grant anyone in the tenant or directory access to every app (see the over-sharing check after this list).
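For the inventory step, here is a minimal sketch assuming a hypothetical admin API at platform.example.com; every endpoint and field name below is an assumption, not any specific vendor’s interface.

```python
import requests

BASE = "https://platform.example.com/api"  # hypothetical admin API
HEADERS = {"Authorization": "Bearer <admin-token>"}  # placeholder credential

def inventory_apps():
    """Yield every app in the tenant along with its business context."""
    apps = requests.get(f"{BASE}/apps", headers=HEADERS, timeout=30).json()
    for app in apps:
        yield {
            "name": app["name"],
            "owner": app["owner"],                      # who built it
            "shared_with": app.get("sharedWith", []),   # who can run it
            "connections": app.get("connections", []),  # data sources it touches
            "uses_ai": bool(app.get("aiPlugins")),      # built with or containing AI
        }

for record in inventory_apps():
    print(record)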
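Building on those hypothetical inventory records, a tagging check for the sensitive-data step might look like the following; the connector names and the `sensitivity_tag` field are illustrative.

```python
SENSITIVE_CONNECTORS = {"sql-hr", "sharepoint-finance"}  # hypothetical data-source names

def missing_sensitivity_tag(record: dict) -> bool:
    """Flag apps that touch sensitive sources but carry no sensitivity tag."""
    touches_sensitive = bool(SENSITIVE_CONNECTORS & set(record["connections"]))
    return touches_sensitive and not record.get("sensitivity_tag")

record = {"name": "Payroll Sync", "connections": ["sql-hr"], "sensitivity_tag": None}
if missing_sensitivity_tag(record):
    print(f"Tag required: {record['name']} touches a sensitive data source")
```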
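One lightweight form of the scanning step is to check exported app definitions for plaintext secrets before they reach production. The patterns below are illustrative rather than exhaustive; a production scanner would rely on a maintained ruleset.

```python
import pathlib
import re

# Illustrative patterns for common secret shapes.
SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|password|secret)\s*[:=]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan_definition(path: pathlib.Path) -> list[str]:
    """Return the patterns that match in one exported app definition."""
    text = path.read_text(errors="ignore")
    return [p.pattern for p in SECRET_PATTERNS if p.search(text)]

for export in pathlib.Path("exports").glob("*.json"):  # assumed export folder
    if findings := scan_definition(export):
        print(f"{export.name}: possible hard-coded secret(s): {findings}")
```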
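Finally, a sketch of the over-sharing check behind least privilege: flag any app whose sharing scope falls back to a tenant-wide group. The group names and sample records are hypothetical, shaped like the inventory sketch above.

```python
TENANT_WIDE = {"Everyone", "All Users"}  # placeholder names for tenant-wide principals

apps = [  # hypothetical inventory records
    {"name": "Expense Bot", "owner": "jdoe", "shared_with": ["Everyone"]},
    {"name": "HR Lookup", "owner": "asmith", "shared_with": ["hr-team"]},
]

for app in apps:
    if TENANT_WIDE & set(app["shared_with"]):
        print(f"Over-shared: {app['name']} (owner: {app['owner']}); restrict to named users or groups")
```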