Confidential AI: Enabling secure processing of sensitive data

In this Help Net Security interview, Anand Pashupathy, VP & GM, Security Software & Services Division at Intel, explains how Intel’s approach to confidential computing, particularly at the silicon level, enhances data protection for AI applications and how collaborations with technology leaders like Google Cloud, Microsoft, and Nvidia contribute to the security of AI solutions.

Why is data protection particularly critical for AI adoption?

Many companies today have embraced AI in a variety of ways, including leveraging its capabilities to analyze and make use of massive quantities of data. Organizations have also become more aware of how much of that processing occurs in the cloud, which is often an issue for businesses with stringent policies against exposing sensitive information. So while AI can be beneficial, it has also created a complex data protection problem that can be a roadblock to AI adoption.

How does Intel’s approach to confidential computing, particularly at the silicon level, enhance data protection for AI applications?

Intel builds platforms and technologies that drive the convergence of AI and confidential computing, enabling customers to secure diverse AI workloads across the entire stack.

Confidential computing helps secure data while it is actively in use inside the processor and memory, enabling encrypted data to be processed in memory while lowering the risk of exposing it to the rest of the system through the use of a trusted execution environment (TEE). It also offers attestation, a process that cryptographically verifies that the TEE is genuine, launched correctly and configured as expected. Attestation gives stakeholders assurance that they are turning their sensitive data over to an authentic TEE configured with the correct software. Confidential computing should be used in conjunction with storage and network encryption to protect data across all its states: at rest, in transit and in use.
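
To make that sequence concrete, below is a minimal, runnable sketch of the attestation gate just described. The FakeTee class and every field in it are illustrative stand-ins, not an Intel API: in a real deployment the quote is signed by hardware-rooted keys and appraised by an attestation service.

```python
import hashlib
import secrets

# Known-good measurement of the approved TEE build, published by the workload owner
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-build").hexdigest()

class FakeTee:
    """Toy stand-in for a TEE; real evidence is signed by hardware-rooted keys."""
    def __init__(self, code: bytes):
        self.measurement = hashlib.sha256(code).hexdigest()

    def get_quote(self, nonce: bytes) -> dict:
        # Real attestation evidence also carries certificate chains and TCB status
        return {"measurement": self.measurement, "nonce": nonce.hex()}

def provision_secret(tee: FakeTee, secret: bytes) -> None:
    nonce = secrets.token_bytes(16)  # a fresh nonce prevents replay of old evidence
    quote = tee.get_quote(nonce)
    if quote["nonce"] != nonce.hex():
        raise RuntimeError("stale evidence")
    if quote["measurement"] != EXPECTED_MEASUREMENT:
        raise RuntimeError("TEE is not running the expected software")
    print("attestation passed; releasing secret to the TEE")

provision_secret(FakeTee(b"approved-enclave-build"), b"model-decryption-key")
```

The essential property is the ordering: sensitive material is handed over only after the evidence is verified as fresh and as matching a known-good measurement.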

Confidential AI is the application of confidential computing technology to AI use cases. It is designed to help protect the security and privacy of the AI model and the associated data. Confidential AI uses confidential computing principles and technologies to help protect data used to train LLMs, the output generated by these models and the proprietary models themselves while in use. Through rigorous isolation, encryption and attestation, confidential AI helps prevent malicious actors from accessing and exposing data, both inside and outside the chain of execution.
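
One pattern this enables is often called secure key release: training data or model weights stay encrypted everywhere except inside an attested TEE. Below is a minimal sketch assuming Python's cryptography package, with the attestation verdict supplied by a flow like the one sketched earlier.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The key is held by a key broker outside the host; weights stay encrypted at rest
key = Fernet.generate_key()
encrypted_weights = Fernet(key).encrypt(b"proprietary model weights")

def load_model(attestation_passed: bool) -> bytes:
    # In practice the verdict comes from appraising the TEE's attestation evidence
    if not attestation_passed:
        raise PermissionError("key withheld: environment failed attestation")
    # Plaintext weights then exist only inside the TEE's protected memory
    return Fernet(key).decrypt(encrypted_weights)

print(load_model(attestation_passed=True))
```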

How does confidential AI enable organizations to process large volumes of sensitive data while maintaining security and compliance?

Confidential AI helps customers increase the security and privacy of their AI deployments. It can be used to help protect sensitive or regulated data from a security breach and to strengthen compliance posture under regulations like HIPAA, GDPR or the new EU AI Act. And the object of protection isn't solely the data: confidential AI can also help protect valuable or proprietary AI models from theft or tampering. The attestation capability can be used to provide assurance that users are interacting with the model they expect, and not a modified version or an imposter.

Confidential AI can also enable new or better services across a range of use cases, even those that involve sensitive or regulated data that might otherwise give developers pause because of the risk of a breach or compliance violation. This could be personally identifiable information (PII), business proprietary data, confidential third-party data or a multi-company collaborative analysis. Confidential AI lets organizations put such data to work with more confidence, while also strengthening protection of their AI models against tampering or theft.

Can you elaborate on Intel’s collaborations with other technology leaders like Google Cloud, Microsoft, and Nvidia, and how these partnerships enhance the security of AI solutions?

Intel takes an open ecosystem approach that supports open source, open standards, open policy and open competition, creating a level playing field where innovation thrives without vendor lock-in. It also ensures the opportunities of AI are accessible to all.

Intel collaborates with technology leaders across the industry to deliver innovative ecosystem tools and solutions that will make using AI more secure, while helping businesses address critical privacy and regulatory concerns at scale. For example:

Google Cloud confidential VMs leverage Intel Trust Domain Extensions (Intel TDX) technology on 4th Gen Intel Xeon Scalable CPUs so customers can run their AI models and algorithms in a TEE (see the provisioning sketch after these examples).

Microsoft Azure confidential virtual machines based on Intel TDX are powered by 4th Gen Intel Xeon Scalable processors, enabling organizations to bring confidential workloads to the cloud without code changes to their applications.

Nvidia is collaborating with Intel to offer comprehensive attestation services for Nvidia H100 GPUs via Intel TDX and Intel Tiber Trust Services. These services help customers who want to deploy confidentiality-preserving AI solutions that meet elevated security and compliance needs and enable a more unified, easy-to-deploy attestation solution for confidential AI.
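
To illustrate the Google Cloud example referenced above, here is a hypothetical sketch of provisioning a TDX-backed confidential VM with the google-cloud-compute Python client. The machine type, source image and the confidential_instance_type field are assumptions to verify against current Google Cloud documentation and your client version.

```python
from google.cloud import compute_v1  # pip install google-cloud-compute

def create_tdx_vm(project: str, zone: str, name: str) -> None:
    """Request a confidential VM whose TEE is an Intel TDX trust domain."""
    instance = compute_v1.Instance(
        name=name,
        # TDX requires a supported machine family (assumed here: C3 on 4th Gen Xeon)
        machine_type=f"zones/{zone}/machineTypes/c3-standard-4",
        confidential_instance_config=compute_v1.ConfidentialInstanceConfig(
            confidential_instance_type="TDX",  # field present in newer client versions
        ),
        # Confidential VMs typically cannot live-migrate across hosts
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts",
                ),
            )
        ],
        network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
    )
    op = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    op.result()  # block until the create operation completes
```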

How do Intel’s attestation services, such as Intel Tiber Trust Services, support the integrity and security of confidential AI deployments?

Intel’s confidential AI technology combines proven solutions, such as Intel Trust Domain Extensions (Intel TDX), Intel Software Guard Extensions (Intel SGX) and most recently, independent attestation by Intel Tiber Trust Services, to help protect AI data and models, and verify the authenticity of assets and the computing environments where those assets are used.

The attestation service offered under Intel Tiber Trust Services provides a unified, independent assessment of TEE integrity and policy enforcement, along with audit records, anywhere Intel confidential computing is deployed, including multicloud, hybrid, on-premises and edge environments. It embodies zero trust principles by separating the assessment of the infrastructure's trustworthiness from the provider of that infrastructure, and it maintains independent, tamper-resistant audit logs to help with compliance.
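
As an illustration of how a workload might consume such a service, the sketch below submits TEE evidence to an attestation endpoint and decodes the signed verdict. The URL, header and JSON shape are hypothetical placeholders, not the documented Intel Tiber Trust Services API.

```python
import requests  # pip install requests
import jwt       # pip install PyJWT

ATTEST_URL = "https://trust-service.example.com/appraisal/v1/attest"  # hypothetical endpoint

def appraise(quote_b64: str, api_key: str) -> dict:
    # Submit hardware-signed evidence; the service returns a signed token (JWT)
    resp = requests.post(
        ATTEST_URL,
        headers={"x-api-key": api_key},  # placeholder auth scheme
        json={"quote": quote_b64},
        timeout=30,
    )
    resp.raise_for_status()
    token = resp.json()["token"]
    # A relying party should verify the token signature against the service's
    # published keys; the claims are decoded here only for inspection.
    return jwt.decode(token, options={"verify_signature": False})
```

Because the verdict comes from a party independent of the infrastructure provider, a relying party can enforce policy, for example refusing to release keys, without trusting the host.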

How should organizations integrate Intel’s confidential computing technologies into their AI infrastructures?

For businesses to trust AI tools, technology must exist to protect those tools from exposure of inputs, training data, generative models and proprietary algorithms. Intel's latest enhancements around confidential AI use confidential computing principles and technologies to help protect the data used to train LLMs, the output generated by these models and the proprietary models themselves while in use.

Intel software and tools remove code barriers, allow interoperability with existing technology investments, ease portability and create a model for developers to offer applications at scale. This gives modern organizations the flexibility to run workloads and process sensitive data on infrastructure that's trustworthy, and the freedom to scale across multiple environments.
