Understanding the AI Act and its compliance challenges

In this Help Net Security interview, David Dumont, Partner at Hunton Andrews Kurth, discusses the implications of the EU AI Act and how organizations can leverage existing GDPR frameworks while addressing new obligations such as conformity assessments and transparency requirements.

Dumont also outlines strategies for mitigating risks from national-level enforcement variations and third-party AI vendors.

The impact of the AI Act is often compared to that of the GDPR. Do you perceive any fundamental compliance challenges that businesses might not yet understand?

When developing our firm’s EU AI Act guide for in-house counsel, we identified a significant number of areas where organizations can leverage their existing GDPR experience and compliance efforts for their EU AI Act compliance journey. Like the GDPR, the AI Act sets forth, for example, accountability, data quality and management, governance, vendor diligence, training, risk assessment and transparency obligations concerning the development and use of AI systems. Organizations that have put in place a comprehensive GDPR compliance program will be able to build on their existing policies, procedures, notices and other compliance infrastructure to address their obligations under the EU AI Act in these areas.

That said, while there are significant synergies between GDPR and AI Act compliance, organizations must build some key elements of their AI Act compliance programs from scratch. For providers of high-risk AI systems, there are, for example, conformity assessment obligations that have no overlap with the GDPR and, hence, may be entirely new to the business.

Given that the AI Act does not regulate possible criminal liability for misuse of AI, how should companies prepare for varying national-level enforcement?

The AI Act introduces enforcement powers for national supervisory authorities that are harmonized at the EU level, including the power to impose significant administrative fines. That said, the Act also enables EU Member States to establish additional enforcement rules in their national laws. These national enforcement rules may include criminal liability for AI misuse.

It is important for organizations to closely monitor legal developments in the EU Member States where they plan to introduce or use AI systems, so that they are aware of any local deviations that may affect their liability and risk exposure. Ideally, the creation of additional local enforcement rules will be kept to a minimum, as such national rules may fragment the EU legal framework around AI, which could lead to legal uncertainty and hinder innovation.

Since the AI Act is still evolving, where do you anticipate the need for further clarifications from regulators or industry bodies?

The EU AI Act is the first comprehensive legal framework regulating the development and use of AI. As such, the Act introduces a significant number of new legal concepts, which are intentionally defined broadly to make the legislation more future-proof. As with the GDPR, further guidance and practical experience will be required to fully understand how these concepts should be interpreted in practice.

Article 96 of the AI Act identifies several areas where the European Commission is required to develop and issue guidelines to assist organizations in complying with the new rules. In fact, the European Commission recently published its first two sets of guidelines, on the definition of an AI system under the AI Act and on the prohibited AI practices that became applicable on February 2, 2025. Plenty more guidelines are in the pipeline, including on the classification of and requirements for high-risk AI systems, transparency requirements, the concept of substantial modification, and the interplay between the AI Act and existing EU product safety legislation.

In addition to guidelines, codes of practice and standards will likely play an important role in the AI Act’s practical implementation. The ongoing work of the EU AI Office on the code of practice for general-purpose AI models may, for example, have a significant influence on how the rules on general-purpose AI models are interpreted and applied in practice.

The AI Act requires transparency, especially for high-risk AI systems. However, what practical challenges do businesses face in meeting these requirements while protecting trade secrets and intellectual property?

There is a clear tension between the transparency obligations imposed on providers of certain AI systems under the AI Act and some of their rights and business interests, such as the protection of trade secrets and intellectual property. The EU legislator has expressly recognized this tension, as multiple provisions of the AI Act state that transparency obligations are without prejudice to intellectual property rights. For example, Article 53 of the AI Act, which requires providers of general-purpose AI models to provide certain information to organizations that wish to integrate the model downstream, explicitly calls out the need to observe and protect intellectual property rights and confidential business information or trade secrets.

In practice, a good-faith effort from all parties will be required to strike the appropriate balance between the need for transparency to ensure safe, reliable and trustworthy AI and the protection of the interests of providers that invest significant resources in AI development. Supervisory authorities will have to be realistic and work with AI providers to find that balance.

Many companies rely on third-party AI vendors. How should in-house lawyers assess and mitigate risks when AI is sourced externally rather than developed in-house?

It is important for in-house lawyers to conduct appropriate diligence before third-party AI systems are rolled out within the organization. The AI Act imposes a number of obligations on AI system vendors that will help in-house lawyers carry out this diligence. Under Article 13 of the AI Act, vendors of high-risk AI systems are, for example, required to provide sufficient information to (business) deployers to allow them to understand the high-risk AI system’s operation and interpret its output.

In addition, vendors will be required to draft detailed technical instructions for the high-risk AI system. These instructions can be a valuable tool for in-house lawyers in determining the appropriate internal measures that should be put in place to manage the risks related to the use of the AI system concerned.

Organizations should also consider updating their existing vendor screening procedures in light of the AI Act. For example, vendor questionnaires are a valuable tool for obtaining information from vendors at an early stage of negotiations and for gauging their level of maturity around AI compliance. In addition, through these questionnaires, organizations deploying an AI system can request targeted information that they will need to comply with their own obligations under the AI Act, such as the information needed to carry out a fundamental rights impact assessment and/or a data protection impact assessment (as applicable).