General-Purpose AI Models (GPAI): Publication of the Code of Practice and Guidelines

Key Takeaways

In July 2025, the European Commission published the Code of Practice and Guidelines for General-Purpose AI (GPAI) models, with the GPAI-related provisions of the AI Act set to come into force on August 2, 2025.


In line with the requirements of the Regulation on Artificial Intelligence (AI Act), the European Commission released the Code of Practice and Guidelines for General-Purpose AI (GPAI) models in July 2025, shortly before the GPAI-related provisions of the Act come into effect on August 2.

As a reminder, the concept of a “general-purpose AI model” (or GPAI model) is defined in Article 3 as “an AI model (...) that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications (…)”.

This article provides a summary of the contents of the Code of Practice and of the Guidelines for general-purpose AI models.

1. The General-Purpose AI Code of Practice

The General-Purpose AI Code of Practice, as provided under Article 56 of the AI Act, was published by the European Commission on July 10, 2025. (1)

This document outlines the obligations that GPAI model providers must meet to comply with the AI Act. It is divided into three chapters.

The first two chapters, focusing on transparency and copyright, apply to all providers of GPAI models. They include the following obligations:

     - Provide technical documentation to the AI Office and to the national competent authorities upon request;
     - Provide relevant information to downstream providers seeking to integrate a GPAI model into their AI system or GPAI;
     - Produce a summary of the training data used;
     - Establish a copyright policy in line with applicable regulations.

The third chapter addresses safety and security and applies only to GPAI models deemed to present a systemic risk. It sets out the following additional obligations:

     - Evaluate state-of-the-art models;
     - Assess and mitigate risks throughout the model’s lifecycle;
     - Report serious incidents, including the corrective measures taken;
     - Implement robust cybersecurity measures.

GPAI models considered to present a systemic risk are those presumed to have a significant impact on public health, safety, public security, fundamental rights, or society at large. By default, this includes GPAI models whose cumulative training compute exceeds 10²⁵ FLOPs. Currently, it is estimated that around ten providers worldwide offer GPAI models that meet this threshold for systemic risk.

Adherence to the Code of Practice is voluntary; signing it allows AI providers to demonstrate compliance with the AI Act. Providers that choose not to adhere to the Code will be subject to a more burdensome procedure to demonstrate regulatory compliance, particularly with respect to Articles 53 and 55.

As of this writing, approximately twenty GPAI model providers have announced their adherence to the General-Purpose AI Models Code of Practice, including Amazon, Anthropic, Google, IBM, Microsoft, Mistral AI, and OpenAI.

2. Guidelines for General-Purpose AI Models

On July 18, 2025, the European Commission published the “Guidelines on the Scope of the Obligations for General-Purpose AI Models.” (2) These guidelines set out the Commission’s interpretation of the definitions and obligations that the AI Act imposes on GPAI model providers. Those obligations took effect on August 2, 2025.

The definition of a GPAI model is further specified using thresholds and criteria to determine whether an AI model qualifies as a GPAI model and, if so, whether the corresponding provisions of the AI Act apply.

Specifically, models trained using more than 10²³ FLOPs that are capable of generating linguistic outputs (in text or audio form), text-to-image outputs, or text-to-video outputs are presumed to be GPAI models. The assessment also takes into account the model size (i.e., the number of parameters) and the volume of training data.

Models that exceed this threshold but are domain-specific (e.g., gaming or weather forecasting models) are excluded from this definition if they are not capable of performing a wide range of distinct tasks.

If a system qualifies as a GPAI model, lifecycle-related obligations defined in the AI Act apply from the pre-training phase onward, including all subsequent phases. These obligations are the same as those listed in the Code of Practice. Notably, open-source GPAI models, released under a free and open license, are not required to provide technical documentation to the AI Office, national competent authorities, or downstream providers.

GPAI models whose cumulative training compute exceeds 10²⁵ FLOPs are generally classified as systemic-risk GPAI models. The additional obligations applicable to such models are the same as those outlined in the Code of Practice.

Providers are required to inform the Commission within two weeks if they expect to reach, or have reached, the 10²⁵ FLOPS threshold.

The method for calculating whether a general-purpose AI model falls under the systemic-risk category is detailed in an annex to the guidelines.
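As an illustration only, the kind of threshold test described above can be sketched using the widely used approximation of transformer training compute (roughly 6 FLOPs per parameter per training token). The thresholds below come from the AI Act and the guidelines, but the approximation formula, the example figures, and the function names are assumptions for this sketch, not the annex’s authoritative method:

```python
# Illustrative estimate of cumulative training compute, using the common
# approximation for transformer models: FLOPs ~= 6 x parameters x training tokens.
# Hypothetical sketch only; the authoritative calculation method is the one
# set out in the annex to the Commission's guidelines.

GPAI_THRESHOLD = 1e23           # presumption that the model is general-purpose
SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption of systemic risk (AI Act)

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def classify(parameters: float, training_tokens: float) -> str:
    """Map an estimated compute figure to the guidelines' presumption bands."""
    flops = estimated_training_flops(parameters, training_tokens)
    if flops > SYSTEMIC_RISK_THRESHOLD:
        return "GPAI model presumed to present systemic risk"
    if flops > GPAI_THRESHOLD:
        return "presumed GPAI model"
    return "below the GPAI presumption threshold"

# Example: a hypothetical 70-billion-parameter model trained on 15 trillion tokens
print(classify(70e9, 15e12))  # ~6.3e24 FLOPs -> "presumed GPAI model"
```

Note that these compute figures create only a presumption: as the guidelines indicate, a model above a threshold may still fall outside the relevant category if it does not meet the substantive criteria (for example, a domain-specific model incapable of a wide range of distinct tasks).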

It is specified that AI models used solely for research, development, or prototyping purposes prior to market release are not subject to these obligations.

The guidelines also provide criteria for identifying the provider of a general-purpose AI model.

A provider is defined as “a natural or legal person, public authority, agency, or any other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge”.

Accordingly, a company that develops and markets a GPAI model on the European market is considered the provider of the model.

If multiple entities are involved, for example if one company develops a model on behalf of another entity that subsequently places it on the European market, the entity placing the model on the market is considered the provider. Where a model is developed by or for a group of companies or a consortium, the provider may be the coordinator or the consortium itself.

* * * * * * * * * * * *


(1) The General-Purpose AI Code of Practice

(2) Guidelines on the scope of the obligations for general-purpose AI models established by Regulation (EU) 2024/1689 (AI Act). At this time, the guidelines are available in English only.

Bénédicte DELEPORTE
Avocat

Deleporte Wentz Avocat
www.dwavocat.com

July 2025