The EU AI Act introduces obligations for providers of general-purpose AI (GPAI) models, including those with systemic risk, defining GPAI models as AI models that display significant generality, are capable of performing a wide range of distinct tasks, and can be integrated into downstream systems. Providers must adhere to documentation, cooperation, and risk mitigation requirements, with additional obligations for models with systemic risk, such as adversarial testing and cybersecurity measures. The Act's GPAI obligations apply from August 2, 2025, with a grace period for fines until August 2, 2026, and the European Commission holds exclusive enforcement powers, including fines of up to 3% of annual worldwide turnover or €15 million, whichever is higher. A scientific panel will support compliance monitoring, and providers can demonstrate compliance through codes of practice and harmonized standards.
The EU Artificial Intelligence Act (the AI Act) is set to become a landmark regulation governing artificial intelligence (AI). It introduces requirements and responsibilities for providers (and those treated as providers) of general-purpose AI (GPAI) models. With respect to GPAI models, a provider is a natural or legal person or body that develops a GPAI model, or has one developed, and places that model on the market under its own name or trademark, whether for payment or free of charge. This includes organizations that outsource the development of a GPAI model and then place it on the market. The concept of GPAI models did not appear in the original text of the AI Act when it was first proposed in 2021; articles, and eventually an entire chapter, on GPAI models were added after the proliferation of models such as OpenAI's GPT-3, and they generated considerable debate during the negotiations for the Act. In this Insight article, Katie Hewson and Eva Lu, from Stephenson Harwood LLP, examine the definition of GPAI models under the Act, as well as the sub-category of GPAI models with systemic risk and the obligations of providers of these models.
A GPAI model is defined as an 'AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development, or prototyping activities before they are placed on the market.'
From this definition, the key characteristics of these models include:
- training on a large amount of data, for example using self-supervision at scale;
- significant generality;
- the capability to competently perform a wide range of distinct tasks, regardless of how the model is placed on the market; and
- the capability to be integrated into a variety of downstream systems or applications.
Prominent examples of such models include GPT-4, DALL-E, Google BERT, and Midjourney 5.1.
Models that are modified or fine-tuned into new models are also likely to constitute separate GPAI models. More difficult definitional questions are likely to arise as large language models (LLMs) are replaced by small language models that may not display 'significant generality' and that perform a narrower range of tasks in specific contexts or applications.
In addition, a GPAI model will be classified as a GPAI model with systemic risk if it meets one of the requirements in Article 51(1), namely:
- it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; or
- based on a decision of the European Commission, taken ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to high-impact capabilities.
Under Article 51(2), a GPAI model is presumed to have high-impact capabilities where the cumulative amount of computation used for its training, measured in floating point operations (FLOPs), is greater than 10^25.
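To make the training-compute presumption concrete, the following minimal Python sketch expresses the Article 51(2) threshold as a simple check. The threshold figure comes from the Act; the function and its inputs are hypothetical, and actual classification under Article 51 rests with the European Commission, not this arithmetic alone.

```python
# Illustrative only: the Article 51(2) presumption expressed as a check.
FLOP_THRESHOLD = 10**25  # cumulative training compute triggering the presumption

def presumed_high_impact(training_flops: float) -> bool:
    """True if cumulative training compute exceeds the Article 51(2)
    threshold, so the model is presumed to have high-impact capabilities."""
    return training_flops > FLOP_THRESHOLD

print(presumed_high_impact(2.1e25))  # True: within the presumption
print(presumed_high_impact(3.0e24))  # False: below the threshold
```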
Systemic risk is defined under the Act as 'a risk that is specific to the high-impact capabilities of GPAI models, having a significant impact on the EU market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.'
The systemic risks identified include actual or reasonably foreseeable negative effects in relation to major accidents and disruptions of critical sectors; serious consequences for public health and safety; actual or reasonably foreseeable negative effects on democratic processes and on public and economic security; and the dissemination of illegal, false, or discriminatory content.
'High-impact capabilities' is defined under the Act to mean 'capabilities that match or exceed the capabilities recorded in the most advanced GPAI models.'
The provider of a GPAI model can present arguments that the model should not be classified as a GPAI model with systemic risk, but the European Commission will make the final determination. The European Commission will also publish a list of GPAI models with systemic risk, without prejudice to the need to observe and protect intellectual property rights and confidential business information or trade secrets under EU laws.
It is worth noting that while GPAI models form a separate category from the risk-based classification of AI systems under the AI Act, a GPAI model, once integrated into an AI system, will be regulated according to the risk of that AI system, for example as prohibited, high-risk, or subject to specific transparency obligations.
Specifically, under Article 25(4), as part of the AI value chain of a high-risk AI system, providers of GPAI models will need to assist and enable the provider of the high-risk AI system to fully comply with the obligations of the Act.
Further, under Article 50(2), providers of AI systems, including GPAI systems, generating synthetic audio, image, video, or text content must ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.
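As a purely illustrative example of what machine-readable marking can look like at a technical level, the sketch below embeds a provenance flag in a PNG text chunk using the Pillow library. The metadata keys are hypothetical, and a plain text chunk is easily stripped, so this alone would not satisfy Article 50(2), which expects technical solutions to be effective, interoperable, robust, and reliable; industry approaches such as C2PA-style content credentials are more likely candidates.

```python
# Minimal sketch: embed and detect a machine-readable "AI-generated" flag
# in a PNG text chunk with Pillow. Keys and values below are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str) -> None:
    """Save a copy of a generated image with a machine-readable provenance
    tag (out_path should end in .png)."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")             # hypothetical key
    metadata.add_text("generator", "example-gpai-model")  # hypothetical value
    image.save(out_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Detect the tag: PNG text chunks are exposed via the .text mapping."""
    return Image.open(path).text.get("ai_generated") == "true"
```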
A GPAI system is defined under the Act as 'an AI system which is based on a GPAI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.'
The obligations of providers of GPAI models are set out in Article 53 of the AI Act, which are to:
- draw up and keep up-to-date technical documentation of the model, including its training and testing process and the results of its evaluation, to be provided to the AI Office and national competent authorities upon request;
- draw up, keep up-to-date, and make available information and documentation to providers of AI systems who intend to integrate the GPAI model into their AI systems;
- put in place a policy to comply with EU copyright law, in particular to identify and comply with any reservation of rights expressed under Article 4(3) of Directive (EU) 2019/790; and
- draw up and make publicly available a sufficiently detailed summary of the content used for training the model, according to a template provided by the AI Office.
The first two documentation obligations do not apply to GPAI models (other than those with systemic risk) that are released under a free and open-source license that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available.
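For illustration only, this exemption can be read as a conjunction of conditions, as in the hypothetical Python sketch below; all field names are invented, and this is a reading aid rather than a substitute for legal analysis.

```python
# A rough sketch, not legal advice: the open-source documentation exemption
# read as a conjunction of conditions. All field names are invented.
from dataclasses import dataclass

@dataclass
class GpaiRelease:
    has_systemic_risk: bool
    open_license: bool  # license allows access, use, modification, distribution
    weights_public: bool
    architecture_info_public: bool
    usage_info_public: bool

def documentation_exemption_applies(m: GpaiRelease) -> bool:
    """True if the first two Article 53 documentation obligations are waived."""
    return (not m.has_systemic_risk and m.open_license and m.weights_public
            and m.architecture_info_public and m.usage_info_public)
```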
Additional obligations on authorized representatives of providers of GPAI models are set out in Article 54.
In addition to the obligations in Articles 53 and 54 of the Act, providers of GPAI models with systemic risk must also comply with Article 55, which targets the specific risks associated with these models. The obligations are to:
- perform model evaluation in accordance with standardized protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing with a view to identifying and mitigating systemic risks;
- assess and mitigate possible systemic risks at the EU level, including their sources;
- keep track of, document, and report without undue delay to the AI Office (and, as appropriate, to national competent authorities) relevant information about serious incidents and possible corrective measures; and
- ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.
Obligations for providers of GPAI models come into force on August 2, 2025. Providers of GPAI models that were placed on the market before that date must take the necessary steps to comply with the Act by August 2, 2027. There is also a one-year grace period for fines: the European Commission cannot impose fines on providers of GPAI models until August 2, 2026.
Unlike other AI systems under the AI Act, under Article 88 the European Commission (via the newly formed AI Office) has exclusive powers to supervise and enforce the obligations of providers of GPAI models. The AI Office may take the necessary actions to monitor the effective implementation of and compliance with the Act by providers of GPAI models, including:
- requesting documentation and information from providers (Article 91);
- conducting evaluations of GPAI models (Article 92); and
- requesting providers to take measures, including implementing risk mitigations or restricting, withdrawing, or recalling a model from the market (Article 93).
A scientific panel of independent experts will also be formed to support the monitoring activities and provide qualified alerts to the AI Office.
The European Commission may impose fines on providers of GPAI models up to 3% of their annual total worldwide turnover in the preceding financial year or €15 million, whichever is higher.
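In other words, the cap is the higher of the two figures. The trivial Python sketch below illustrates the arithmetic; the function name is hypothetical.

```python
# Illustrative arithmetic only: the maximum fine is the higher of 3% of
# annual total worldwide turnover and EUR 15 million.
def max_gpai_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    return max(0.03 * annual_worldwide_turnover_eur, 15_000_000.0)

print(max_gpai_fine_eur(2_000_000_000))  # 60,000,000.0 (3% of EUR 2bn exceeds EUR 15m)
print(max_gpai_fine_eur(100_000_000))    # 15,000,000.0 (3% of EUR 100m is below the floor)
```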
Providers of GPAI models can rely on codes of practice and harmonized standards to demonstrate compliance with obligations under the Act. On July 30, 2024, the AI Office opened a call for expression of interest to participate in the drawing up of the first GPAI Code of Practice as well as a multi-stakeholder consultation on trustworthy GPAI models under the Act.