The EU Artificial Intelligence Act (the "AI Act") is a legal framework for the regulation of AI in the EU. While it became law on 1 August 2024, many of its provisions will not come into effect until 2025-2026.
Ahead of the majority of the AI Act's provisions coming into force, this article sets out how the AI Act will be enforced and the penalties for non-compliance with the legislation.
The AI Office was unveiled on 29 May 2024. It sits within the European Commission (the "Commission"). The AI Office undertakes the Commission’s function to supervise the use of AI systems and general-purpose AI models and take enforcement action against non-compliance.
In order to do this, the AI Office will play a constructive role in developing model contract terms, guidelines and templates, facilitating the drawing up of codes of practice (particularly in relation to general-purpose AI models; see our article here on the AI Office's consultation on the first general-purpose AI Code of Practice and the published first draft here), receiving reports of serious incidents, and collaborating closely with the competent bodies in Member States. The AI Office is also responsible for fostering the development and use of trustworthy AI across the EU. To that end, it will provide advice on best practices and access to AI sandboxes, as well as other European support structures for AI uptake (such as the European Digital Innovation Hubs). It will also support research and innovation activities in the fields of AI and robotics.
Crucially, the AI Office will have the exclusive power to monitor, supervise and enforce against providers of general-purpose AI models. These exclusive powers include the power to: request documentation and information; conduct evaluations of general-purpose AI models to assess compliance or investigate systemic risks; request access to the model, including its source code; and, where necessary and appropriate, request that providers take measures to comply with their obligations, implement mitigation measures, or restrict the making available on the market, withdraw or recall the model. Failure to comply with any of these powers could result in a fine (see the Penalties section below).
Notably, Article 75 also provides that where an AI system is based on a general-purpose AI model, and the model and the system are developed by the same provider, the AI Office will also have powers to monitor and supervise compliance of that AI system and will have all powers of a market surveillance authority (outlined under the National Competent Authorities section below).
The AI Office is supported by the AI Board (the "Board"), which will be composed of one representative per Member State. Article 66 sets out that it is the Board's role to advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. The Board can be thought of as playing a role similar to the European Data Protection Board ("EDPB") under the GDPR.
Its particular tasks, as set out in Article 66, include contributing to the coordination of the national competent authorities, gathering and sharing technical and regulatory expertise and best practices among Member States, advising on the implementation of the Act (particularly as regards the enforcement of the rules on general-purpose AI models), and issuing recommendations and written opinions on relevant matters.
An advisory forum will also be established to provide technical expertise and advise the Board and the Commission, and to contribute to their tasks under the Act.
The Commission will also be supported by a scientific panel of independent experts to support the enforcement activities under the Act.
National Competent Authorities ("NCAs") are responsible for implementing the AI Act on a national level.
Each Member State must, by 2 August 2025, designate at least one notifying authority and at least one market surveillance authority to act as NCAs in its jurisdiction.
Member States must ensure that their NCAs are provided with adequate technical, financial and human resources, and with infrastructure to fulfil their tasks effectively under the Act. By 2 August 2025 and thereafter once every two years, each Member State must report to the Commission the status of the financial and human resources of the NCAs, with an assessment of their adequacy. The NCAs may provide guidance and advice on the implementation of the Act.
The EDPB has suggested that each Member State's current data protection authority should be appointed as the market surveillance authority for high-risk AI systems that are likely to impact "natural persons' rights and freedoms with regard to the processing of personal data". Please see our article here for more information on the EDPB's statement.
Notified bodies are conformity assessment bodies notified in accordance with the Act and other relevant EU harmonisation legislation. They are responsible for performing the conformity assessment activities for high-risk AI systems, including testing, certification and inspection, according to the procedures set out in Article 43. They will be monitored by the notifying authorities referred to above.
Notified bodies must be independent of the provider of a high-risk AI system in relation to which they perform conformity assessment activities, of any other operator that has an economic interest in the high-risk AI systems assessed, and of any competitors of the provider.
| Breach | Maximum penalty |
| --- | --- |
| Non-compliance with the prohibition on certain AI practices. Note that while the provisions in relation to these prohibitions come into effect on 2 February 2025, the penalties will apply from 2 August 2025. | Whichever is higher of: €35 million; or 7% of total worldwide annual turnover for the preceding financial year. |
| Non-compliance of an AI system with any of the provisions related to operators or notified bodies (other than the prohibited AI practices), including the obligations of providers, authorised representatives, importers, distributors and deployers, the requirements for notified bodies, and the transparency obligations for certain AI systems. These penalties apply from 2 August 2025, while most obligations in relation to high-risk AI systems or in respect of transparency do not come into force until 2 August 2026. | Whichever is higher of: €15 million; or 3% of total worldwide annual turnover for the preceding financial year. |
| Supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in reply to a request. These penalties apply from 2 August 2025. | Whichever is higher of: €7.5 million; or 1% of total worldwide annual turnover for the preceding financial year. |
When deciding whether to impose a fine and the amount of the fine, all relevant circumstances of the specific situation must be taken into account, and the national competent authorities must also have regard to factors such as the nature, gravity and duration of the infringement and its consequences, the size and market share of the operator, and any fines already imposed on that operator for the same infringement by other authorities.
In the case of SMEs, each fine referred to above will be capped at the lower of the percentage or the fixed amount.
Article 101 sets out that providers of general-purpose AI models may be fined where the Commission finds that they intentionally or negligently: infringed the relevant provisions of the Act; failed to comply with a request for a document or for information, or supplied incorrect, incomplete or misleading information; failed to comply with a measure requested under Article 93; or failed to make access to the general-purpose AI model available to the Commission for the purpose of an evaluation.
Such an infringement can result in a fine of up to €15 million or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher.
While the provisions in relation to general-purpose AI will come into effect on 2 August 2025, the penalty provisions will not apply until 2 August 2026.
Given this complex network of regulatory bodies, each with broad powers, organisations should begin establishing an AI governance strategy and practices now to ensure that they are compliant with the law.