Political agreement on the EU's framework for artificial intelligence

The EU has been working on creating its own framework for artificial intelligence, known as the AI Act, since 2019. On Wednesday, December 6th, the stage was set for the final planned negotiation meeting between the legislative institutions of the EU.

There was considerable anticipation regarding whether the negotiations would be successfully concluded during this meeting. Without an agreement, there was a risk of significant delays in the adoption of the framework. This was undesirable for EU institutions that have invested considerable prestige in leading the regulation of AI globally. Read more about the most contentious issues leading up to the last negotiation meeting in this article.

After a marathon negotiation session, political agreement on the AI Act was finally reached on Friday, December 8th. There is little doubt that this is considered a significant political victory for all involved, and EU Commissioner for the Internal Market Thierry Breton went so far as to call the agreement historic.

In broad terms, the political agreement follows the main points of the Commission’s initial draft of the AI Act from 2021. The purpose remains to create a framework that ensures AI in Europe is safe and respects fundamental human rights and democracy, while also facilitating innovation and investment. The framework will be a regulation with a risk-based approach, categorizing all AI systems into four risk groups: systems with unacceptable risk, systems with high risk, systems with limited risk, and systems with minimal or no risk. Systems with unacceptable risk are prohibited, while systems involving high risk are allowed under strict regulation. For the other categories, the regulation is limited.

Below is a brief summary of the main substantive points of the political agreement and the changes from earlier drafts of the AI Act:

  • Scope of the Regulation: There is now agreement on a definition of what constitutes an “AI system.” The definition is intended to be future-proof and technology-neutral, following the approach of the updated OECD definition. The AI Act will not apply to AI systems used exclusively for defence and military purposes, for research and innovation, or for non-commercial purposes.
  • Prohibited AI systems (unacceptable risk): The list of AI systems considered to pose unacceptable risk, and therefore prohibited, has been expanded compared to the EU Commission’s original proposal. At the same time, exceptions have been introduced to the prohibition on real-time biometric identification in public spaces, the so-called “mass surveillance ban.”
  • General-purpose AI (GPAI): New rules are introduced for GPAI systems and models. All such systems and models will be subject to transparency requirements, including the preparation of technical documentation, compliance with copyright law, and the publication of detailed summaries of the training data used. In addition, special requirements are introduced for so-called ‘high-impact’ models, including measures to mitigate systemic risk, specific forms of testing, and reporting on serious incidents and energy efficiency.
  • Governance: National authorities will oversee the implementation of the AI Act at the national level, while at the European level, an AI Office will be established under the EU Commission. The AI Office will contribute to developing standards and testing procedures and will enforce the rules for GPAI systems and models. An AI Board consisting of representatives from member states will have a coordinating role and serve as an advisory body to the Commission. An advisory forum for business and academia will also be established to provide technical expertise.
  • Sanctions: Violations of the AI Act will be penalized with fines set at the higher of a fixed amount or a percentage of the company’s annual global revenue: 35 million euros or 7% for violations of the rules on prohibited AI systems, 15 million euros or 3% for violations of other obligations under the AI Act, and 7.5 million euros or 1.5% for supplying incorrect information. More proportionate caps are foreseen for SMEs and start-ups.
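
The “higher of” mechanism in the sanctions tiers can be illustrated with a short calculation (a simplified sketch for illustration only — the tier labels, function name, and example revenue figure are our own, and the general proportionate caps for SMEs are not modelled):

```python
# Fine tiers from the political agreement: (fixed cap in EUR, rate in per mille).
# Per-mille integers keep the arithmetic exact: 7% = 70/1000, 1.5% = 15/1000.
TIERS = {
    "prohibited_systems": (35_000_000, 70),   # EUR 35m or 7%
    "other_obligations": (15_000_000, 30),    # EUR 15m or 3%
    "incorrect_info": (7_500_000, 15),        # EUR 7.5m or 1.5%
}

def max_fine(annual_global_revenue_eur: int, tier: str) -> int:
    """Return the applicable cap: the higher of the fixed amount
    and the revenue-based amount for the given tier."""
    fixed_cap, per_mille = TIERS[tier]
    revenue_based = annual_global_revenue_eur * per_mille // 1000
    return max(fixed_cap, revenue_based)

# A company with EUR 2bn global revenue violating the prohibitions:
# 7% of 2bn = EUR 140m, which exceeds the EUR 35m floor.
print(max_fine(2_000_000_000, "prohibited_systems"))  # 140000000
```

For smaller companies the fixed amount dominates: at EUR 100m revenue, 7% is only EUR 7m, so the EUR 35m figure would apply instead.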

The next step is for the regulation text to be formally adopted by the European Parliament and the Council and published in the Official Journal. The framework will then apply in the EU two years after its entry into force, although some parts will take effect earlier: the prohibitions after six months and the GPAI rules after twelve months.

For further reading, refer to the press releases from the EU Commission, the European Parliament, the Council, and the joint press conference held on December 9th.