AI Act
Purpose: The AI Act regulates AI through a risk-based approach, categorizing AI systems into four levels of risk: unacceptable, high, limited, and minimal. High-risk applications are subject to strict requirements, including transparency, data quality, and human oversight, to ensure safety and compliance. Lower-risk applications face fewer obligations, promoting innovation while safeguarding fundamental rights and public interests. The Act also regulates General Purpose AI (“GPAI”) models.
Scope: The AI Act is especially relevant for developers of AI, businesses and public bodies that have AI systems developed for their own use, and anyone who makes use of AI in the course of a professional activity. It sets out obligations that these stakeholders must meet to ensure their AI systems are safe, ethical, and compliant with EU regulations.
It is important to note that deployers of GPAI can become providers of AI. This shift entails additional responsibilities beyond those associated with merely using the technology.
Core obligations: Obligations under the Act vary depending on one’s role and the risk associated with the AI system or GPAI model. They can include conducting thorough risk assessments, ensuring transparency and explainability of AI systems, adhering to high standards of data governance, and maintaining continuous monitoring and reporting of AI system performance.
- EU Artificial Intelligence Act:
Regulation – EU – 2024/1689 – EN – EUR-Lex
The AI Act Explorer | EU Artificial Intelligence Act
- The AI Act entered into force in August 2024. At this stage, none of the Act’s requirements apply; they will begin to apply gradually over time.
- The Act is EEA relevant and will be implemented in Norway; it is not yet known when this will happen.