The EU has finally reached a landmark agreement on the Artificial Intelligence Act, designed to ensure that AI prioritizes safety, fundamental rights, and democracy while fostering business growth. The co-legislators have agreed on a comprehensive ban on AI applications that threaten fundamental freedoms and societal integrity.
Prohibited AI Practices
- Biometric categorization using sensitive characteristics like political beliefs, sexual orientation, and race.
- Untargeted scraping of facial images for facial recognition databases.
- Emotion recognition in workplaces and educational institutions.
- Social credit scoring based on personal characteristics.
- AI systems manipulating human behavior to bypass free will.
- Exploitative use of AI targeting vulnerabilities due to age, disability, or socio-economic status.
High-Risk AI
- Biometric identification and categorization: AI systems used for biometric identification (e.g., facial recognition) in publicly accessible spaces for law enforcement purposes.
- Critical infrastructure: AI systems used in critical infrastructure sectors such as transport, energy, healthcare, and parts of the public sector that could put the life, health, or safety of individuals at risk.
- Education and vocational training: AI systems used for educational admission, evaluation, or learning guidance.
- Safety components of products: AI systems used as safety components of products (e.g., AI in autonomous vehicles) that are required to meet safety standards.
- Recruitment and access to essential services: AI systems used to evaluate individuals (e.g., credit scoring, job recruitment) where the outcome can significantly affect a person's access to education, employment, healthcare, or essential private and public services.
- Law enforcement: AI systems used by law enforcement for the assessment of reliability or veracity of evidence.
- Migration and asylum: AI systems used for determining individuals’ legal status and rights in the context of migration, asylum, and border control.
Obligations for High-Risk AI Systems
The AI Act introduces mandatory fundamental rights impact assessments and conformity assessment for high-risk AI systems.
General Purpose AI
The AI Act introduces strict rules for versatile AI systems such as general-purpose AI (GPAI) and foundation models.
- Transparency is a key requirement: providers must draw up technical documentation, provide details about the training data, and comply with European copyright law.
- GPAI systems posing high risks face additional obligations, including risk assessment, model evaluation, cybersecurity measures, and incident reporting.
- Generative AI: users must be informed when they are interacting with an AI chatbot.
Sanctions and Enforcement
Non-compliance with the AI Act can result in fines ranging from 1.5% to 7% of global turnover, depending on the infringement and the size of the company, while proportionate caps on fines are foreseen for SMEs and start-ups.
The finalized text must now undergo formal adoption by both the European Parliament and the Council to become EU law. The AI Act should apply two years after its entry into force.
Olivier Van Raemdonck, Partner
Ward Verwaeren, Senior Associate
Aida Kaloci, Associate