AI & Law #1: the EU AI Act unraveled

In this first article of our blog post series “AI and the Law: A Journey of Innovation”, we focus on the recent developments regarding the EU Artificial Intelligence Act (AI Act). While the proposed legislation has not yet been adopted, it has received the green light from the European Parliament, marking a crucial milestone in the regulation of the use of artificial intelligence in the EU. Below, we explore the key aspects of this important legal framework and its implications for the future of AI.

Who would be subject to the AI Act?

The AI Act applies to three key categories: (i) providers placing AI systems on the market or putting them into service in the Union, (ii) users of AI systems located in the Union, and (iii) providers and users of AI systems located outside the Union whose AI system outputs are used within the Union.

What is the aim of the AI Act?

The draft AI Act aims to create unified regulations for AI systems within the EU, taking a risk-based approach. It seeks to prohibit AI practices that pose unacceptable risks to individuals’ health and safety, while imposing specific requirements on high-risk AI systems. The legislation also introduces transparency rules for AI systems interacting with people. Additionally, the AI Act establishes mandatory obligations for operators, including pre-market conformity assessments and post-market monitoring plans to be carried out by the provider.

Non-compliance may result in significant fines, and governance systems will be implemented at both national and EU level, the latter through the European Artificial Intelligence Board.

What qualifies as an AI system?

In a similar fashion to other EU tech legislation, the proposed act provides a definition of an AI system that aims to be technology-neutral. Accordingly, AI systems are defined as software that, based on a set of human-defined objectives, produces outputs that impact the environments it interacts with. These outputs are generated using techniques such as machine learning, statistical analysis, logic-based reasoning, or knowledge-based approaches.

Adopting a risk-based approach

The rules of the AI Act follow a risk-based approach and impose obligations depending on the level of risk posed by the AI system.

  • Unacceptable risk:

The use of AI systems that pose an unacceptable level of risk is strictly prohibited. This includes any AI system that poses a threat to EU values or violates fundamental rights and which:

  • deploy harmful manipulative subliminal techniques;
  • exploit specific vulnerable groups, such as those with physical or mental disabilities;
  • are used by public authorities, or on their behalf, for social scoring purposes;
  • utilize real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, except in a limited number of cases.

An example would be an AI system for real-time biometric identification (like facial recognition) in public spaces.

  • High risk:

AI systems that present a significant risk to the health and safety or fundamental rights of individuals are allowed in the European Union, subject to compliance with mandatory safety requirements and pre-market conformity assessments. These AI systems are divided into two main categories: (i) AI systems intended to be used as a safety component of products or falling under EU health and safety harmonization legislation (e.g. toys, aviation, cars, medical devices, elevators), and (ii) other stand-alone AI systems deployed in areas listed in the AI Act (e.g. critical infrastructure, law enforcement, education, migration, administration of justice).

An example would be an AI system used in employment for the recruitment and selection of persons, for making decisions on promotion and termination, or for task allocation, monitoring, or evaluation of persons in work-related contractual relationships.

  • Limited risk:

AI systems falling under this category are required to meet minimum transparency requirements (e.g. flagging the use of an AI system) when (i) interacting with humans, (ii) detecting emotions or determining association with (social) categories based on biometric data, or (iii) generating or manipulating content.

In the aforementioned circumstances, providers need to disclose that the content is generated through automated means, enabling users to make an informed decision about whether or not to use the AI system.

An example would be an AI system used in customer service (e.g. a chatbot) that interacts with users.

  • Low or minimal risk:

AI systems that pose only a low or minimal risk are not subject to any additional legal obligations. However, providers of low-risk AI systems will be encouraged to adopt voluntary codes of conduct.

Mandatory safety requirements and conformity assessment of high-risk AI systems

High-risk AI systems must undergo a conformity assessment before being placed on the market and must consistently adhere to the requirements outlined in the EU AI Act. More specifically, providers of high-risk AI systems are required to:

  • implement a risk management system to identify and manage potential risks;
  • use appropriate and well-governed data for training AI models;
  • create technical documentation to demonstrate compliance with the AI Act;
  • keep automatic records of AI system events (“logs”) for traceability;
  • ensure transparency and provide information to users about the system’s output;
  • make sure that the AI system is overseen by natural persons to prevent risks to health, safety, or fundamental rights. Human oversight measures can either (i) be identified and built into the AI system by the provider before it is placed on the market or put into service, or (ii) be identified by the provider before the system is placed on the market or put into service and be implemented by the user; and
  • maintain accuracy, robustness, and cybersecurity with technical solutions (measures may include back-up or fail-safe plans and measures to prevent and control attacks).

Once compliant, the AI system receives a visible CE marking, allowing it to move freely within the internal market.

What are the consequences of non-compliance? 

In the event of non-compliance with the requirements of the AI Act, member states can impose administrative fines of up to EUR 30 000 000 or up to 6% of the offender’s global annual turnover, whichever is higher.

Looking ahead

With the approval of the European Parliament’s negotiating position on the AI Act, the draft will now proceed to the final phase of inter-institutional negotiations between the Council of the European Union, the European Parliament, and the European Commission. The goal of this trilogue is to reach common ground and approve a final version of the AI Act, which will then be subject to a two-year implementation period.

The proposed AI Act’s complexity may prove challenging for companies deploying AI tools in the EU, especially considering its impact on other sectors and its interplay with other EU legislation. Cresco is closely monitoring the developments surrounding the AI Act and is prepared to assist clients in staying ahead of the regulatory changes. Stay tuned for more insights in our upcoming articles.

For any specific concerns about compliance with the AI Act or if you wish to discuss anything covered in this article, don’t hesitate to contact us.

Ward Verwaeren, Senior Associate

Aida Kaloci, Associate

Emilie Van Heck, Associate

 
