
The EU Artificial Intelligence Act (AI Act)

On December 8, 2023, the European Parliament and the Council reached a political agreement on the Artificial Intelligence Act (AI Act), making it the world’s first comprehensive legal framework for AI. Read this article for a high-level overview of the AI Act.


What is the Artificial Intelligence Act (AI Act)?

The Artificial Intelligence Act (AI Act) is the world’s first comprehensive regulation on AI technology. After two and a half years of technical and political negotiations, the European Parliament and the Council reached a landmark political agreement on the AI Act on December 8, 2023.

The AI Act was initially proposed on April 21, 2021, to ensure the ethical development and use of AI as the technology continues to evolve. While the text has not yet been finalized and will go through multiple stages of formal approval over the next few months, it establishes substantial guidelines on how AI systems should be developed and regulated in a trustworthy manner.

Here’s a high-level overview of the Act’s key provisions:

Risk-based approach

In response to the potential risks and harms that AI technology can cause, the AI Act categorizes AI systems into four risk levels. Each risk level corresponds to the amount of oversight required by the law.

The four levels ranked from highest to lowest are: unacceptable risk, high risk, limited risk, and minimal risk.
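To make the tiered structure concrete, here is a minimal sketch in Python of the four categories as an ordered mapping. The tier names come from the Act itself, but the example systems and the one-line obligations in the comments are illustrative assumptions, not legal classifications.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The AI Act's four risk tiers, ordered from least to most regulated."""
    MINIMAL = 1       # little to no additional oversight
    LIMITED = 2       # lighter obligations (e.g., transparency)
    HIGH = 3          # strict compliance requirements, impact assessments
    UNACCEPTABLE = 4  # banned outright in the EU

# Illustrative examples only -- these mappings are simplified assumptions,
# not determinations from the Act's final text.
example_systems = {
    "social scoring platform": RiskTier.UNACCEPTABLE,
    "AI-assisted medical triage tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in sorted(example_systems.items(), key=lambda kv: -kv[1]):
    print(f"{tier.name:>12}: {system}")
```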

Unacceptable risk

AI systems are categorized as unacceptable risk, or “banned practices,” if they pose a clear threat to citizens’ health, safety, or fundamental rights.

The AI Act specifies the prohibited uses of AI in the EU, including: 

  • Biometric categorization systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
  • Emotion recognition in the workplace and educational institutions;
  • Social scoring based on social behavior or personal characteristics;
  • AI systems that manipulate human behavior to circumvent people’s free will;
  • AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Because these systems exploit and manipulate human vulnerabilities, MEPs have jointly agreed that the use of AI for these purposes is strictly forbidden in the EU.

High risk

High-risk AI systems are those that can potentially cause significant harm or adversely affect fundamental rights. Examples include AI used in law enforcement, healthcare, and transportation.

AI systems categorized as high-risk must follow stricter guidelines and compliance requirements. For example, high-risk systems must undergo fundamental rights impact assessments before they are deployed to the market.

Limited risk

AI systems that carry limited risk can potentially cause a moderate level of harm to people or their fundamental rights. Although limited-risk systems face fewer restrictions, they must still follow appropriate standards and guidelines.

Minimal risk

Minimal-risk AI systems carry a low potential for harming people or affecting their fundamental rights. They require the least regulation and oversight of the four categories.

Transparency

The AI Act also places heavy emphasis on transparency. For example, developers of AI systems must maintain extensive documentation on their systems’ development and deployment lifecycle, from how a system is trained to how it makes decisions.

Transparency requirements don’t apply only to higher-risk AI systems, either. General-purpose AI (GPAI) systems, like ChatGPT, are also subject to the AI Act’s transparency requirements. While they may pose relatively less risk and harm to citizens, these systems, which can learn, adapt, and perform a variety of tasks, may also need to undergo risk assessments before deployment.

In addition, the AI Act will require users to be notified when they are using or viewing the output of AI systems. Users must be made aware that they are interacting with a chatbot or an emotion recognition system. Deep fakes and other AI-generated visual content must also be labeled as such.

By making AI systems traceable, i.e., documenting what data or content a system is trained on, how it works, and whether an output was generated by AI, users, developers, organizations, and regulators alike can evaluate a system’s risk level and identify potential biases or errors.
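As a concrete illustration of the notification and labeling requirements above, here is a minimal sketch of how a developer might attach a machine-readable AI disclosure to generated content. The field names and structure are hypothetical; the Act describes the obligation, not a specific format.

```python
import json
from datetime import datetime, timezone

def label_ai_output(content: str, system_name: str) -> str:
    """Wrap AI-generated content in a hypothetical disclosure envelope.

    The AI Act requires users to know when they are seeing AI output;
    the schema below is an illustrative assumption, not a mandated format.
    """
    return json.dumps({
        "content": content,
        "ai_generated": True,          # the core disclosure
        "generating_system": system_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })

print(label_ai_output("Hello! How can I help you today?", "example-chatbot"))
```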

Data quality and data governance 

To ensure the quality of AI technology, the AI Act mandates that the data used to train AI systems be accurate and up to date. It also encourages developers to use diverse datasets to mitigate potential biases and discriminatory decision-making.

In terms of data governance, the AI Act requires developers to uphold the data privacy standards of the General Data Protection Regulation (GDPR). The data used to train AI systems must be in compliance with GDPR rules and protect EU citizens’ personal information.
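As an illustration of what such a pre-training data check might look like in practice, here is a minimal sketch that screens training records for a documented lawful basis and recency before use. The record fields and the freshness threshold are assumptions for the example; neither the AI Act nor the GDPR prescribes this exact mechanism.

```python
from datetime import date, timedelta

MAX_AGE = timedelta(days=365)  # illustrative freshness threshold (assumption)

def usable_for_training(record: dict, today: date) -> bool:
    """Return True if a record passes illustrative quality/governance checks."""
    has_lawful_basis = record.get("lawful_basis") is not None  # GDPR Art. 6-style basis
    is_fresh = (today - record["collected_on"]) <= MAX_AGE     # accuracy / up-to-date
    return has_lawful_basis and is_fresh

records = [
    {"id": 1, "lawful_basis": "consent",  "collected_on": date(2023, 10, 1)},
    {"id": 2, "lawful_basis": None,       "collected_on": date(2023, 11, 5)},
    {"id": 3, "lawful_basis": "contract", "collected_on": date(2021, 1, 15)},
]

today = date(2023, 12, 8)
approved = [r["id"] for r in records if usable_for_training(r, today)]
print(approved)  # [1] -- record 2 lacks a lawful basis, record 3 is stale
```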

Accountability

The AI Act also emphasizes accountability in AI systems. The framework addresses it in various ways, including the roles and responsibilities that organizations, developers, users, and regulatory bodies have in relation to AI systems.

The AI Act also outlines how different parties will be held accountable for AI-related harms caused by their systems’ decisions and outputs. To mitigate these risks, it establishes an oversight structure that designates responsible parties for monitoring and ensuring compliance. More details will emerge in the coming months as the text is finalized.

Violations and enforcement

Depending on the infringement and the size of the company, organizations that violate the AI Act can face fines ranging from €7.5 million or 1.5% of global annual turnover to €35 million or 7% of global annual turnover.
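For a back-of-the-envelope sense of how those penalty tiers scale with company size, here is a minimal sketch. It assumes the fine is the higher of the fixed amount and the percentage of global annual turnover, as widely reported for the provisional agreement; the middle tier shown is an assumption not detailed in this article, and the final text may treat smaller companies differently.

```python
# Illustrative penalty tiers: (fixed amount in EUR, share of global turnover).
# The top and bottom tiers match the figures above; the middle tier and the
# "whichever is higher" rule are assumptions based on reporting, not final text.
FINE_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),
    "other_obligations":     (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the illustrative maximum fine for a given violation tier."""
    fixed, share = FINE_TIERS[violation]
    return max(fixed, share * global_turnover_eur)

# Example: a company with EUR 2 billion in global annual turnover.
print(f"EUR {max_fine('prohibited_practices', 2e9):,.0f}")  # EUR 140,000,000
```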

To enforce the AI Act, the European Commission will create a new AI Office to oversee its implementation and enforcement across the EU. An independent panel of scientific experts will also advise the AI Office by providing further assistance in evaluating the risks of AI systems.

The AI Act’s impact on future AI regulation

With the world’s first legal framework for AI technologies, the EU remains at the forefront of setting global standards for technology legislation. As it did with the GDPR, the EU has established what effective legislation for AI systems looks like for the rest of the world to follow.

With many other governments, including the U.S., UK, Brazil, and China, weighing their own AI regulations, the EU AI Act will significantly influence how those rules take shape country by country.

As the text of the AI Act gets finalized in the coming months, Ethyca is here to keep you up to date on the latest developments. If you have any questions about the AI Act or future AI regulations, schedule a free 15-minute call to speak with one of our privacy experts today.
