After nearly three years of legislative work, the European Council approved the AI Act on May 21, 2024, a regulation governing artificial intelligence. On August 1, 2024, 20 days after its publication in the EU Official Journal, the AI Act enters into force, making the regulation of AI in the European Union a (long-awaited) reality.
The EU Regulation on Artificial Intelligence (AI Act) is one of the first acts in the world to comprehensively regulate the use and placing on the market of artificial intelligence. The new rules should interest both technology companies and entities that distribute or use AI in their day-to-day operations. The act aims to ensure the development and promotion of safe artificial intelligence and to minimize the risk of abuse in the use and development of this technology.
In simplified terms, the AI Act primarily concerns providers, importers, distributors, and users of artificial intelligence. It also extends to AI providers and users outside the EU if the outcomes of the AI's operations are utilized within the Union.
An AI system has been defined as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
The definition encompasses many ambiguous terms, which could lead to practical challenges in determining whether a particular system qualifies as artificial intelligence. As a result, the scope of the regulations may be significantly broader than it initially seems. To gain a clearer understanding of the definition, guidance will likely need to be sought from decisions made by national and EU authorities, who will interpret it in the context of ongoing proceedings.
The AI Act establishes a classification system for artificial intelligence based on the level of risk associated with its use. Properly assigning an AI system to the correct category is essential. Depending on the classification, businesses will be required to meet specific obligations to introduce, distribute, or utilize the system. The higher the level of risk, the more stringent the obligations regarding the deployment and use of AI.
For the highest-risk systems, there may even be a prohibition on their use.
Below are the categories along with examples of artificial intelligence systems:
Low (Limited) Risk Category:
Chatbots;
Systems manipulating audio/video content ("deepfake" systems);
High-Risk Category:
Some systems used in recruitment;
Employee surveillance systems;
Systems used for creditworthiness assessment;
Unacceptable Risk Category (Prohibited AI practices):
Systems that use subliminal techniques (manipulation) to modify or distort human behavior, e.g. in advertising or sales;
Systems that exploit a person's vulnerabilities (e.g. due to age) to distort their behavior;
Systems that evaluate human behavior (so-called social scoring).
The above list provides only an illustrative overview of the classification of AI systems. Given the broad definition of artificial intelligence, businesses will need to assess on a case-by-case basis whether their algorithms fall under the regulation and, if so, which category they belong to.
For the low (limited) risk category, providers must, among other things, ensure an appropriate level of transparency: individuals using these AI systems must be informed that they are interacting with artificial intelligence.
To use AI within high-risk categories, numerous requirements must be met. These include conducting a thorough analysis and assessment of anticipated risks and implementing a comprehensive risk management system associated with the AI.
For the unacceptable risk category, there are exceptions that allow the use of artificial intelligence, primarily concerning law enforcement by state authorities.
The AI Act stipulates a range of penalties for violations of the regulation.
For engaging in prohibited practices, the penalties are:
Up to 35 000 000 EUR or
Up to 7% of the total worldwide annual turnover, if the offender is an undertaking.
For violating other provisions, the penalties are:
Up to 15 000 000 EUR or
Up to 3% of the total worldwide annual turnover, if the offender is an undertaking.
Additionally, penalties are set for providing false or misleading information upon request by an authority:
Up to 7 500 000 EUR or
Up to 1% of the total worldwide annual turnover, if the offender is an undertaking.
In each case, the upper limit of the sanction is the higher of the two amounts (the fixed sum or the percentage of worldwide annual turnover).
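As a purely illustrative sketch of how this cap works (the turnover figure and the function are our own example, not taken from the regulation, and this is not legal advice), the upper limit can be thought of as the higher of the fixed amount and the percentage of turnover:

```python
# Illustrative sketch only (not legal advice): the upper limit of an AI Act fine
# for an undertaking is the higher of a fixed amount and a share of its total
# worldwide annual turnover. The turnover figure below is hypothetical.

def fine_upper_limit(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the given share of turnover."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

turnover = 600_000_000  # hypothetical total worldwide annual turnover in EUR

# Prohibited practices: up to EUR 35 000 000 or 7% of turnover.
print(fine_upper_limit(35_000_000, 0.07, turnover))  # 42000000.0 -> the 7% figure applies

# Other infringements: up to EUR 15 000 000 or 3% of turnover.
print(fine_upper_limit(15_000_000, 0.03, turnover))  # 18000000.0 -> the 3% figure applies
```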
Given the scope of the new regulations and the associated obligations, the AI Act will be introduced gradually. The regulation comes into force on August 1, 2024, 20 days after its publication in the EU Official Journal. Importantly, not all provisions will be immediately effective: businesses and EU member states will need to adapt to the new regulation during a transitional period.
The AI Act is set to be fully applicable 24 months after its entry into force. However, there are exceptions to this general rule. Six months after the regulation enters into force, the general provisions and those concerning prohibited practices will take effect. After 12 months, another broad set of provisions, including those concerning penalties, will come into effect. The final group of provisions, relating to high-risk AI systems listed in Article 6(1) and the associated obligations, will become effective after 36 months.
As a regulation, the AI Act will be binding in its entirety and directly applicable in all EU member states, eliminating the need for additional implementation into national legal systems. State authorities, businesses, and individuals will be required to adhere to its provisions from the moment it becomes effective.
The timeframe for adapting to the initial obligations is brief: just six months. The regulations are comprehensive and necessitate a thorough analysis before being implemented within an organization. Therefore, we strongly encourage you to assess now whether and to what extent your organization will need to comply with the AI Act.
We offer comprehensive legal assistance on issues related to artificial intelligence.
We will assist with, among other things:
Counsel, attorney-at-law, PwC Poland