22.08.2024 Intellectual property

The AI Act comes into effect


On 1 August, one of the most eagerly awaited pieces of European legislation came into force – the AI Act. After almost three years of work, the European Union has finally seen the world’s first comprehensive legislation on artificial intelligence. The law takes the form of a regulation, which means that its provisions are directly applicable in all Member States.

There is no doubt that the AI Act is the first piece of legislation anywhere to regulate artificial intelligence (AI) systems so comprehensively. The new rules cover, among other things, the placing on the market and the use of systems based on artificial intelligence. The purpose of the new regulation[1] is to ensure a safe legal environment for the development and promotion of artificial intelligence and to minimise the risk of abuses at the stage of developing or using AI.

What is artificial intelligence?

According to the definition in the Regulation, an AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

As you can see, the definition in the AI Act contains a number of vague phrases that may make it genuinely difficult to determine whether a particular solution qualifies as AI. Phrases such as “varying levels of autonomy” and “may exhibit adaptiveness” carry a certain risk (or perhaps opportunity) of the regulation being applied more broadly than might appear at first glance. In theory, even a system with a minimal level of autonomy, requiring significant human input or involvement, and lacking ‘machine learning’ characteristics could be considered an AI system.

However, as is often the case, the actual scope of this definition will be determined by practice, the case law of European and national courts and the decisions of the competent authorities.

Who should be interested in the AI Act?

Under the new regulation, the AI Act will apply to a fairly wide range of entities, including providers, importers, distributors and users (deployers) of AI systems. In the case of providers, the AI Act will also apply to entities established outside the EU if the output produced by the AI system is used in the EU.

Article 2(3) contains exemptions from the application of the Regulation. For example, the AI Act will not apply to AI systems placed on the market, put into service or used exclusively for military, defence or national security purposes, nor to systems that are not placed on the market or put into service in the EU but whose output is used in the EU exclusively for those purposes.

Risk classification – the heart of the regulation

The most important area regulated by the new regulation, and its backbone, is the classification of artificial intelligence systems according to the level of risk associated with their use. This matters because the obligations and responsibilities of entrepreneurs vary depending on the risk class into which a particular AI system falls. Put simply, the higher the level of risk, the more obligations an entrepreneur must fulfil in order to market or use an artificial intelligence system.

The AI Act distinguishes four categories of AI systems, based on the risks they may pose to fundamental rights and freedoms (a simplified code sketch of the tiering follows the list):

  1. low (minimal) risk – including in particular AI-enabled video games and spam filters;
  2. limited (reduced) risk – AI systems whose providers must be transparent and inform users that they are interacting with AI, such as chatbots and deepfake systems;
  3. high risk – AI systems used in areas critical to life and health, such as medicine, transport and employment, which may adversely affect safety and fundamental rights, e.g. biometric identification systems and systems used for the management and operation of critical infrastructure, education and training (recruitment and assessment systems), employment and human resources, access to services (credit scoring systems), law enforcement (systems assessing the risk that a person will commit a crime, assessing emotional states, detecting deepfakes or assessing the value of evidence) and migration management; systems in this category must comply with strict requirements, including transparency and oversight;
  4. unacceptable risk – systems banned because of potentially dangerous applications: systems that manipulate people through subliminal techniques acting on the subconscious or that exploit the vulnerabilities of specific groups, such as children or people with disabilities; systems for real-time remote biometric identification in public spaces for law enforcement purposes; and social scoring systems.
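
For teams that build or procure software, the tiering logic can be pictured as a simple lookup: the tier determines the bundle of duties. The sketch below is a loose illustration of that idea in Python – the enum names, example values and duty summaries are our own simplification, not terminology or classifications from the Regulation:

  from enum import Enum

  class RiskTier(Enum):
      """Simplified model of the AI Act's four risk tiers (illustrative only)."""
      MINIMAL = 1       # e.g. AI in video games, spam filters
      LIMITED = 2       # transparency duties, e.g. chatbots, deepfakes
      HIGH = 3          # Annex III areas, e.g. credit scoring, recruitment
      UNACCEPTABLE = 4  # prohibited, e.g. social scoring, subliminal manipulation

  # Rough summary of how duties escalate with the tier; real classification
  # requires legal analysis of Article 5 and Annex III, not a lookup table.
  DUTIES = {
      RiskTier.MINIMAL: "no specific obligations",
      RiskTier.LIMITED: "transparency and user-information duties",
      RiskTier.HIGH: "conformity assessment, registration, CE marking, monitoring",
      RiskTier.UNACCEPTABLE: "placing on the market is prohibited",
  }

  print(DUTIES[RiskTier.HIGH])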

High-risk systems

Not surprisingly, a large part of the provisions of the new Regulation is devoted to high-risk AI systems, which are classified in detail in Annex III of the Regulation. The purpose of such detailed regulation is to ensure that high-risk AI systems can be developed, marketed and used in a safe and responsible manner.

Because of the potential risk that such systems may pose, the Regulation sets out a number of requirements that AI systems classified as high-risk must meet in order to be placed on the market.

In particular, such systems must be subject to a rigorous conformity assessment process that includes both technical testing and audits of compliance with legal and ethical requirements. It is also necessary to ensure that such systems operate in a transparent manner.

Once placed on the market, the performance of high-risk systems must be continuously monitored to identify and minimise potential risks, and any incidents or irregularities must be reported to the regulatory authorities.

It is worth noting that, for high-risk systems, the Regulation introduces an obligation to register them in a special EU database, to draw up a declaration of conformity and to affix the CE marking.

New obligations for entrepreneurs

As I mentioned earlier, the new regulation is particularly aimed at entrepreneurs involved in the process of developing, marketing and using artificial intelligence systems. The AI Act imposes a number of obligations on different categories of entrepreneurs in the supply chain.

Providers, who play the key role in developing AI systems or models, are responsible for ensuring that a system complies with the requirements of the Regulation. Providers must carry out risk assessments, provide adequate documentation for their systems and ensure that those systems are secure and transparent. For high-risk systems, providers will also be responsible for carrying out regular compliance audits and keeping the systems they develop up to date.

Distributors and importers, in turn, are responsible for ensuring that the AI-based systems they place on the market meet all legal requirements. In the event of non-compliance, these operators can be held liable in a similar way to providers.

Users (deployers) of AI systems, for their part, are obliged to monitor the systems they use and to report any problems with their operation. Users will also have to follow providers’ guidelines and ensure that systems are used for their intended purpose.

Risk Management System

The provisions of the Regulation require the establishment and implementation of a risk management system for high-risk AI systems. This system is understood as a continuous, iterative (repeatable) process carried out throughout the life cycle of the system, including:

  • identification and analysis of known and foreseeable risks,
  • evaluation of other risks that may arise, based on analysis of data gathered from the post-market monitoring system,
  • adoption of appropriate risk management measures.

Risk management measures are primarily intended to eliminate or mitigate the identified and assessed risks, either through appropriate design of the AI system and its development process or, where a risk cannot be eliminated, through measures to mitigate and control it.
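
Read with an engineer’s eye, the risk management system described above is a loop, not a one-off checklist. The following is a minimal sketch of that iterative shape, using hypothetical data structures of our own (the Regulation prescribes only the process, not any code):

  from dataclasses import dataclass, field

  @dataclass
  class Risk:
      description: str
      eliminable_by_design: bool   # can it be removed by redesigning the system?
      controlled: bool = False

  @dataclass
  class HighRiskSystem:
      name: str
      risks: list = field(default_factory=list)

  def risk_management_cycle(system, post_market_findings):
      """One pass of the continuous process: identify, evaluate, treat."""
      # 1.-2. known/foreseeable risks plus those surfaced by post-market monitoring
      system.risks.extend(post_market_findings)
      # 3. eliminate by design where possible, otherwise mitigate and control
      for risk in system.risks:
          if risk.eliminable_by_design:
              print(f"[{system.name}] redesign to eliminate: {risk.description}")
          else:
              risk.controlled = True
              print(f"[{system.name}] mitigation and controls applied: {risk.description}")

  # The cycle repeats throughout the system's life cycle
  screener = HighRiskSystem("CV screening tool", [Risk("bias against a group of applicants", False)])
  risk_management_cycle(screener, [Risk("accuracy drift after a model update", True)])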

Sanctions

The new regulation introduces severe sanctions for entrepreneurs who violate its provisions. For the most serious infringements, such as the use of prohibited artificial intelligence systems, the AI Act provides for fines of up to EUR 35,000,000 or up to 7% of the entrepreneur’s total worldwide annual turnover, whichever is higher.

Other breaches (e.g. failure to meet security requirements, mishandling of data) are punishable by a fine of up to EUR 15,000,000 or 3% of worldwide annual turnover, while providing false or misleading information to the authorities is punishable by a fine of up to EUR 7,500,000 or 1% of worldwide annual turnover, in each case whichever is higher.
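
In arithmetic terms, each tier caps the fine at the higher of a fixed amount and a share of worldwide annual turnover. A minimal sketch (the amounts follow the summary above; the function and tier labels are our own illustration):

  # Maximum fine per tier: (fixed cap in EUR, share of worldwide annual turnover)
  FINE_TIERS = {
      "prohibited_practices":   (35_000_000, 0.07),
      "other_breaches":         (15_000_000, 0.03),
      "misleading_information": (7_500_000, 0.01),
  }

  def max_fine(breach: str, worldwide_annual_turnover_eur: float) -> float:
      """Upper limit of the fine: fixed cap or turnover share, whichever is higher."""
      fixed_cap, share = FINE_TIERS[breach]
      return max(fixed_cap, share * worldwide_annual_turnover_eur)

  # For a company with EUR 1 bn turnover, 7% (EUR 70 m) exceeds the EUR 35 m cap:
  print(max_fine("prohibited_practices", 1_000_000_000))  # 70000000.0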

AI and the market

A reading of the AI Act suggests that the EU does not intend to over-regulate artificial intelligence or to interfere unduly in the development of the market for AI systems. As regards market issues, the scope of the adopted rules is in fact quite narrow.

The EU appears to be promoting the responsible development of artificial intelligence by giving innovators and start-ups access to so-called regulatory sandboxes set up by Member State authorities. These sandboxes will provide an environment for the development, training, testing and validation of innovative AI systems under the supervision, and with the support, of national authorities. They will also allow AI systems to be tested in real-world conditions, under appropriate supervision of course.

Entry into force of the AI Act

Given the broad scope of the new regulation, its provisions will apply in stages.

As I mentioned earlier, the new regulation officially came into force on 1 August 2024, but most of its provisions will not apply until August 2026 (24 months after entry into force). The AI Act does, however, provide for exceptions to this rule. From February 2025, the general provisions and the rules on prohibited (unacceptable-risk) AI practices will apply. From August 2025, the provisions of Chapter III Section 4 and Chapters V, VII and XII, i.e. the rules on notifying authorities and notified bodies, general-purpose AI models, the European Artificial Intelligence Board and penalties, will apply.

In August 2027, the last group of provisions, i.e. the rules on high-risk AI systems classified under Article 6(1) of the Regulation, will begin to apply.
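
For compliance planning, these staggered dates can be kept as a simple schedule. The sketch below restates the milestones from this article (the day-level dates, 2 February and 2 August, come from the Regulation’s final provisions; the labels are our own shorthand):

  from datetime import date

  # Staged application of the AI Act, as summarised above (simplified)
  APPLICATION_SCHEDULE = {
      date(2024, 8, 1): "entry into force",
      date(2025, 2, 2): "general provisions; prohibited (unacceptable-risk) practices",
      date(2025, 8, 2): "notified bodies; general-purpose AI models; governance; penalties",
      date(2026, 8, 2): "bulk of the Regulation, including most high-risk rules",
      date(2027, 8, 2): "remaining high-risk provisions under Article 6(1)",
  }

  def applicable_on(day: date) -> list:
      """Which groups of provisions already apply on a given date."""
      return [scope for start, scope in APPLICATION_SCHEDULE.items() if start <= day]

  print(applicable_on(date(2025, 6, 1)))  # first two milestones apply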

This staggered introduction of the rules is intended to allow both administrative authorities and entrepreneurs working with AI systems to prepare properly for their application.

What can we do to help?

Our law firm provides comprehensive legal advice on compliance with the AI Act and on artificial intelligence in general.

Our services in this area include:

  • Assessing the impact of the AI Act on the entrepreneur
  • Advice on the solutions that an entrepreneur should implement in order to operate in compliance with the law
  • Analysis of the legal risks associated with the current business in relation to AI systems
  • Providing training and workshops

[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)


Piotr Dudek Director of the New Technologies, Defence & Aerospace Department, Advocate
TGC Corporate Lawyers
