AI Act approved

2 February 2024

The time has come: the AI Act is becoming a reality. After a tough struggle, everyone involved in the legislative process was ultimately convinced: AI with the AI Act is better than AI without the AI Act. After all, it is better to continue working with (more or less) concrete regulations than with further uncertainty – at least that is the hope of many.

As the EU legislator takes the final steps, the baton for the AI Act is now being passed to those who will fall under the new rules and who will have to work with formulations that were debated back and forth until recently but have now been finalized.

Here is a first brief overview of what the AI Act covers, whom it affects, when it applies, what needs to be done and how high the fines are.

The principle: the AI Act takes a risk-based approach – the more risk a system poses, the more requirements it has to meet.

Who is (mainly) affected by this new law?

  • Providers of AI systems that fall under one of the risk categories, if the system or its output is used in the EU;
  • Deployers of AI systems that fall under one of the risk categories;
  • Providers of general-purpose AI (GPAI) models (with and without systemic risk);
  • Providers of GPAI systems (insofar as these are themselves high-risk or are used as components in high-risk systems).

There will be transition periods for providers and deployers to adapt to the new requirements (a rough timeline sketch follows the list):

  • 6 months for prohibited AI systems
  • 12 months for GPAI models
  • 24 months for high-risk AI systems in accordance with Annex III of the AI Act
  • 36 months for high-risk AI systems in already regulated areas according to Annex II of the AI Act
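
The exact dates depend on the Act's entry into force, which had not yet been fixed at the time of writing. A minimal Python sketch, using a purely hypothetical entry-into-force date, shows how the four transition periods translate into concrete compliance deadlines:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped to the 28th)."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    return date(year, month, min(d.day, 28))

# Assumption: the entry-into-force date below is a placeholder, not the
# real one, since the date was not yet fixed at the time of writing.
entry_into_force = date(2024, 6, 1)  # hypothetical

transition_periods_months = {
    "prohibited AI systems": 6,
    "GPAI models": 12,
    "high-risk AI systems (Annex III)": 24,
    "high-risk AI systems in regulated areas (Annex II)": 36,
}

for category, months in transition_periods_months.items():
    print(f"{category}: applies from {add_months(entry_into_force, months)}")
```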

For the various players, this means in particular:

Prohibited AI systems must therefore be withdrawn from the European market relatively quickly, i.e. within 6 months. This applies above all to systems that lead to manipulation, discrimination and other serious infringements of fundamental rights, in particular due to the automated categorization of individuals (e.g. social scoring, emotion recognition in the workplace and in education, biometric categorization, etc.).

Providers of GPAI models must draw up technical documentation within 12 months at the latest, covering in particular the training and testing processes. This documentation must be made available to providers of AI systems that integrate GPAI models. The aim is to create greater transparency in the value chain regarding the data basis of a GPAI model and the associated risks. Only then can providers of AI systems that integrate GPAI models gain a meaningful overview of their own risks and potential liability and, if necessary, limit them. In addition, providers of GPAI models must publish summaries of the content used for training and take measures to comply with copyright law, in particular with regard to effectively declared reservations of rights against commercial text and data mining.
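
What exactly such documentation must contain is set out in the Act's annexes; as a rough illustration of the kind of information involved, a skeleton might look like the sketch below (the field names are hypothetical and do not reproduce the statutory list):

```python
# Illustrative skeleton only: the field names below are hypothetical and do
# not reproduce the statutory list of documentation items in the AI Act.
gpai_model_documentation = {
    "model": {"name": "example-model-1", "version": "1.0"},  # hypothetical model
    "training_process": {
        "data_sources": ["licensed corpora", "publicly available web text"],
        "tdm_rights_reservations_respected": True,  # opt-outs from text and data mining
    },
    "testing_process": {
        "evaluations": ["accuracy", "bias", "robustness"],
    },
    "training_content_summary": "https://example.com/training-summary",  # placeholder URL
}
```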

For rightsholders, a full and effective reservation of rights with regard to text and data mining will thus become even more important if they do not want their works to be used to train GPAI models.

For providers of GPAI models with systemic risk, there are additional requirements, such as model evaluations, the identification and documentation of risks and incidents, and an adequate level of cybersecurity.

Providers of GPAI models released under open-source licenses, on the other hand, only have to comply with the transparency and copyright requirements (provided the models do not pose a systemic risk).

For the transitional period, GPAI model providers can measure themselves against codes of practice drawn up together with the AI Office (a newly established authority) in order to demonstrate conformity to their customers. A first code of practice is to be available within 9 months.

Providers of generative AI have to ensure that AI outputs are marked in a machine-readable format and are detectable as AI-generated. Deployers who publish AI-generated text intended to inform the public have to disclose that the text was generated by AI.
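
The Act requires machine-readable marking but does not mandate a particular technical standard. As a minimal sketch of the idea – with a purely illustrative JSON structure and hypothetical field names, not a prescribed format – a provider might attach provenance metadata to generated content like this:

```python
import json
from datetime import datetime, timezone

def mark_as_ai_generated(text: str, generator: str) -> str:
    """Wrap generated text in a machine-readable provenance envelope.

    The AI Act requires machine-readable marking but does not mandate a
    particular standard; this JSON structure and its field names are purely
    illustrative (in practice, schemes such as content credentials or
    watermarking are candidates).
    """
    return json.dumps(
        {
            "content": text,
            "provenance": {
                "ai_generated": True,
                "generator": generator,
                "created_at": datetime.now(timezone.utc).isoformat(),
            },
        },
        ensure_ascii=False,
    )

print(mark_as_ai_generated("Draft product description ...", "example-model-1"))
```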

Providers of high-risk AI systems in accordance with Annex III – such as non-prohibited biometric systems or systems used in critical infrastructure, education and vocational training, employment or law enforcement – face a significantly wider range of requirements and are therefore granted a transitional period of 24 months. In particular, the providers of these systems must introduce a risk management system, implement data governance, draw up technical documentation, ensure record-keeping and provide instructions for use. Above all, however, these systems must be subject to human oversight and achieve an appropriate level of accuracy, robustness and cybersecurity, and providers must establish a quality management system.

With regard to Annex III systems, there are now new possibilities for “self-exclusion”: companies can independently review their systems and come to the conclusion that a system does not fall under the high-risk rules. However, they still have to register the system, and their assessment can be reviewed.

Certain deployers of high-risk AI systems must also carry out a fundamental rights impact assessment.

However, providers of high-risk AI systems whose products are already covered by existing EU regulations listed in Annex II, or in which AI is integrated as a safety component of such products, will be given more time and only have to comply with the new requirements within 36 months.

If AI systems regulated by the AI Act do not meet the requirements, severe fines loom; in each case, the higher of the two amounts applies (an illustrative calculation follows the list):

  • EUR 35 million or 7% of total worldwide annual turnover for violations of the prohibited-practices rules;
  • EUR 15 million or 3% of total worldwide annual turnover for other non-compliance with requirements or obligations;
  • EUR 7.5 million or 1.5% of total worldwide annual turnover for incorrect, incomplete or misleading information provided to notified bodies and national competent authorities.
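
A quick back-of-the-envelope calculation shows how the turnover-based cap dominates for large companies. The sketch below assumes the "higher of the two amounts" rule and uses a hypothetical turnover figure:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of the fine: the higher of the fixed amount and the
    turnover-based amount."""
    return max(fixed_cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical total worldwide annual turnover (EUR 2 bn)

print(f"Prohibited practices: up to EUR {max_fine(turnover, 35_000_000, 0.07):,.0f}")
print(f"Other obligations:    up to EUR {max_fine(turnover, 15_000_000, 0.03):,.0f}")
print(f"Misleading info:      up to EUR {max_fine(turnover, 7_500_000, 0.015):,.0f}")
```

For the hypothetical EUR 2 billion turnover, the percentage-based amounts (EUR 140, 60 and 30 million respectively) clearly exceed the fixed caps.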

As a first step, it is therefore advisable for all companies that provide or deploy AI systems to check now exactly which category they will fall into and which requirements will apply to them from which point in time.