Regulation of Artificial Intelligence in Europe – What’s in the pipeline?

Published on 2 December 2020

Shortly after her inauguration, the new European Commission president, Ursula von der Leyen, expressed the Commission's intent to come up with a Digital Strategy addressing Artificial Intelligence regulation within 100 days of the beginning of her presidency. Keeping this promise, the European Commission published a first White Paper on Artificial Intelligence – A European approach to excellence and trust in February 2020. Comments on this White Paper were collected until 31 May 2020. Together with the White Paper, the Commission published its Report on safety and liability implications of AI, the Internet of Things and Robotics, providing more detail on the gaps the Commission has identified in existing laws.

What are we talking about?
A definition of Artificial Intelligence was provided by the European Commission’s AI High-Level Expert Group (‘AI HLEG’) on 8 April 2019, when the group published its Ethics Guidelines for Trustworthy AI. The Commission’s papers refer to the AI HLEG, which defines AI as follows:

Artificial intelligence (AI) systems are software (and possibly also hardware) systems

·     designed by humans that,
·     given a complex goal,
·     act in the physical or digital dimension by
·     perceiving their environment through data acquisition,
·     interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information derived from this data, and
·     deciding the best action(s) to take to achieve the given goal.

These defining aspects of AI form the basis for an evaluation of legal gaps and possible requirements for new regulation.

Where are the legal gaps?
The Commission Report on safety and liability implications of AI, the Internet of Things and Robotics identified legal gaps mainly in the following respects:

·      enhanced security risks due to connectivity and openness of AI systems;
·      data dependency of AI actions and therefore the need for neutral and accurate data for AI training;
·      a certain autonomy of AI decisions;
·      opacity of the systems’ operations;
·      complexity of products, systems and of value chains;
·      gaps in product liability laws;
·      general fault-based liability rules, which do not fit AI systems that decide autonomously.

The European Parliament published a further Report on Intellectual property rights for the development of artificial intelligence technologies, evaluating the status quo and identifying various gaps in IP law. The report found gaps, for example, with respect to the questions of whether AI itself is or can be protected by IP rights, whether IP-protected content can be ‘food’ for AI training, and whether anyone owns rights to works created by AI.

What kind of regulation can we expect?
Having identified these various gaps, the EU intends to issue a comprehensive legislative package on AI, which will include new regulations for those who build and deploy AI. First hints at what such a package could contain can be taken from three resolutions the European Parliament adopted on 20 October 2020: the Framework of ethical aspects of artificial intelligence, robotics and related technologies; the Civil liability regime for artificial intelligence; and the Intellectual property rights for the development of artificial intelligence technologies.

Looking at these Resolutions, the following topics might be seen as key to building an ecosystem of trust and enhancing the general social acceptance of AI:

·      explainability of AI functioning and decision making
·      specific requirements for high-risk applications, especially in sectors in which significant risks can be expected to occur, e.g. healthcare, energy and transport, or where there is significant risk to sensitive fundamental rights, e.g. biometric identification
·      ex-ante conformity assessment, labelling schemes
·      external regulatory audits and information for competent authorities
·      information of users about interaction with AI
·      human contact-point for redress systems
·      technical robustness and accuracy of AI
·      quality, origin and neutrality of training data
·      specific data protection requirements
·      data and record-keeping to reduce opacity
·      accountability, liability and burden of proof
·      human control mechanisms and risk assessment

When?
A first draft of this new AI legal framework is expected in the first quarter of 2021, while some parts could already be reflected in the Digital Services Act, for which a first draft is expected in December 2020.

Who?
The addressee of such obligations might not always be the software developer, but could be whichever actor is best placed to address the potential risks. Obligations could therefore also be imposed on the deployer or the service provider. In any case, the obligations will have to be observed by all economic operators providing AI-enabled products or services in the EU, regardless of whether they are established in the EU or not.

What to do?
Anyone engaged or interested in AI should closely monitor the developments of the coming months, as the new laws will almost certainly have an impact on AI systems that are currently being trained or are even already on the market.

In practice, contractual frameworks for AI systems should therefore already be examined and might include clauses anticipating future developments. Certain upcoming liability risks might also already be taken into account.

Dr. Ursula Feindor-Schmidt, LL.M. spoke on “Regulation of Artificial Intelligence in Europe – What’s in the pipeline?” at the Rise of AI 2020 conference. This article is a short summary of her speech.