European Parliament approves rules on Artificial Intelligence
15/02/2021
Reports that propose the best way to regulate Artificial Intelligence (AI)
In October 2020, the European Parliament adopted three reports proposing how best to regulate Artificial Intelligence (AI) in order to boost innovation and confidence in the technology. The documents resulted from the work of a special committee on AI in the digital age, created to analyze the impact of AI on the European Union’s economy. The reports address, respectively, ethics in AI, civil liability regimes, and intellectual property rights (IPR).
The first report¹ proposes a “regulation on ethical principles for the development, deployment and use of AI, robotics and related technologies”. The principles cover compliance with the law in force in the European Union; full respect for human dignity, autonomy and security, together with other fundamental rights; processing of personal data in accordance with the GDPR; and encouragement by the European Union and the Member States of projects aimed at solutions that promote social inclusion, democracy, pluralism, solidarity, equity, equality and cooperation.
In this regard, the report highlights the objectives of strengthening public confidence; supporting companies in dealing safely with current and future regulatory requirements and risks, both during the innovation process and in the subsequent phase of use; organizing a regulatory framework that is adequate and proportionate, encouraging security and innovation while ensuring fundamental rights and consumer protection; certifying compatibility with ethical principles during the development, deployment and use of these technologies; and demanding transparency.
The second report², containing recommendations on the civil liability regime applicable to AI, stresses as a premise that “there should be no excessive regulation and bureaucracy should be avoided”, so that “the rules on civil liability relating to AI should seek to strike a balance between protecting the public, on the one hand, and incentives for companies to invest in innovation, in particular in AI systems, on the other”, thereby creating “the greatest possible legal certainty throughout the chain of responsibility”, namely for the producer, the operator, the injured party and any other third party.
The report suggests strict liability for high-risk AI systems and fault-based liability for all other AI systems. In both cases, the claim that the harm or damage was caused by an autonomous activity, device or process based on the AI system does not exclude liability, in contrast to force majeure, which does exempt operators from liability.
High risk is understood as “a significant potential of an autonomously operating AI system to cause harm or damage to one or more persons in a manner that is random and goes beyond what can reasonably be expected”. The significance of this potential depends on a joint analysis of “the severity of possible harm or damage, the degree of autonomy of decision-making, the likelihood that the risk materializes, and the manner and context in which the AI system is used”. An annex to the regulation, yet to be defined, will contain an exhaustive list of the AI systems classified as high risk.
For these systems, the operator “is strictly liable for any harm or damage caused by an activity, a device or a physical or virtual process based on that AI system”, and this rule prevails over national regimes “in the event of a divergent classification of the strict liability of AI systems”.
For systems not classified as high risk, the operator “is subject to fault-based liability for any harm or damage caused by an activity, a device or a physical or virtual process based on the AI system”. Liability is excluded if the operator can prove that the harm or damage was caused without fault on their part, for example, because the AI system was activated without their knowledge, or because due diligence was observed through the execution of specific actions.
The third report³ focuses on IPR in the development of AI-related technologies. It highlights the importance “of creating legal certainty and of establishing the necessary confidence to encourage investment in these technologies”, recommending “an assessment by sector and by type of the implications of AI technologies for IPR”, taking into account “the degree of human intervention, the autonomy of AI, and the importance of the role played by data and by copyright-protected material”.
The report differentiates AI-assisted human creations from AI-generated creations, underlining the need to “distinguish between IPRs for the development of AI technologies and the IPRs potentially granted to AI-generated creations”. It considers that the first type of creation should be protected by IPR, while “works produced autonomously by artificial agents and robots may not be eligible for copyright protection”, since the principle of originality is linked to a natural person and the concept of “intellectual creation” relates to the personality of the author. The report nonetheless admits the possibility of such protection, suggesting a horizontal approach that assigns ownership of rights to the natural or legal persons who lawfully created the work.
The expectation is that, in early 2021, the Commission, the body responsible for preparing most proposals for legislative acts, will present its legislative proposal to Parliament and to the Council, which is composed of representatives of each Member State, following the steps of the ordinary legislative procedure.
By: Wilson Sales Belchior
¹ Available at: <https://www.europarl.europa.eu/doceo/document/A-9-2020-0186_EN.html>. Accessed on: 3 Nov. 2020.
² Available at: <https://www.europarl.europa.eu/doceo/document/A-9-2020-0178_EN.html>. Accessed on: 3 Nov. 2020.
³ Available at: <https://www.europarl.europa.eu/doceo/document/A-9-2020-0176_EN.html>. Accessed on: 3 Nov. 2020.