Artificial Intelligence, OECD principles and recommendations (Part II)
Continuing the analysis of the “Recommendation of the Council on Artificial Intelligence”, a document prepared by the Organization for Economic Cooperation and Development (OECD) to structure international standards for Artificial Intelligence (AI), this article examines the principles and recommendations that constitute the organization’s guidelines for promoting public trust and benefits for all stakeholders.
(1) Inclusive growth, sustainable development and well-being: benefits that AI must provide to human beings and the planet. The document cites illustrative purposes, including augmenting human capacities, promoting creativity, including under-represented populations, reducing economic, social and gender inequalities, and protecting the environment.
(2) Human-centered values and fairness: the obligation to respect human and fundamental rights, democratic values, the rule of law and diversity.
This principle also seeks to guarantee human oversight and/or intervention, whenever necessary, at some point after products and services become available (simultaneous or subsequent validation, shutdown during operation, and restriction of operational capacities in certain circumstances).
It is important to underline that, in documents of this nature, the rights to be protected are related to the potential risks of new technologies. In this case, the list of rights set out in the document is linked to three main risks that AI systems may cause: violation of privacy (privacy and data protection), biases that amount to prohibited discrimination (non-discrimination, fairness, diversity and justice), and the reduction of jobs (internationally recognized labor rights).
(3) Transparency and explainability: transparency concerns the provision of meaningful information that allows users to understand when they are dealing with AI systems rather than with humans. Explainability is intended to mitigate the risk of opacity, that is, the difficulty of auditing and/or verifying the decision-making, forecasting or recommendation processes carried out by AI systems, and it also addresses possible obstacles to understanding from the perspective of human users.
Once this principle is complied with, authorities can verify, in a specific case, the legislation and the liability regime applicable to the results of decisions involving AI. Adversely affected users can then challenge in court the results generated by AI systems, relying on clear and easily understood information.
(4) Robustness, security and safety: the requirement to manage and assess the risks of AI systems throughout their lifecycle, so that they work as planned without posing unreasonable safety risks. This makes it feasible to investigate the data sets used in training and operation, as well as the processes and decisions carried out by the AI.
(5) “Accountability”: actors engaged in the development of AI systems must be held accountable in accordance with, at a minimum, these principles.
This is a challenge for national states with regard to attributing a liability regime to actors, applications and systems that differ so widely from one another. Once again, this discussion must be guided by the caution inherent in analyzing the existing legal and regulatory framework in the face of new challenges, and by the recognition that this technology evolves at a speed that normative production and regulatory activity cannot keep pace with.
The recommendations are structured around strategic objectives that state actors can adopt, namely: (1) facilitate public and private investment in order to stimulate innovation, with a focus on challenging technical issues and representative data sets; (2) organize mechanisms for the secure, fair, legal and ethical sharing of data and knowledge, with accessible digital infrastructure and technologies; (3) support an agile transition from the R&D stage to the implementation and operation of AI systems by reviewing, where appropriate, standards, regulations and compliance-checking mechanisms; (4) empower people to use and interact with AI systems by organizing professional training programs; (5) ensure that governments and stakeholders actively cooperate to promote the principles, the sharing of knowledge and the development of standards.
Investigations into the international repercussions of these guidelines are already found in the specialized literature. As initiatives to legislate and regulate AI systems emerge around the world, it will become possible to observe more precisely the influence of the OECD document, which sets out the minimum content needed to ensure that trustworthy AI systems are developed.
By: Wilson Sales Belchior