
Recommendations for the use of artificial intelligence in the Judiciary

12/02/2021

The European Commission for the Efficiency of Justice (CEPEJ), created in 2002, aims to improve the efficiency and functioning of the Judiciary in the Member States through the definition of instruments, evaluation metrics and the preparation of documents. CEPEJ is responsible, among other tasks, for analyzing the results of judicial systems, identifying the difficulties they encounter, providing assistance and establishing concrete means to improve the evaluation of their results and functioning.

In this context, in December 2018 it adopted the “European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems”, which sets out a body of principles intended to guide public policy makers, legislators, legal professionals and the public and private actors responsible for the design and development of Artificial Intelligence (AI) systems involving the processing of judicial decisions and data.

The Charter is structured around premises that encourage the use of AI in the Judiciary in conformity with the rules on the protection of human rights and of personal data, respect for people's fundamental rights, and the requirements of transparency, impartiality, fairness, evaluation by independent experts and certified regularity.

This is explained, according to CEPEJ, by the multiplicity of uses of AI systems in the judicial sphere. In civil, commercial and administrative law, the processing of judicial decisions can, for example, contribute to predictability in the application of the law and to uniformity and consistency in the decisions of the courts. In criminal matters, however, the risk of discrimination linked to the use of an unrepresentative data set, in addition to the processing of sensitive data, must be mitigated in order to guarantee a fair trial.

For this reason, one of the discussions in research on AI and law is the proposal of a guideline that adequately informs litigants about the use of this technology in their case, which means explaining the characteristics of AI decision-making, its risks and its consequences, so that the individual can consciously choose whether or not to authorize this use.

The Charter is divided into a presentation of the principles, an overview of the state of use of AI in the judiciaries of the Member States, possible applications of AI in European judicial systems and a glossary.

The principles are organized along five axes:

  1. Respect for fundamental rights: the requirement that, at the design and training stages, AI systems (applications involving the processing of judicial decisions and data, conflict resolution, support for decision-making and guidance to the public) incorporate rules that prohibit direct violations of the scope of protection of this principle, which encompasses the legislation on fundamental rights and the protection of personal data as well as the principles of judicial independence and the democratic rule of law.
  2. Non-discrimination: the actors involved in the development of AI systems must ensure that data processing methods do not reproduce or aggravate prohibited discrimination, especially when it is based directly or indirectly on sensitive data, combined with corrective measures to mitigate or neutralize these risks and with efforts to raise stakeholder awareness.

The data set used to train AI systems needs to be sufficiently representative across diverse dimensions (ethnicity, gender, political opinions, religious or philosophical beliefs, sexual orientation, among others), and the information related to this phase must be preserved so as to allow evaluation by independent third parties and the identification of decisions made by this type of technology.

  3. Quality and security: recommendations for the formation of multidisciplinary teams, involving legal professionals and researchers in law and the social sciences, aimed at producing functional models; the use of certified sources for access to court decisions, whose content must not be modified before being processed by the AI system, providing transparency that neither the content nor the meaning of the decision under processing has been altered; and the storage of models and algorithms in a secure environment, so as to preserve the system's integrity and intangibility.
  4. Transparency, impartiality and fairness: the objectives of this principle relate to users' understanding of the results produced by AI (in clear and familiar language) and to the possibility of auditing data processing methods. It is therefore recommended to provide subsets of information about the algorithm (nature of the services offered, tools developed, variables used, training data, risk of error); to develop mechanisms that reduce bias (discriminatory treatment of human beings) through greater diversity in data sets and approaches; and to prioritize the interests of justice.
  5. Under user control: refers to increasing users' autonomy; the importance of professionals in the Judiciary (the possibility of reviewing the judicial decisions and data used to produce results); the availability of clear and understandable information (prior processing by AI before or during the judicial proceedings, with the right to object; whether or not AI-based solutions are binding; the different options available; the rights to legal advice and to access to a court); and the mandatory participation of those who are part of the justice systems.

On the international scene, AI systems can already be found with varied applications directly or indirectly associated with justice systems: identification of the probability of success or failure of proceedings before a court; estimates of the amount of compensation to be awarded; support for judicial decision-making; assessment of a person's risk of reoffending; advanced case-law search engines; discovery of patterns in decisions handed down by individual judges and/or collegiate bodies; virtual assistants to inform litigants or support them in legal proceedings; and the preparation of forecast statistics on the management of human and financial resources in the Judiciary.

This reality increases the importance of the regulatory debate on AI solutions applied to the Judiciary and lends weight to CEPEJ's classification of possible AI applications, which is divided into:

Uses to be encouraged: enhancement of case law, access to justice, creation of new strategic tools.
Applications requiring methodological precautions: development of scales in civil disputes, support for alternative methods of conflict resolution, “online dispute resolution”, use of algorithms in criminal investigations.
Uses requiring further scientific study: analysis of the decisions of individual judges, predictive analysis of judicial decisions.
Uses to be considered with the most extreme reservations: the use of algorithms in criminal matters to profile individuals, and the use of a synthesis of all decisions already handed down as the basis for future decisions.

It is therefore expected that the agenda of the regulatory debate will be guided by the search for a balance between innovation, broad benefits to society, the protection of fundamental rights and democratic values, and that it will be organized through dialogue, participation and negotiation with all interested parties.

By: Wilson Sales Belchior

Source: Portal Migalhas

https://www.migalhas.com.br/depeso/330932/recomendacoes-para-o-uso-de-inteligencia-artificial-no-judiciario
