Document for Responsible Use of Artificial Intelligence Launched in Singapore
15/02/2021
“AI Ethics & Governance Body of Knowledge” (AI E&G BoK) was made available to the public in Singapore in October 2020
The AI E&G BoK is a reference document that guides stakeholders in the ethical and responsible development, governance and deployment of Artificial Intelligence (AI) systems. The initiative is part of the Singapore government’s objective of creating a progressive, safe and trustworthy AI environment that benefits businesses and people, driving economic transformation.
The document is the result of a collaboration between the “Singapore Computer Society”, the country’s leading professional body for ICT industry professionals, leaders and students, and the “Infocomm Media Development Authority”, a statutory board of the Singapore government under the Ministry of Communications and Information that develops and regulates the infocomm and media sectors.
The AI E&G BoK was reviewed by a panel of 25 experts from industry and academia and is structured as a “living document”, ensuring periodic updates and improvements as AI technologies evolve. It will also serve as the basis for training and certifying professionals in the responsible implementation of AI solutions, through a course to be launched next year in partnership with Nanyang Technological University.
The publication, in the form of a manual, is aimed mainly at AI solution providers, companies, organizations, individuals and consumers. Priority is given to building these stakeholders’ trust in AI through guidelines for responsible use that help manage different types of risks. It also seeks to encourage the actors involved in developing these solutions to align with responsible practices in data management and protection.
The AI E&G BoK is divided into nine sections, which cover, in summary: internal governance measures to incorporate the values, risks and responsibilities associated with algorithmic decision-making; methodologies for determining acceptable risks and identifying appropriate levels of human involvement in AI-augmented decision-making; operations management, considering issues such as data protection and auditability when developing, selecting and maintaining AI models; strategies for communicating with an organization’s stakeholders and managing relationships among them; and an appendix containing local and global references on AI ethics.
Its importance, according to those responsible for the publication, lies in addressing “how, as we depend more on probabilistic data-based machine learning models, we can maintain sufficient control and supervision”. To this end, it seeks to answer questions that inform the design of AI governance and regulation in the country, such as: “How can AI be used, and how should it be used? How do you make sure it doesn’t malfunction? How can human centricity be maintained throughout the life cycle of AI systems? How can we ensure that AI results stay within previously determined limits? What should AI regulation look like?”.
The assumptions guiding the answers in the AI E&G BoK relate to Singapore’s choice of a voluntary approach to AI governance, which seeks a balance between supporting innovation and maintaining public confidence in AI. This approach rests on a basic set of ethical values and standards of particular relevance to AI: fairness, explainability and transparency.
Accordingly, the high-level guiding principles adopted in the AI E&G BoK to promote confidence in AI and understanding of the use of AI technologies refer, on the one hand, to transparency, fairness and explainability, and, on the other, to human-centricity.
The first advises that organizations using AI in decision-making must ensure that the process is explainable, transparent and fair, stressing that a reasonable effort must be made so that the use and application of AI reflect the objectives of these principles as much as possible. These elements need to be present in specific processes along the value chain, such as data collection, pattern recognition and prediction, among others.
The second clarifies that AI solutions must be human-centric. The focus is on the person using or affected by AI, so that throughout the AI life cycle the top priorities are protecting the interests of human beings, including their well-being and safety. The core of this verification is therefore whether the processes are fair, transparent and explainable, and whether the intention and outcome of these solutions is to benefit human beings.
The regulatory and ethical debate on AI will certainly continue to grow rapidly as the potential of this technology to accelerate economic growth, deliver public benefits and stimulate the development of countries and communities is recognized, making it essential to build and disseminate trust and security in AI systems.
By: Wilson Sales Belchior