Scientific Foresight (STOA) | July 16, 2020

Mapping the terrain of AI ethics initiatives


Written by Mihalis Kritikos.

The ethics of artificial intelligence: Issues and initiatives

While artificial intelligence (AI) applications are numerous, AI creates novel ethical challenges that threaten both users and non-users of the technology, including exacerbating existing inequalities and generating discrimination and bias. As AI ethics has become a point of discussion and analysis at local and international levels, policy-makers and AI developers are facing a series of questions: What can be done to minimise harm while maximising the benefits of AI solutions? How can we develop and use AI in a human-centric and trustworthy manner? How can we make sure that there is sufficient transparency and accountability in the way algorithms function and AI is used? Are traditional ethical frameworks and human rights legal instruments sufficient to address AI-specific challenges? Governments and intergovernmental organisations are responding to these questions by drafting AI-specific ethical standards and adopting ethical guidelines and principles.

Within this frame, STOA launched a study to map the ethical terrain in the field of AI, in terms of both ethical concerns and initiatives, and to analyse the current body of principles and guidelines on ethical AI. ‘The ethics of artificial intelligence: Issues and initiatives’, published in March 2020, was carried out by Eleanor Bird, Jasmin Fox-Skelly, Nicola Jenner, Ruth Larbey, Emma Weitkamp and Alan Winfield of the Science Communication Unit at the University of the West of England. The STOA Panel requested the study following a proposal by STOA Chair Eva Kaili (S&D, Greece). It examines the ethical implications, dilemmas, tensions and moral questions surrounding the development and deployment of AI, and maps the full spectrum of ethical standards, guidelines and strategies produced by state and non-state actors worldwide.

The study begins by mapping the main ethical dilemmas and moral questions associated with the deployment of AI. Special focus is placed on the effects of AI on citizens’ fundamental human rights within society. It explores the potential impact of AI on the labour market and economy, sheds light on how different demographic groups might be affected, and addresses questions of inequality and the risk that AI will further concentrate power and wealth in the hands of the few. The study addresses issues related to privacy, human rights and dignity, as well as the risk that AI will perpetuate the biases, intended or otherwise, of existing social systems or their creators. The analysis further explores the psychological impacts of AI related to dependency and deception. It also considers the potential impacts of AI on financial and legal systems, including civil and criminal law, such as risks of manipulation and collusion, and questions relating to the use of AI in criminal activities. Finally, it notes that large-scale deployment of AI could have both positive and negative impacts on the environment.

The study then performs a scoping review, outlining all major ethical initiatives, summarising their focus and, where possible, identifying their funding sources and the harms and concerns they tackle. By examining a wide range of initiatives, the study’s analysis reveals a growing consensus around the principles of AI accountability and auditability. Within the initiatives covered, a global convergence is emerging on the need for new standards that detail measurable and testable levels of transparency, so that systems can be objectively assessed for compliance. Particularly in situations where AI replaces human decision-making, the majority of the ethical statements agree that AI must be safe, trustworthy and reliable, and act with integrity. Throughout the ethical initiatives, there is also a general recognition of the need for greater public engagement and education with regard to the potential harms of AI, and the initiatives suggest a range of ways in which this could be achieved. They also pay particular attention to autonomous weapons systems, given their potential to seriously harm society.

Through the analysis of three case studies, covering healthcare robots, autonomous vehicles (AVs) and the use of AI in warfare and weapons systems, the authors highlight the particular ethical risks that arise at various stages of AI development and deployment. Their analysis enriches the current ethical AI discourse through a comprehensive appraisal of the actual ethical challenges and moral dilemmas that emerge as AI applications are developed and deployed. The study further discusses emerging AI ethics standards and regulations and examines all major national and international policy strategies on AI. It highlights not only the diversity and complexity of the ethical concerns arising from the development of AI, but also the variety of approaches to, and understandings of, ethics. The authors identify notable gaps in the current AI ethics framework, including: the consideration of environmental impacts; mechanisms for fair benefit sharing; the exploitation of workers; energy demands in the context of environmental and climate change; and the potential for AI-assisted financial crime.

Based on this analysis, the authors put forward a series of policy options. These centre on the need for cost-benefit studies and life-cycle analyses that include environmental externalities, for minimum acceptable reporting requirements, and for new retraining programmes and social and financial support for displaced workers. Of particular importance are suggestions to declare that AI is not a private good but should instead be available for the benefit of all; to focus on those most at risk of being left behind; to make worker inputs more transparent in the end product; and to develop appropriate support structures and working conditions for precarious workers. The authors also propose the development of new forms of technology assessment, placing the burden of proof on developers to demonstrate safety and public benefit, and creating a single regulatory body to provide prescriptive guidance to national regulators, which could help to eliminate incoherent and conflicting sets of standards and guidance.

Overall, the study provides a useful starting point for understanding the inherent diversity of current principles and guidelines for ethical AI and outlines the challenges ahead for the global community. By shedding light on under-represented ethical principles and detailing the most important similarities and differences found across the various ethical initiatives, the study can potentially help policy-makers to establish a common ground amidst a fragmented AI ethics landscape.


Read the full study and accompanying STOA Options Brief to find out more.

