
Mapping the AI ethics initiatives terrain

Written by Mihalis Kritikos,

The ethics of artificial intelligence: Issues and initiatives


While artificial intelligence (AI) applications are numerous, AI creates novel ethical challenges that threaten both users and non-users of the technology, including exacerbating existing inequalities and generating discrimination and bias. As AI ethics has become a point of discussion and analysis at local and international levels, policy-makers and AI developers are facing a series of questions: What can be done to minimise harm while maximising the benefits of AI solutions? How can we develop and use AI in a human-centric and trustworthy manner? How can we make sure that there is sufficient transparency and accountability in the way algorithms function and AI is used? Are traditional ethical frameworks and human rights legal instruments sufficient to address AI-specific challenges? Governments and intergovernmental organisations are responding to these questions by drafting AI-specific ethical standards and adopting ethical guidelines and principles.

Within this frame, STOA launched a study to map the ethical terrain in the field of AI, in terms of both ethical concerns and initiatives, and to analyse the current body of principles and guidelines on ethical AI. The study, ‘The ethics of artificial intelligence: Issues and initiatives’, published in March 2020, was carried out by Eleanor Bird, Jasmin Fox-Skelly, Nicola Jenner, Ruth Larbey, Emma Weitkamp and Alan Winfield of the Science Communication Unit at the University of the West of England. The STOA Panel requested the study following a proposal by STOA Chair Eva Kaili (S&D, Greece). It examines the ethical implications, dilemmas, tensions and moral questions surrounding the development and deployment of AI, and maps the full spectrum of ethical standards, guidelines and strategies produced by state and non-state actors worldwide.

The study begins by mapping the main ethical dilemmas and moral questions associated with the deployment of AI. Special focus is placed on the effects of AI on citizens’ fundamental human rights within society. It explores the potential impact of AI on the labour market and economy, sheds light on how different demographic groups might be affected, and addresses questions of inequality and the risk that AI will further concentrate power and wealth in the hands of the few. The study addresses issues related to privacy, human rights and dignity, as well as the risk that AI will perpetuate the biases, intended or otherwise, of existing social systems or their creators. The analysis further explores the psychological impacts of AI related to dependency and deception. It also considers the potential impacts of AI on financial and legal systems – including civil and criminal law – such as risks of manipulation and collusion, and questions relating to the use of AI in criminal activities. Large-scale deployment of AI could also have both positive and negative impacts on the environment.

The study then performs a scoping review, outlining all major ethical initiatives, summarising their focus and, where possible, identifying their funding sources and the harms and concerns they tackle. By examining a wide range of initiatives, the study’s analysis reveals a growing consensus around the principles of AI accountability and auditability. Within the initiatives covered, a global convergence is emerging on the need for new standards that would define measurable and testable levels of transparency, so that systems can be objectively assessed for compliance. Particularly in situations where AI replaces human decision-making, the majority of the ethical statements adopted agree that AI must be safe, trustworthy and reliable, and act with integrity. Throughout the ethical initiatives, there is also a general recognition of the need for greater public engagement and education with regard to the potential harms of AI, and the initiatives suggest a range of ways in which this could be achieved. They also pay particular attention to autonomous weapons systems, given their potential to seriously harm society.

Through the analysis of three case studies in the domains of healthcare robots, autonomous vehicles (AVs) and the use of AI in warfare and the potential for AI to be used in weapons, the authors highlight particular ethical risks associated with AI at various stages. Their analysis enriches the current ethical AI discourse through a comprehensive appraisal of the actual ethical challenges and moral dilemmas that emerge during the process of the development and deployment of AI applications. The study further discusses emerging AI ethics standards and regulations and examines all major national and international policy strategies on AI. It highlights not only the diversity and complexity of the ethical concerns arising from the development of AI, but also the variety of approaches to and understandings of ethics. The authors identify notable gaps in the context of the current AI ethics framework, including: the consideration of environmental impacts; mechanisms of fair benefit sharing; exploitation of workers; energy demands in the context of environmental and climate change; and the potential for AI-assisted financial crime.

Based on this analysis, the authors put forward a series of policy options that centre on the need for cost-benefit studies and life-cycle analyses, which include environmental externalities, for minimum acceptable reporting requirements and new retraining programmes, and for social and financial support for displaced workers. Of particular importance are suggestions: to declare that AI is not a private good, but instead should be available for the benefit of all; to focus on those most at risk of being left behind; to make worker inputs more transparent in the end-product; and to develop appropriate support structures and working conditions for precarious workers. The authors propose the development of new forms of technology assessment, placement of the burden of proof on the developer to demonstrate safety and public benefits, and creation of a single regulatory body providing prescriptive guidance to national regulators, which could help to eliminate incoherent and conflicting sets of standards and guidance.

Overall, the study provides a useful starting point for understanding the inherent diversity of current principles and guidelines for ethical AI and outlines the challenges ahead for the global community. By shedding light on under-represented ethical principles and detailing the most important similarities and differences found across the various ethical initiatives, the study can potentially help policy-makers to establish a common ground amidst a fragmented AI ethics landscape.


Read the full study and accompanying STOA Options Brief to find out more.

About Scientific Foresight (STOA)

The Scientific Foresight Unit (STOA) carries out interdisciplinary research and provides strategic advice in the field of science and technology options assessment and scientific foresight. It undertakes in-depth studies and organises workshops on developments in these fields, and it hosts the European Science-Media Hub (ESMH), a platform to promote networking, training and knowledge sharing between the EP, the scientific community and the media. All this work is carried out under the guidance of the Panel for the Future of Science and Technology (STOA), composed of 27 MEPs nominated by 11 EP Committees. The STOA Panel forms an integral part of the structure of the EP.



Disclaimer and Copyright statement

The content of all documents (and articles) contained in this blog is the sole responsibility of the author and any opinions expressed therein do not necessarily represent the official position of the European Parliament. It is addressed to the Members and staff of the EP for their parliamentary work. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.


Copyright © European Union, 2014-2019. All rights reserved.
