
Is artificial intelligence a human rights issue?

Written by Mihalis Kritikos.

STOA-LIBE workshop on ‘AI and human rights’, 20 March 2019 (poster)

Artificial intelligence (AI) poses new risks for human rights as diverse as non-discrimination, privacy, security, freedom of expression, freedom of association, the right to work and access to public services. The current discussion focuses on whether and how the EU could develop a human rights-based approach to AI, given that there are no established methodologies to track effects and harms on human rights, to identify who is being excluded from AI systems, or to assess the potential for discrimination in the use of machine learning.

Europe has the opportunity to shape the direction of AI, at least from a socio-ethical perspective. The EU’s latest initiatives indicate the desire of its main institutional actors to react swiftly to these major human rights challenges and to lead the development of human-centric AI. More specifically, the European Commission communication on artificial intelligence for Europe (April 2018), which launched the EU strategy on AI, made particular reference to the need to invest in people as a cornerstone of a human-centric, inclusive approach to AI, and reaffirmed support for research into human-AI interaction and cooperation. More recently, the Commission’s High-Level Expert Group on AI proposed the first draft AI ethics guidelines to the Commission, addressing values protected by the Charter of Fundamental Rights, such as privacy and personal data protection, human dignity, non-discrimination and consumer protection. The guidelines ask all stakeholders to evaluate the possible effects of AI on human beings and the common good, and to ensure that AI is human-centric: AI should be developed, deployed and used with an ‘ethical purpose’, grounded in, and reflective of, fundamental rights, societal values and the ethical principles of beneficence, non-maleficence, human autonomy and justice.

The recently adopted European Parliament resolution on a comprehensive European industrial policy on artificial intelligence and robotics makes explicit reference to the need for Europe to take the lead on the global stage by deploying only ethically embedded AI. It recommends that the Member States establish AI ethics monitoring and oversight bodies and encourage companies developing AI to set up ethics boards and draw up ethical guidelines for their AI developers, and requests an ethics-by-design approach that will facilitate the embedding of values such as transparency and explainability in the development of AI. The resolution points out that the guiding ethical framework should be based on the principles and values enshrined in the Charter of Fundamental Rights, as well as on existing ethical practices and codes.


Are these initiatives sufficient to safeguard a human rights lens in the governance of AI? Do we need legally binding norms in this field rather than soft-law instruments, or even the development of new human rights? Should EU legislators consider integrating a requirement for systematic human rights impact assessments, or even developing new legal mechanisms for redress and remedy for human rights violations resulting from AI?

The Panel for the Future of Science and Technology (STOA) and Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) are organising a workshop entitled ‘Is artificial intelligence a human rights issue?’ to discuss and evaluate the efficiency and adequacy of these EU-wide initiatives from a human rights perspective. This will be an opportunity to learn more about the effects of AI on the protection of human rights, and to participate in a debate with key experts on the subject. The workshop will open with a welcome address from STOA Chair Eva Kaili (S&D, Greece), followed by a keynote speech by Professor Jason M. Schultz of the NYU School of Law, former Senior Advisor on Innovation and Intellectual Property to the White House and author (along with Aaron Perzanowski) of ‘The End of Ownership: Personal Property in the Digital Economy’.

The keynote will be followed by three panel discussions, featuring presentations from a wide range of experts.

The first panel includes presentations from Ekkehard Ernst, Chief Macroeconomist, Research Department, International Labour Organization (ILO); Joanna Goodey, Head of Unit, European Union Agency for Fundamental Rights; and Dimitris Panopoulos, Suite5. Panel 1 will be moderated by STOA Chair Eva Kaili.

Joining Panel 2 will be Silkie Carlo, Chief Executive of Big Brother Watch; Lorena Jaume-Palasi, founder of the Ethical Tech Society; and Lofred Madzou, Project Lead, AI & Machine Learning, World Economic Forum. This panel will be moderated by Marietje Schaake (ALDE, the Netherlands).

Panel 3 includes Can Yeginsu, Barrister, 4 New Square Chambers; Professor Aimee van Wynsberghe, TU Delft, Member of the High-Level Expert Group on AI; and Fanny Hidvegi, Access Now, also a Member of the High-Level Expert Group on AI. It will be moderated by Michał Boni (EPP, Poland), who will also moderate the Q&A discussion and debate and make the closing remarks.

Interested in joining the workshop? Watch the live webstream on the STOA event page.

About Scientific Foresight (STOA)

The Scientific Foresight Unit (STOA) carries out interdisciplinary research and provides strategic advice in the field of science and technology options assessment and scientific foresight. It undertakes in-depth studies and organises workshops on developments in these fields, and it hosts the European Science-Media Hub (ESMH), a platform to promote networking, training and knowledge sharing between the EP, the scientific community and the media. All this work is carried out under the guidance of the Panel for the Future of Science and Technology (STOA), composed of 27 MEPs nominated by 11 EP Committees. The STOA Panel forms an integral part of the structure of the EP.



Disclaimer and Copyright statement

The content of all documents (and articles) contained in this blog is the sole responsibility of the author and any opinions expressed therein do not necessarily represent the official position of the European Parliament. It is addressed to the Members and staff of the EP for their parliamentary work. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.


Copyright © European Union, 2014-2019. All rights reserved.
