Scientific Foresight (STOA) | April 16, 2019

Does artificial intelligence threaten human rights?


Written by Mihalis Kritikos.

Asking ‘Is artificial intelligence a human rights issue?’, the workshop co-organised by STOA with the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) on 20 March 2019 gathered academic experts, non-governmental organisations (NGOs), practitioners and representatives of international organisations to share their perspectives on how artificial intelligence (AI) affects the protection and enjoyment of human rights. Despite the speakers’ diverse experiences, there was a consensus that AI poses a wide range of new risks for human rights that need to be addressed immediately. The panellists also agreed that there are, as yet, no established methodologies to track the effects of AI on human rights or to assess the potential for discrimination in the use of machine learning.

The workshop opened with a welcome address from STOA Chair Eva Kaili (S&D, Greece), who highlighted that the workshop followed up on the recently adopted European Parliament resolution on a comprehensive European industrial policy on artificial intelligence and robotics, and that its conclusions should feed into efforts to shape a socio-ethical framework for a human-centric approach to AI. She also emphasised the need to assess the capacity of the current universal human rights and EU ethical frameworks to confront emerging governance challenges in the deployment and application of AI, and argued that Europe has the opportunity to shape the direction of AI, at least from a socio-ethical perspective.

Following the Chair’s remarks, the first panel kicked off with Ekkehard Ernst, Chief Macroeconomist at the International Labour Organization, analysing the four AI inequality challenges. He argued that AI intellectual property rights should be addressed as a way to reduce the unsustainable concentration of data and AI development in the hands of a few mega-corporations. Representing the European Union Agency for Fundamental Rights (FRA), Joanna Goodey, Head of its Research and Data Unit, presented the agency’s work in this field, highlighting that the protection of human dignity should be prioritised and arguing that the use of AI in law enforcement can lead to discrimination. Dimitris Panopoulos, co-founder of Suite5 Data Intelligence Solutions, presented the initial findings of the EU-funded project ChildRescue – Collective Awareness Platform for Missing Children Investigation and Rescue, emphasising that AI can also be used to protect the human rights of vulnerable population groups such as children.


The second panel, moderated by Marietje Schaake (ALDE, the Netherlands), focused on the impacts of AI on human rights through the presentation of real-life case studies. The first panellist, Silkie Carlo, Chief Executive of Big Brother Watch, shared her experience of real cases in the United Kingdom in which the use of AI undermines the protection of the rights to privacy, freedom of expression and non-discrimination. She highlighted that flaws in the biometric facial recognition used by police in the UK can lead to misleading judgments, especially for minorities and women, and recommended that decisions that engage individuals’ human rights must never be purely automated. Lorena Jaume-Palasi, founder of the Ethical Tech Society, a non-profit organisation analysing and evaluating processes of automation and digitisation in terms of their social relevance, noted that it is not the technology itself but its use that matters, and argued for more reflective oversight structures. In her presentation, she called for a paradigm shift in our approach to information platforms’ terms of operation, and in the principles and values that determine access to platforms and the degree of their commercial character.

Lofred Madzou, Project Lead, AI & Machine Learning at the World Economic Forum, presented the work of the Center for the Fourth Industrial Revolution and analysed the policy concerns associated with AI, such as the erosion of privacy, algorithmic bias and the abuse of surveillance that could, in his opinion, ‘affect our rights to stand up and protest if AI remains unregulated’. Marietje Schaake noted that algorithmic oversight for AI is urgently needed and that it is essential to ensure that the process of embedding ethical principles and values in AI-based decision-making systems is transparent and inclusive.

The third panel focused on possible measures and remedies for safeguarding the protection of human rights in the context of AI. It was moderated by Michał Boni (EPP, Poland), who noted that Member States are gradually adopting national AI strategies, but that this could lead to regulatory fragmentation; he therefore argued for the need for ethical certainty and stability for AI. Professor Aimee van Wynsberghe, of TU Delft and a member of the High-Level Expert Group on AI, presented the preliminary results of the ongoing STOA study on a new ethical framework for AI. She noted that context and practice matter in our ethical analysis of AI, and recommended the introduction of data hygiene certification, ethics impact assessments and accountability reports. Fanny Hidvegi, of Access Now and also a member of the High-Level Expert Group on AI, presented a case study on the use of AI-powered facial recognition tools for law enforcement purposes and recommended the adoption of strict standards for government use of AI. She emphasised that the design, development and deployment of AI and any AI-assisted technologies must be individual-centric and respect human rights. Can Yeginsu, a barrister at 4 New Square Chambers, noted that AI cannot yet be left to operate without any human intervention, and that we may need to ensure individual access to justice, and even consider establishing an AI ombudsman to handle individual complaints about the use or misuse of AI.

Following the panel presentations, the audience raised several interesting questions, in particular on how to establish connections between AI, consent, surveillance and human rights, and on whether the various ongoing or planned EU-level policy initiatives (such as the ethics guidelines for trustworthy artificial intelligence produced by the European Commission’s High-Level Expert Group on AI, the Commission communication on artificial intelligence for Europe, and the European Parliament resolution on a comprehensive European industrial policy on artificial intelligence and robotics) are sufficient to safeguard a human rights perspective in the governance of AI. In response, the panellists advocated an ethics-by-design approach that would facilitate the embedding of values such as transparency and explainability in AI development. They also noted that legally binding norms, rather than soft-law instruments, are needed in the field of AI-based decision-making, and that EU legislators should consider integrating a requirement for systematic human rights impact assessments, or even developing new legal mechanisms for redress and remedy for human rights violations resulting from AI.

If you missed out this time, you can access the presentations and watch the webstream of the workshop via the STOA events page.

