Written by Mihalis Kritikos.
Artificial intelligence (AI) poses new risks to human rights as diverse as non-discrimination, privacy, security, freedom of expression, freedom of association, the right to work and access to public services. The current discussion focuses on whether and how the EU could develop a human rights-based approach to AI, given that there are no established methodologies to track harm to human rights, to identify who is being excluded from AI systems, or to assess the potential for discrimination in the use of machine learning.
Europe has the opportunity to shape the direction of AI, at least from a socio-ethical perspective. The EU’s latest initiatives indicate the desire of its main institutional actors to react swiftly to these major human rights challenges and lead the development of human-centric AI. More specifically, the European Commission communication on artificial intelligence for Europe (April 2018), launching the EU strategy on AI, made particular reference to the need to invest in people as a cornerstone of a human-centric, inclusive approach to AI, and reaffirmed its support for research into human-AI interaction and cooperation. Recently, the Commission’s High-Level Expert Group on AI proposed the first draft AI ethics guidelines to the Commission, which address values protected by the Charter of Fundamental Rights, such as privacy and personal data protection, human dignity, non-discrimination and consumer protection. The guidelines ask all stakeholders to evaluate possible effects of AI on human beings and the common good, and to ensure that AI is human-centric: AI should be developed, deployed and used with an ‘ethical purpose’, grounded in, and reflective of, fundamental rights, societal values and the ethical principles of beneficence, non-maleficence, autonomy of humans, and justice.
The recently adopted European Parliament resolution on a comprehensive European industrial policy on artificial intelligence and robotics makes explicit reference to the need for Europe to take the lead on the global stage by deploying only ethically embedded AI. It recommends that the Member States establish AI ethics monitoring and oversight bodies and encourage companies developing AI to set up ethics boards and draw up ethical guidelines for their AI developers, and requests an ethics-by-design approach that will facilitate the embedding of values such as transparency and explainability in the development of AI. The resolution points out that the guiding ethical framework should be based on the principles and values enshrined in the Charter of Fundamental Rights, as well as on existing ethical practices and codes.
Are these initiatives sufficient in terms of safeguarding a human rights lens in the governance of AI? Do we need legally-binding norms in this field rather than soft-law instruments or even the development of new human rights? Should the EU legislators consider the need to integrate a requirement for systematic human rights impact assessments or even for developing new legal mechanisms for redress/remedy for human rights violations resulting from AI?
The Panel for the Future of Science and Technology (STOA) and Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) are organising a workshop entitled ‘Is artificial intelligence a human rights issue?’ to discuss and evaluate the efficiency and adequacy of these EU-wide initiatives from a human rights perspective. This will be an opportunity to learn more about the effects of AI upon the protection of human rights, and to participate in a debate with key experts on the subject. The workshop will open with a welcome address from STOA Chair Eva Kaili (S&D, Greece), and a keynote speech by Professor Jason M. Schultz of the NYU School of Law, former Senior Advisor on Innovation and Intellectual Property to the White House, and author (along with Aaron Perzanowski) of ‘The End of Ownership: Personal Property in the Digital Economy’.
The keynote will be followed by three panel discussions, including presentations from a wide range of experts.
The first panel includes presentations from Ekkehard Ernst, Chief Macroeconomist, Research Department, International Labour Organization (ILO); Joanna Goodey, Head of Unit, European Union Agency for Fundamental Rights; and Dimitris Panopoulos, Suite 5. Panel 1 will be moderated by STOA Chair Eva Kaili.
Joining Panel 2 will be Silkie Carlo, Chief Executive of Big Brother Watch; Lorena Jaume-Palasi, founder of the Ethical Tech Society; and Lofred Madzou, Project Lead, AI & Machine Learning, World Economic Forum. This panel will be moderated by Marietje Schaake (ALDE, the Netherlands).
Panel 3 includes Can Yeginsu, Barrister, 4 New Square Chambers; Professor Aimee van Wynsberghe, TU Delft, Member of the High-Level Expert Group on AI; and Fanny Hidvegi, Access Now, Member of the High-Level Expert Group on AI. It will be moderated by Michał Boni (EPP, Poland), who will also moderate the Q&A discussion and debate and deliver the closing remarks.
Interested in joining the workshop? Watch the live webstream on the STOA event page.