
Digital revolution and legal evolution: Athens Roundtable on the Rule of Law and Artificial Intelligence

Written by Mihalis Kritikos.

Artificial intelligence (AI) is affecting the architecture and implementation of law in several ways. AI systems are being introduced in regulatory and standard-setting bodies and in courts across several jurisdictions, to advance the functions of the law and facilitate access to justice. Sound standards and certifications for AI systems need to be created so that judges, lawyers and citizens alike know when to trust and when to mistrust AI. Within this frame, several questions arise: Do we need ‘legal protection by design’? What are the legal and ethical boundaries of AI systems? Are existing legal frameworks adequate to cope with the challenges associated with the deployment of AI?

To respond to these questions, and in view of the recent launch of its new Centre for Artificial Intelligence (C4AI), STOA co-hosted the 2020 edition of the Athens Roundtable on Artificial Intelligence and the Rule of Law on 16‑17 November 2020, together with the United Nations Educational, Scientific and Cultural Organization (UNESCO) and other prominent institutions. Co-founded in 2019 by IEEE SA, the Future Society and ELONtech, the Roundtable was held virtually, from New York, under the patronage of H.E. the President of the Hellenic Republic, Katerina Sakellaropoulou. The mission of the Athens Roundtable is to advance the global dialogue on policy, practice, international cooperation, capacity-building and evidence-based instruments for the trustworthy adoption of AI in government, industry and society, viewed through the prism of legal systems, the practice of law and regulatory compliance. The two-day event, which attracted more than 700 attendees, reviewed progress on the AI governance initiatives of key participating legislative, regulatory and non-regulatory bodies, exchanged views on emerging best practices, discussed the world’s most mature AI standards and certification initiatives, and examined those initiatives in the context of specific real-world AI applications.

The event featured prominent speakers from international regulatory and legislative bodies, industry, academia and civil society. There was consensus that, to protect our democracies, it is imperative to ensure that AI is deployed in ways that do not undermine the rule of law. The speakers agreed that the trustworthy adoption of artificial intelligence is predicated on a thorough examination of the effectiveness of AI systems and on constant review of their legal soundness, especially in high-risk domains. This is critical to ensure that societies capture the upsides of AI while minimising its downsides and risks. During the discussion, the use of algorithmic systems to support, or even fully assume, decision-making in legal questions directly affecting humans emerged as a key issue: ‘black box’ algorithms, possibly developed on the basis of biased data and with no clear chain of accountability, should be considered unacceptable. The representatives of all major international organisations agreed on the need for a strengthened working relationship between the EU, the Organisation for Economic Co-operation and Development (OECD), UNESCO and the Council of Europe as a critical success factor in establishing impactful governance frameworks and protocols that smartly leverage the entire policy toolbox, from ‘self’ to ‘soft’ and ‘hard’ regulation.

In both her opening and closing remarks, STOA Chair Eva Kaili (S&D, Greece) highlighted that Europe should lead these efforts and pave the way for a legal framework on human-centric AI, similar in scope to the General Data Protection Regulation (GDPR), and for the development of commonly agreed metrics for ethical AI. In her view, the rule of law will have to mean preventing governments and big corporations from using AI technologies to gain access to citizens’ sensitive personal data, or from using perception-manipulation techniques to that end. The panellists also agreed that enhanced algorithmic scrutiny is necessary, combined with a thorough assessment of the quality of such computer-based decision-support systems with regard to their transparency, the provision of meaningful accountability and the minimisation of bias.

The discussion also focused on the various ways in which AI can be regulated, as well as on how algorithmic decision-making systems can be controlled and audited, including the methodologies needed to analyse automated systems for possible flaws and to identify common approaches to risk calibration. Carl Bildt, former Prime Minister of Sweden, recommended that the EU cooperate closely with organisations such as UNESCO to specify its ethical principles and create, along with its transatlantic partners, equivalent systems of trust for all parts of society. Algorithmic bias in legal and judicial environments was discussed across almost all panels, and most recommendations converged on the need to build AI systems that are as diverse as our societies, given that technology can become a magnifier of social inequalities.

The speakers also emphasised the need to intensify efforts to regulate weaponised AI and to reach international agreement on definitional issues and the red lines to be drawn when developing and deploying AI applications in critical domains. In several sessions, training and education to enhance algorithmic literacy were put forward as a key requirement for safeguarding citizens’ trust, as well as for allowing users to exercise, in a meaningful way, their right to be forgotten, their right to an explanation when their data are used by AI algorithms, and their right to redress against decisions made by AI systems. Several regulators also highlighted the mismatch between the traditional regulatory approach and the fast pace of technological development in AI, which points to the urgent need to introduce smart regulatory instruments, including ethical impact assessments.

In her concluding remarks, STOA Chair Eva Kaili underlined that a privacy-by-design and ethics-by-design approach should be followed throughout the entire lifecycle of AI systems, from initial development to actual implementation, especially in the legal domain. In a period of intense digital interdependence, where AI strategies and ethical principles are increasingly adopted at organisational level worldwide, multi-stakeholder engagement such as the Athens Roundtable is critical to identifying and disseminating widely adopted practices for operationalising trustworthy AI.

The full recording of the meeting is available here.

About Scientific Foresight (STOA)

The Scientific Foresight Unit (STOA) carries out interdisciplinary research and provides strategic advice in the field of science and technology options assessment and scientific foresight. It undertakes in-depth studies and organises workshops on developments in these fields, and it hosts the European Science-Media Hub (ESMH), a platform to promote networking, training and knowledge sharing between the EP, the scientific community and the media. All this work is carried out under the guidance of the Panel for the Future of Science and Technology (STOA), composed of 27 MEPs nominated by 11 EP Committees. The STOA Panel forms an integral part of the structure of the EP.



