
Is algorithmic transparency and accountability necessary and/or feasible?

Written by Mihalis Kritikos


The expected benefits of algorithmic decision systems (ADS) may be offset by the variety of risks they present for individuals (discrimination, unfair practices, loss of autonomy), the economy (unfair practices, limited access to markets), and society as a whole (manipulation, threat to democracy). A significant factor in the adoption of algorithmic systems for decision-making is their capacity to process large data sets, paired with machine learning methods that infer statistical models directly from the data. The same properties of scale, complexity and autonomous model inference, however, are linked to increasing concerns that many of these systems are opaque to the people affected by their use and lack clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people's human rights (e.g. critical safety decisions in autonomous vehicles, or allocation of health and social service resources).

As a result, there is growing concern that, unless appropriate governance frameworks are put in place, the opacity of algorithmic systems could lead to situations where individuals are negatively impacted because 'the computer says NO', with no recourse to a meaningful explanation, a correction mechanism, or compensation. It is therefore necessary to establish clear governance frameworks for algorithmic transparency and accountability, to ensure that the risks and benefits are equitably distributed in a way that does not unduly burden or benefit particular sectors of society.

The European Parliament's Panel for the Future of Science and Technology (STOA) recently published two studies on algorithmic accountability and transparency, both requested by STOA Chair, Eva Kaili (S&D, Greece). The first study, 'Understanding algorithmic decision-making: Opportunities and challenges', was carried out by the French National Research Institute for the Digital Sciences (Institut national de recherche en informatique et en automatique – Inria), and managed by STOA. It focuses on the technical aspects of ADS, and reviews the opportunities and risks related to their use. Beyond providing an up-to-date and systematic review of the situation, the study sketches policy options for reducing the ethical, political, legal, social and technical challenges and risks related to ADS.

The authors argue that transparency should not be seen as the ultimate solution for users or people affected by the decisions made by an ADS, since source code is illegible to non-experts. ‘Explainability’ is shown to have different meanings and the needs vary considerably according to the audience. It is also important to note that the requirements for explainability vary from one ADS to another, according to the potential impact of the decisions made and whether the decision-making process is fully automated. Although transparency and explainability are essential for reducing the risks related to ADS, the study argues that accountability is the most important requirement as far as the protection of individuals is concerned. In fact, transparency and explainability may allow for the discovery of deficiencies, but do not provide absolute guarantees regarding the reliability, security or fairness of an ADS.

Accountability can be achieved via complementary means, such as algorithmic impact assessments, auditing and certification. The main virtue of accountability is to put the onus on the providers or operators of the ADS to demonstrate that they meet the expected requirements. Accountability cannot provide an absolute guarantee either, but, if certification is rigorous and audits are conducted on a regular basis, potential issues can be discovered and corrective measures taken. From this perspective, the study suggests that oversight agencies and supervisory authorities should play a central role, and that it is critical that they have at their disposal all the means they need to carry out their tasks, notably the right to access and analyse the details of the ADS, including their source code and, if necessary, the training data.
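As a minimal sketch of what one such audit check might look like (an assumption for this example, not a procedure from the study), an auditor with access to an ADS's decision log could compare approval rates across groups. The log and the 0.8 threshold (the common 'four-fifths' rule of thumb) are both invented here.

```python
def approval_rate_ratio(decisions):
    """decisions: list of (group, approved) pairs from an ADS's audit log.

    Returns the ratio of the lowest group approval rate to the highest;
    values near 1.0 suggest parity, low values flag possible disparate impact.
    """
    counts = {}
    for group, approved in decisions:
        ok, total = counts.get(group, (0, 0))
        counts[group] = (ok + int(approved), total + 1)
    per_group = {g: ok / total for g, (ok, total) in counts.items()}
    return min(per_group.values()) / max(per_group.values())

# Hypothetical decision log: group A approved 3 of 4 times, group B 1 of 4.
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]
ratio = approval_rate_ratio(log)
print(f"approval-rate ratio: {ratio:.2f}")  # an auditor might flag values below 0.8
```

A real audit would of course go far beyond a single aggregate ratio, but the sketch shows why regular access to decision logs, and if necessary source code and training data, matters for the oversight role the study describes.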

The second study, 'A governance framework for algorithmic accountability and transparency', was carried out by the University of Nottingham, and managed by STOA. The study uses a series of real-life case studies to illustrate how a lack of fairness can arise, before exploring the consequences that such a lack of fairness can have, as well as the complexities inherent in trying to achieve fairness in any given societal context. The study describes ways in which a lack of fairness in the outcomes of algorithmic systems might result from developmental decision-making and design features embedded at different points in the lifecycle of an algorithmic decision-making model. A connection is made between the problem of fairness and the tools of transparency and accountability, while highlighting the value of responsible research and innovation (RRI) approaches to pursuing fairness in algorithmic systems. Central to RRI is enabling an inclusive, reflexive and accountable innovation process through the involvement of relevant stakeholders throughout the entirety of the innovation life cycle. In relation to the development of algorithms, this would likely involve a contextualised consideration of an algorithm to determine the most relevant stakeholders, including the establishment of mechanisms such as stakeholder workshops and focus groups.

The study develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. It begins from a high-level perspective on fundamental approaches to technology governance, then provides a detailed consideration of various categories of governance options, and finally reviews specific proposals for governance of algorithmic systems discussed in the existing literature. Based on an extensive review and analysis of existing proposals for the governance of algorithmic systems, the authors propose a set of four policy options, each of which addresses a different aspect of algorithmic transparency and accountability: (i) awareness raising: education, watchdogs and whistle-blowers; (ii) accountability in public-sector use of algorithmic decision-making; (iii) regulatory oversight and legal liability; and (iv) global coordination of algorithmic governance.

Your opinion matters to us. To let us know what you think, get in touch via STOA@europarl.europa.eu.

About Scientific Foresight (STOA)

The Scientific Foresight Unit (STOA) carries out interdisciplinary research and provides strategic advice in the field of science and technology options assessment and scientific foresight. It undertakes in-depth studies and organises workshops on developments in these fields, and it hosts the European Science-Media Hub (ESMH), a platform to promote networking, training and knowledge sharing between the EP, the scientific community and the media. All this work is carried out under the guidance of the Panel for the Future of Science and Technology (STOA), composed of 25 MEPs nominated by nine EP Committees. The STOA Panel forms an integral part of the structure of the EP.

Discussion

3 thoughts on “Is algorithmic transparency and accountability necessary and/or feasible?”

  1. For algorithmic decision systems applied to public matters, the algorithms or algorithm groups should be transparent and used for analysis or as a decision reference; they cannot replace votes, opinions and the rule of law. Personal algorithmic models, on the other hand, should of course not be transparent.

    The EU could reform and simplify qualification requirements into a minimal set of e-qualifications, required under EU or constitutional law, issued via an EU e-qualification e-platform. Some of these e-qualifications could be memberships of lawful EU societies or associations, and no other certificates would be requested for positions funded by EU budgets beyond the necessary EU e-qualifications.


    Posted by Victor | May 31, 2019, 17:24

Trackbacks/Pingbacks

  1. Pingback: Artificial intelligence, data protection and elections | Vatcompany.net - May 21, 2019

  2. Pingback: Artificial intelligence, data protection and elections | European Parliamentary Research Service Blog - May 21, 2019



Disclaimer and Copyright statement

The content of all documents (and articles) contained in this blog is the sole responsibility of the author and any opinions expressed therein do not necessarily represent the official position of the European Parliament. It is addressed to the Members and staff of the EP for their parliamentary work. Reproduction and translation for non-commercial purposes are authorised, provided the source is acknowledged and the European Parliament is given prior notice and sent a copy.


Copyright © European Union, 2014-2019. All rights reserved.
