Scientific Foresight (STOA) / May 6, 2019

Are algorithmic transparency and accountability necessary and/or feasible?


©Rashad Ashur/Shutterstock

Written by Mihalis Kritikos.


The expected benefits of algorithmic decision systems (ADS) may be offset by the variety of risks they present for individuals (discrimination, unfair practices, loss of autonomy), the economy (unfair practices, limited access to markets), and society as a whole (manipulation, threats to democracy). A significant factor in the adoption of algorithmic systems for decision-making is their capacity to process large data sets, paired with machine learning methods that infer statistical models directly from the data. The same properties of scale, complexity and autonomous model inference, however, are linked to growing concerns that many of these systems are opaque to the people affected by their use and offer no clear explanation for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people’s human rights (e.g. critical safety decisions in autonomous vehicles, or the allocation of health and social service resources).
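To make the opacity problem concrete, consider a minimal sketch in Python (synthetic data and hypothetical feature names, not drawn from either study): a model inferred from data returns only a decision, with no rationale attached for the person it affects.

```python
# Minimal sketch of an opaque ADS: the deployed model returns a decision,
# not a rationale. All data and feature names are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical applicant features: [income, debt_ratio, years_employed]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

applicant = np.array([[-0.2, 1.1, 0.3]])
print(model.predict(applicant))  # e.g. [0]: a refusal, with no explanation attached
```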

As a result, there is growing concern that, unless appropriate governance frameworks are put in place, the opacity of algorithmic systems could lead to situations where individuals are negatively impacted because ‘the computer says NO’, with no recourse to a meaningful explanation, a correction mechanism or compensation. It is therefore necessary to establish clear governance frameworks for algorithmic transparency and accountability, to ensure that the risks and benefits are equitably distributed in a way that does not unduly burden or benefit particular sectors of society.

The European Parliament’s Panel for the Future of Science and Technology (STOA) recently published two studies on algorithmic accountability and transparency, both requested by STOA Chair, Eva Kaili (S&D, Greece). The first study, ‘Understanding algorithmic decision-making: Opportunities and challenges’, was carried out by the French National Research Institute for the Digital Sciences (Institut national de recherche en informatique et en automatique – Inria), and managed by STOA. It focuses on the technical aspects of ADS, and reviews the opportunities and risks related to their use. Beyond providing an up-to-date and systematic review of the situation, the study sketches policy options for addressing the ethical, political, legal, social and technical challenges and risks related to ADS.

The authors argue that transparency should not be seen as the ultimate solution for users or people affected by the decisions made by an ADS, since source code is illegible to non-experts. ‘Explainability’ is shown to have different meanings, and needs vary considerably according to the audience. The requirements for explainability also vary from one ADS to another, according to the potential impact of the decisions made and whether the decision-making process is fully automated. Although transparency and explainability are essential for reducing the risks related to ADS, the study argues that accountability is the most important requirement as far as the protection of individuals is concerned. Transparency and explainability may allow deficiencies to be discovered, but they provide no absolute guarantee of the reliability, security or fairness of an ADS.
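To illustrate why explanations must be tailored, here is a minimal sketch (all data and feature names are hypothetical assumptions, not taken from the study): for a simple linear model, each feature’s contribution to a single decision can be read off directly, a form of explanation that may suit an affected individual better than source code, and that has no direct equivalent in more complex models.

```python
# Minimal sketch of one simple form of explanation: per-feature contributions
# to a single decision under a linear model. Data and names are assumptions;
# most real ADS do not reduce to an additive score like this.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # hypothetical [income, debt_ratio, years_employed]
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([-0.2, 1.1, 0.3])
contributions = model.coef_[0] * applicant  # additive contributions to the log-odds
for name, c in zip(["income", "debt_ratio", "years_employed"], contributions):
    print(f"{name}: {c:+.2f}")
```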

Accountability can be achieved via complementary means, such as algorithmic impact assessments, auditing and certification. The main virtue of accountability is to put the onus on the providers or operators of an ADS to demonstrate that it meets the expected requirements. Accountability cannot provide an absolute guarantee either but, if certification is rigorous and audits are conducted regularly, potential issues can be discovered and corrective measures taken. From this perspective, the study suggests that oversight agencies and supervisory authorities should play a central role, and that it is critical that they have at their disposal all the means needed to carry out their tasks, notably the right to access and analyse the details of an ADS, including its source code and, if necessary, the training data.
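By way of illustration, here is a minimal sketch of one check an auditor might run over an ADS’s logged decisions; the log format, group labels and tolerance threshold are assumptions made for the example, not requirements drawn from the study.

```python
# Minimal sketch of a fairness audit over logged ADS decisions: compare
# approval rates across two groups. The log format and the 0.2 tolerance
# are illustrative assumptions, not a legal or technical standard.
decision_log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    # in practice, thousands of logged decisions would be audited
]

def approval_rate(log, group):
    decisions = [d["approved"] for d in log if d["group"] == group]
    return sum(decisions) / len(decisions)

gap = abs(approval_rate(decision_log, "A") - approval_rate(decision_log, "B"))
print(f"Approval-rate gap between groups: {gap:.2f}")
if gap > 0.2:
    print("Flag for further investigation by the oversight body")
```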

The second study, ‘A governance framework for algorithmic accountability and transparency’, was carried out by the University of Nottingham, and managed by STOA. The study uses a series of real-life case studies to illustrate how a lack of fairness can arise, before exploring the consequences that such a lack of fairness can have, as well as the complexities inherent in trying to achieve fairness in any given societal context. The study describes ways in which a lack of fairness in the outcomes of algorithmic systems might result from developmental decision-making and design features embedded at different points in the life cycle of an algorithmic decision-making model. A connection is made between the problem of fairness and the tools of transparency and accountability, while highlighting the value of responsible research and innovation (RRI) approaches to pursuing fairness in algorithmic systems. Central to RRI is enabling an inclusive, reflexive and accountable innovation process through the involvement of relevant stakeholders throughout the innovation life cycle. In relation to the development of algorithms, this would likely involve a contextualised consideration of an algorithm to determine the most relevant stakeholders, including the establishment of mechanisms such as stakeholder workshops and focus groups.

The study develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. It begins from a high-level perspective on fundamental approaches to technology governance, then provides a detailed consideration of various categories of governance options, and finally reviews specific proposals for governance of algorithmic systems discussed in the existing literature. Based on an extensive review and analysis of existing proposals for the governance of algorithmic systems, the authors propose a set of four policy options, each of which addresses a different aspect of algorithmic transparency and accountability: (i) awareness raising: education, watchdogs and whistle-blowers; (ii) accountability in public-sector use of algorithmic decision-making; (iii) regulatory oversight and legal liability; and (iv) global coordination of algorithmic governance.

Your opinion matters to us. To let us know what you think, get in touch via STOA@europarl.europa.eu.

