Scientific Foresight (STOA), June 1, 2022

STOA study on diverging obligations facing public and private sector applications of artificial intelligence


Written by Philip Boucher.

A recently published study from the Panel for the Future of Science and Technology (STOA) identifies and examines sources of divergence in the obligations that the draft artificial intelligence act would place on public and private sector actors applying artificial intelligence (AI). It focuses in particular on AI designed to manipulate people, social scoring and biometric identification, and develops a range of policy options.

Recent reports have examined the use of AI in a range of public-sector activities, in particular for improving the efficiency and quality of public services and citizen engagement. However, concerns have also been raised about the use of AI in some specific public-sector domains, such as law enforcement. The spectre of ‘social scoring’ and ‘emotion recognition’ applications has prompted substantial debate in Europe.

The proposed artificial intelligence act (AIA) places restrictions on how certain public-sector actors may use specific AI applications, most notably on the use of real-time biometric identification for law enforcement purposes, and the use of ‘social scoring’ applications by public authorities. For example, its Article 5 would prohibit:

‘the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity’

As well as:

‘the use of “real-time” remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement’

Other elements of the proposed AI act would also target public authorities. For example, the following AI applications are defined in Annex III as ‘high risk’ (in the context of migration, asylum and border-control management) and would therefore be subject to stricter market controls:

‘AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person’.

By placing obligations specifically upon certain state actors, these articles introduce divergences in the obligations facing public and private sector actors when applying AI. These divergences raise further questions about whether private sector actors should be able to make use of real-time biometric identification, social scoring and emotion recognition techniques.

The prohibition of ‘real-time’ remote biometric identification systems for law enforcement purposes has been subject to substantial debate in the European Parliament and beyond. Some have called for the restrictions on these applications to apply to all actors, while others have called for the restrictions on state actors to be relaxed; this debate continues in the negotiations on the draft AIA.

STOA has published a study that identifies and examines these sources of divergence in the obligations facing public and private sector actors when applying AI under the draft AIA. It focuses in particular on manipulative AI, social scoring and biometric identification, and develops a range of policy options in response to the challenges identified.

The study finds that both public and private sector applications of AI could present risks of direct or indirect harm, and that there is a degree of convergence in these risks. The authors observe that the divergences amount to treating similar AI applications differently depending on the actors that deploy them, even though the risk levels associated with these uses do not differ substantially and the uses exhibit similar degrees of power asymmetry with regard to individuals.

In the final stage of the study, the authors identified three broad policy options: to address incoherence in risk assessments and introduce explicit risk criteria; to consider strengthening information and disclosure obligations with withdrawal rights; and to consider non-linear modes of governing and co-regulation strategies. The full set of policy options is set out in greater detail in the accompanying STOA options brief.

Read the full report and STOA options brief to find out more. The study was presented by its authors to the STOA Panel at its meeting on 10 March 2022.

Your opinion counts for us. To let us know what you think, get in touch via stoa@europarl.europa.eu.

