Scientific Foresight (STOA) / May 3, 2019

The challenges of regulating disinformation with artificial intelligence

The spread of false information is not merely a reputational threat for news providers; it is a risk to democratic order. People are increasingly concerned about the influence of fake news on elections in the USA and Europe.

©Jirsak/Shutterstock

Written by Mihalis Kritikos.

The recent surge of advances in artificial intelligence (AI) has opened up new opportunities in a wide range of fields, including developments that suggest the technology can be leveraged to tackle the ‘fake news’ problem. Given the limited capacity of manual fact-checking, automated content recognition (ACR) technologies have been promoted as a means of identifying disinformation, fake news and other threats. Across the European Union, Member States, public bodies and private entities are combining technology and human expertise to tackle fake news. Because these initiatives rest on algorithms, however, they may affect freedom of expression and media pluralism, and strengthen centralised control over what can be published online, especially since the notion of ‘fake news’ is too vague to prevent subjective and arbitrary interpretation.
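Neither study prescribes a single technical design, but a common baseline behind ACR tools of this kind is supervised text classification: a model is trained on content already labelled as reliable or misleading and then flags similar new items for human review. The sketch below is a minimal, purely illustrative Python example of that idea using scikit-learn; the tiny training corpus, labels and review threshold are invented for demonstration, and any real system would need far larger, carefully curated data and, as both studies stress, human oversight of automated decisions.

    # Minimal, illustrative sketch of ACR as supervised text classification.
    # The training examples and labels are invented for demonstration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled corpus: 1 = disinformation, 0 = reliable reporting.
    texts = [
        "Miracle cure suppressed by governments, share before it is deleted!",
        "Secret memo proves the election result was decided in advance.",
        "The committee published its annual report on air quality on Tuesday.",
        "Parliament adopted the directive after a second reading last week.",
    ]
    labels = [1, 1, 0, 0]

    # TF-IDF features feeding a linear classifier: a simple, transparent baseline.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)

    # Score a new item; anything above a chosen threshold is queued for
    # human review rather than removed automatically.
    score = model.predict_proba(["Leaked files reveal the vote count was faked."])[0, 1]
    print(f"disinformation score: {score:.2f}")

Even this toy example illustrates why both studies insist on transparency and appeal mechanisms: the classifier’s judgement is only as good as its training labels, and a vague notion such as ‘fake news’ translates directly into subjective labelling decisions.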

The European Parliament’s Panel for the Future of Science and Technology (STOA) recently published two studies on disinformation and artificial intelligence. The first study, under the title ‘Automated tackling of disinformation’, was carried out by EU DisinfoLab and managed by STOA. The study, requested by STOA panel member María Teresa Giménez Barbat (ALDE, Spain), maps and analyses current and future threats from online disinformation, alongside currently adopted socio-technical and legal approaches to countering these threats. The study also discusses the challenges of evaluating the effectiveness and practical adoption of these approaches. Drawing on and complementing existing literature, the study summarises and analyses the findings of scientific studies and policy reports in relation to detecting, containing and countering online disinformation and propaganda campaigns. It traces recent developments and trends, identifies significant new or emerging challenges, and addresses the potential policy implications of current socio-technical solutions for the EU.

This study first defines the technological, legal, societal and ethical dimensions of the disinformation phenomenon, and argues strongly in favour of adopting the terms ‘misinformation’, ‘disinformation’ and ‘malinformation’ instead of the ill-defined ‘fake news’. Next, it discusses how social platforms, search engines, online advertising and computer algorithms enable and facilitate the creation and spread of online disinformation. It also presents the current understanding of why people believe false narratives, what motivates people to share them, and how such narratives affect offline behaviour (e.g. voting). Drawing on existing literature, the study also summarises state-of-the-art technological approaches to fighting online misinformation. A brief overview of self-regulation, co-regulation and classic regulatory responses, as currently adopted by social platforms and EU Member States, complements the study. In addition, the study summarises civil-society and other citizen-oriented approaches (e.g. media literacy). The authors have also compiled a roadmap of initiatives from key stakeholders in Europe and beyond, spanning the technological, legal and social dimensions. Three in-depth case studies on the utility of automated technology in detecting, analysing and containing online disinformation complement their analyses. The study concludes with policy options and identifies the stakeholders best placed to act on them at national and European level. The options include support for research and innovation on technological responses; improving the transparency and accountability of platforms and political actors over content shared online; strengthening media and improving journalism standards; and supporting a multi-stakeholder approach that involves civil society.

The second study, entitled ‘Regulating disinformation with artificial intelligence’, was carried out by Vesalius College, Brussels, and managed by STOA. The study, requested by Isabella Adinolfi (EFDD, Italy) and Stelios Kouloglou (GUE/NGL, Greece), looks at the consequences, for freedom of speech, pluralism and democracy, of using AI and ACR systems in the fight against disinformation, reviews some of the key ideas from the field, and highlights their relevance to European policy. The study examines the trade-offs involved in using automated technology to limit the spread of disinformation online and presents options, ranging from self-regulatory to legislative, for regulating ACR technologies in this context. A particular focus is the opportunity for the European Union as a whole to take the lead in setting the framework for designing these technologies in a way that enhances accountability and transparency and respects free speech.

The authors emphasise that disinformation is best tackled through media pluralism and literacy initiatives, as these allow diversity of expression and choice, and that source transparency indicators are preferable to the (de)prioritisation of disinformation. They advise against regulatory action that would encourage increased use of AI for content-moderation purposes without strong human review and appeal processes. The study argues that independent appeal and audit procedures should be introduced as soon as feasible wherever platforms moderate content and accounts. There is scope for standardising notice and appeal procedures and reporting, and for creating a self- or co-regulatory multi-stakeholder body, such as a ‘social media council’, as suggested by the UN Special Rapporteur in the first-ever UN report to examine the regulation of user-generated online content, presented to the United Nations Human Rights Council. As the Special Rapporteur recommends, this multi-stakeholder body could, on the one hand, have competence to deal with industry-wide appeals and, on the other, work towards a better understanding and minimisation of the effects of AI on freedom of expression and media pluralism. The study concludes that, given the lack of independent evidence or detailed research in this policy area, greater transparency must be introduced regarding the variety of AI and disinformation-reduction techniques used by online platforms and content providers.

Your opinion counts for us. To let us know what you think, get in touch via stoa@europarl.europa.eu

