Scientific Foresight (STOA) / August 3, 2021

New STOA study on deepfakes and European policy

Written by Philip Boucher.

Cutting-edge artificial intelligence (AI) techniques have enabled the production of highly realistic videos that manipulate how people look and what they appear to say or do. These fabrications are commonly referred to as ‘deepfakes’. The Panel for the Future of Science and Technology (STOA) commissioned a study to examine deepfakes and to develop and assess a range of policy options, focusing in particular on the proposed artificial intelligence act (AIA) and digital services act (DSA), as well as the General Data Protection Regulation (GDPR).

The full study report sets out the key features of deepfake technologies, their technical, societal and regulatory context, and their impacts at individual, group and societal levels, before presenting a range of policy options targeting legislative files currently under debate in the European Parliament. These options are also presented in the accompanying STOA Options Brief.

Deepfakes can be used for a wide variety of purposes, with wide-ranging impacts. They can be put to good use in media production, human-machine interactions, video conferencing, satire, creativity and some novel medical applications such as voice creation. However, they also have substantial potential for misuse. The broad range of possible risks can be differentiated into three categories of harm: psychological, financial and societal. The impact of a single deepfake is rarely limited to a single type or category of risk; rather, it tends to produce a combination of cascading impacts at different levels. Since deepfakes usually have a particular personal target, the impact often starts at the individual level, yet the harm they cause to specific groups or organisations can accumulate into widespread harm at the societal level. The infographic depicts three scenarios that illustrate the potential impacts of three types of deepfakes at the individual, group and societal levels: a falsified pornographic video; a manipulated sound clip given as evidence in court; and a false statement made to influence a political process.

In the final stage of the study, the authors identified several policy options targeting different dimensions of deepfake technologies.

Technology: The technology dimension concerns the underlying technologies and tools used to generate deepfakes, and the actors that develop deepfake production systems. Policy options in this dimension are particularly relevant in the context of the proposed AIA, and include clarifying the obligations and prohibitions placed on deepfake technology providers, limiting the spread of these technologies, developing systems to restrict their impact, and investing in education and awareness-raising among IT professionals.

Infographic: Five dimensions of policy measures to mitigate the risks of deepfakes. © Rathenau Instituut

Creation: While the technology dimension concerns the production of deepfake generation systems, the creation dimension concerns those who use such systems to produce deepfakes. Policy options here include clarifying how deepfakes should be labelled, limiting exceptions, and banning certain applications. This dimension also covers whether online anonymity could be limited for some practices, and highlights measures that harness diplomacy, international agreements and technology transfer.

Circulation: Policy options in the circulation dimension are particularly relevant in the context of the proposed DSA, which provides opportunities to limit the dissemination and circulation of deepfakes. They include measures concerning the detection of deepfakes, establishing labelling and take-down procedures, ensuring oversight of content moderation decisions, and slowing the circulation of deepfakes while increasing transparency.

Target: Malicious deepfakes can have severe impacts on targeted individuals, which may be more profound and long-lasting than those of many traditional patterns of crime. Policy options in the target dimension include institutionalised support for victims of deepfakes, and addressing authentication and verification procedures for court evidence. Several options are connected with the GDPR, including guidelines on its application to deepfakes, strengthening the capacity of Data Protection Authorities, extending the scope of personal data protection to include voice and facial data, developing a unified approach to the proper use of personality rights, and protecting the personal data of deceased persons.

Audience: Audience response is a key factor in the extent to which deepfakes transcend the individual level and have wider group or societal impacts. Policy options addressing these elements include establishing authentication systems, and investing in media literacy, a pluralistic media landscape and high-quality journalism.

Finally, these options are complemented by overarching institutional and organisational measures that support action across all five of the dimensions discussed above. These include options to systematise and institutionalise the collection of information on deepfakes, to protect organisations against deepfake fraud, and to help them identify weaknesses and share best practices.

The full set of policy options is set out in greater detail in the accompanying STOA Options Brief.

Read the full report and accompanying STOA Options Brief to find out more. The study will be presented by its authors at a STOA Panel meeting this autumn.

Your opinion counts for us. To let us know what you think, get in touch via stoa@europarl.europa.eu.
