Members' Research Service / May 19, 2022

Future Shocks 2022: Building a healthier online environment for healthy democracies [Policy Podcast]


Written by Tambiama Madiega and Costica Dumbrava.

This paper is one of 11 policy responses set out in a new EPRS study which looks first at 15 risks facing the European Union, in the changed context of a world coming out of the coronavirus crisis, but one in which a war has been launched just outside the Union’s borders. The study then looks in greater detail at 11 policy responses the EU could take to address the risks outlined and to strengthen the Union’s resilience to them. It continues a series launched in spring 2020, which sought to identify means to strengthen the European Union’s long-term resilience in the context of recovery from the coronavirus crisis. Read the full study here.

The issue in short: The challenge and the existing gaps

The last two decades have been marked by the unprecedented development of the online world, giving rise to new ways to work, shop, socialise and spend time online. This has provided new opportunities for citizens to access political information, discuss issues, and engage in politics, as well as new possibilities for political actors to influence public opinion, mobilise people and organise electoral campaigns. Online ecosystems are thus likely to have a great impact on democratic politics and on broader democratic institutions in societies in the EU and elsewhere.

Online environments and digital technologies underpinning them, such as algorithmic decision systems, pose several key challenges to democracy, including:

  • Distortion of public opinion through online filtering, ranking and moderation of content and interactions (e.g. via newsfeed algorithms, de/prioritisation or removal of content, polarisation of views, suspension of accounts). This is, at least partly, an unintended consequence of the business models and technologies supporting online ecosystems.
  • Manipulation of political views and preferences through online disinformation – the deliberate use of algorithms, bots, trolls, deep fakes, etc. to spread false content. Whereas disinformation is a result of a complex interaction between people and ecosystems, manipulative algorithmic systems play a key role in amplifying disinformation.
  • Distortion of electoral competition through deceptive online messages and political adverts – the use of intrusive and covert techniques to persuade or confuse voters, or to dissuade them from casting their votes (political microtargeting). The speed and efficiency of online political campaigns, which rely on extracting and analysing troves of data to target highly specific groups, increase the negative impact of disinformation and manipulative ads.
  • Weakening the integrity of elections through foreign interference and cyber-attacks – concerted campaigns to distort opinions, influence election results and undermine electoral institutions and infrastructure. Cyber threats and the manipulative influence of foreign governments and media on elections in the EU are becoming a destabilising factor for EU democracies.

The EU population is well aware of the magnitude of such challenges. A 2019 Eurobarometer survey showed that more than half of Europeans who use the internet say they have been exposed to or personally witnessed disinformation online. Nearly four out of ten Europeans have been exposed to content where they could not easily determine whether it was a political advertisement or not, and nearly six in ten Europeans are concerned about the possibility of ‘elections being manipulated through cyberattacks’.

A 2019 study for the European Parliament showed how major disinformation campaigns have, in recent years, interfered with democratic processes, particularly elections and referenda. The deployment of hybrid threats by Russia, with massive disinformation campaigns and cybersecurity attacks, including more recently during the ongoing war in Ukraine, provides a live example of the potential detrimental impact of such actions on democracy. To ensure that democratic debate and future elections take place under the highest democratic standards, the EU needs to build a healthier online environment. This would require measures to fill in existing policy gaps, including: safeguarding the integrity of elections in the EU; establishing an adequate regulation and institutional oversight of how algorithms are used for political purposes; equipping citizens with skills and tools to fend off online disinformation and manipulation; increasing cyber resilience of electoral processes and infrastructures; and promoting healthy online environment standards worldwide.

Existing policy responses

EU action

In recent years, the EU has made active efforts to ensure a safer online environment. The EU approach builds on four axes: strengthening digital platforms’ self-regulation; imposing a set of mandatory rules on the biggest online actors to ensure a safer online environment; regulating online political advertising; and reinforcing EU capacities to tackle disinformation and cyber threats.

Strengthening self-regulation of online platforms. In 2018, the EU published an action plan against disinformation and adopted a Code of Practice on Disinformation, asking online entities such as platforms, major social networks and advertisers to address the spread of disinformation. A wide range of companies, including Facebook, Google, Twitter, Microsoft and TikTok, have implemented the EU Code and committed, on a voluntary basis, to fight disinformation. In December 2020, the Commission presented its European democracy action plan to empower citizens and build more resilient democracies across the EU, to be gradually implemented until 2023 – a year ahead of the elections to the European Parliament. In this action plan, the Commission envisages revising the Code of Practice on Disinformation to introduce new measures, including reducing financial incentives for disinformation, empowering users to take an active role in preventing its spread, and cooperating better with fact-checkers across EU Member States. The revised Code is intended to serve as part of a co-regulatory framework with the Digital Services Act to help platforms mitigate risks stemming from disinformation.

Mandatory rules on online platforms. The Digital Services Act (DSA) proposal, tabled in December 2020, aims to create a safer and trusted online environment and set EU-wide rules to ensure transparency, accountability and institutional oversight of the EU online space. A set of new rules to be imposed on online platforms includes transparency obligations to mitigate the adverse effects of online advertising on citizens. Furthermore, it is proposed that very large online platforms (VLOPs) be subject to tighter obligations, given the particular impact they have on the economy and society and their potential responsibility regarding the dissemination of illegal content and societal harms. Such companies will be required to assess the systemic risks stemming from the functioning and use of their services, and especially the intentional manipulation of their services, for instance through the creation of fake accounts and the widespread dissemination of information having a negative effect (e.g. on electoral processes). The new set of rules constitutes a step towards more cooperative and regulatory mechanisms in line with the European democracy action plan. The DSA proposal is currently the subject of protracted negotiations by the co-legislators.

Regulation of online political advertising. The Commission adopted a Proposal for a Regulation on the transparency and targeting of political advertising in November 2021 as a follow-up to the European democracy action plan. This initiative covers both online and offline activities and complements the proposal for the DSA. The new rules would require any political advert to be clearly labelled as such and to include information such as who paid for it and how much. In addition, political targeting and amplification techniques would need to be explained publicly in detail and would be banned when using sensitive personal data without the explicit consent of the individual.

Reinforcing EU capacities to tackle cyber threats and disinformation in an international context. The 2016 EU framework on countering hybrid threats is being complemented by a number of initiatives to better protect democratic processes from manipulation by third countries or private interests. The EU's East StratCom Task Force, created in 2015, has since been reinforced to counter disinformation by the Russian Federation and its affiliates throughout Europe. In December 2020, the Commission and the High Representative of the Union for Foreign Affairs and Security Policy presented a new EU cybersecurity strategy aiming to bolster Europe's collective resilience against cyber threats and ensure that all citizens and businesses can fully benefit from trustworthy and reliable services and digital tools. In this context, two legislative proposals – a Directive on measures for a high common level of cybersecurity across the Union (NIS 2) and a Directive on the resilience of critical entities – are being finalised. In addition, Cyber Rapid Response Teams and the European Digital Media Observatory have been set up to assist Member States, so as to ensure a higher level of cyber resilience and to respond collectively to cyber incidents and disinformation. The EU has also shown its ability to adopt extraordinary measures: in a recent Council decision imposing sanctions, it suspended the broadcasting activities of the main Russian state-controlled media outlets in the Union. However, more action is being advocated: the efficacy of such measures in tackling disinformation on a larger scale has been questioned, and recent research shows how disinformation about the ongoing conflict in Ukraine is being funded by online advertising.
Against this backdrop, in March 2022 the Council called for more action at EU level to ensure resilience of electronic communications infrastructure and networks in Europe, including more cooperation at operational level, the adoption of the forthcoming Cyber Resilience Act and the creation of a cybersecurity emergency response fund.

Figure 46: Key measures to ensure a healthier online environment for healthy democracies

Obstacles to implementation

The development of digital environments has inevitably challenged legal frameworks that were devised to tackle pre-digital issues, e.g. traditional media, ‘paper’ advertising, TV-based electoral campaigning. The consolidation of a business model based on the accumulation and monetisation of data, along with insights from digital interactions and growing concern about the negative impact of this model on users and society, has put tremendous pressure on legislators to intervene.

However, uncertainty persists about the extent of the problem, the type of interventions that would be suitable and their broader implications, and the right balance between competing rights and interests (e.g. freedom of expression versus protection of democratic institutions). A key issue is the limited access to data and the scarcity of systematic research (in particular outside the US context) on the impact of online platforms and algorithms on individuals’ rights, social interactions and political institutions.

Tackling online disinformation has proved particularly difficult, requiring concerted efforts by online platforms, regulators, civil society organisations, users and others. The EU's self-regulatory approach to tackling disinformation – based on voluntary standards and commitments – has, as the Commission acknowledged, led to limited results. The process for revising the Code of Practice on Disinformation has been slow and was not finalised in 2021 as expected.

Furthermore, there are continuing issues with efforts to protect the democratic process and to safeguard the integrity of elections. A key challenge is that the regulation of elections in the EU consists of a patchwork of EU and national rules, which makes it more difficult to adopt a coherent response to common challenges. Increased coordination at national level (e.g. through national elections networks of relevant competent authorities) and at EU level (e.g. via the European cooperation network on elections and the Rapid Alert System) has been helpful but more needs to be done to tackle foreign interference, cyber-attacks and electoral manipulation. Uncoordinated efforts by Member States to regulate political advertising may obstruct the exercise of fundamental freedoms, with a direct effect on the functioning of the internal market. The complexity of issues and the heterogeneity of rules also create enforcement challenges, as national competent authorities struggle to monitor, discover, and sanction transgressions of rules.

Another layer of complexity concerns the transnational and global nature of online environments, which may require regulators to carefully consider the global implications of proposed interventions and to engage with other legislators around the world.

Policy proposals by experts and stakeholders

Whereas many of the suggestions offered by experts and stakeholders have been taken up in recent EU actions and proposals (such as increasing the transparency and oversight of online platforms and better regulating online ads), there are several proposals that go beyond current discussions.

1. Safeguarding the integrity of elections in the EU

There are voices arguing for a tougher stance on targeted political advertising. According to the European Data Protection Supervisor (EDPS), data protection safeguards are also a prerequisite for fair and democratic elections. The authors of a 2019 study called on data protection authorities to step up their investigations into political microtargeting practices by advertisers, digital platforms and intermediaries. This could be part of a broader approach aiming to give users more power over their data collected online. In its 2022 Opinion on the Commission’s proposal on political advertising, the EDPS recommended a full ban on microtargeting for political purposes. Such a ban has been supported by other stakeholders, such as the European Partnership for Democracy.

Together with other transparency requirements, such as those included in the proposals on the DSA and on political ads, some argued that users should have access to a repository of political and public issue ads that they are targeted with. To minimise the impact of disinformation on European democracy, a 2021 study further recommends regulating political and issue-based advertising at EU level and granting the European Court of Auditors and the European Anti-Fraud Office (OLAF) powers to pursue the investigation of campaign finances, including sponsorship of social media advertisements. Another suggestion is to make contracts between political parties and platforms open for public scrutiny. These could be part of a ‘universal advertising transparency by default’ approach, which was advocated by a large group of NGOs.

Considering the negative effects of automated disinformation, some argued for a ban on the use of automated accounts (bots) to disseminate political and public issue ads. To increase cyber resilience, the European Union Agency for Cybersecurity (ENISA) recommended imposing a legal obligation on political organisations to ensure a high level of cybersecurity in their systems, processes and infrastructure. It also suggested classifying election systems, processes and infrastructure as critical infrastructure, so that they become subject to stricter EU cybersecurity requirements.

2. Enhancing regulation and institutional oversight of algorithms used for political purposes

In recent years, there have been plenty of discussions about the kind and breadth of oversight mechanism needed to ensure a healthy online environment. A 2021 study suggested establishing an accountability framework (beyond the DSA proposal) that would include a new authority for online content platforms to supervise the process and organise relations with the various stakeholders, including the community of vetted researchers and relevant NGOs. Other proposals advocating for a specific institutional oversight mechanism include establishing a new EU agency for countering disinformation to better coordinate the EU’s counter-disinformation initiative, and creating a new regulatory body for political advertising.

3. Supporting citizens, civil society, and research

Another set of suggestions focuses on empowering citizens and promoting tools to identify and mitigate online risks. Measures in this category include awareness-raising, improving media literacy, and supporting investigative journalism and fact-checking services. An important element of this strategy is also enabling researchers, more broadly, to access data for research on the impact of the online environment and automated tools on democracy. Another suggestion is to increase diversity of exposure to online information by promoting a diversity-by-design principle, whereby users are encouraged to explore different kinds of information from those they usually prefer. A further suggestion is to create a common European high-quality media service transmitted by contemporary technology, with a view to enhancing EU cohesion and offering a common European perspective. The Commission launched an expert group on disinformation and digital literacy to assist it in preparing common guidelines, to be published in autumn 2022, for teachers and educators to tackle disinformation and promote digital literacy through education and training.

4. Promoting EU standards worldwide

The question of how to address the challenges to democracy in an online environment is widely discussed around the globe, and the EU could help steer a common approach at international level. In this respect, a 2020 study from the Council of Europe recommends considering the establishment of an informal intergovernmental taskforce to facilitate the regular exchange of ideas, practices and legislative and regulatory measures on the influence of foreign media on national elections. Such a taskforce could also work on a common code on political advertising to be applied to licensed services. Achieving harmonisation of standards in this area beyond the EU could contribute to the cooperation between regulators and also help to combat problems of foreign interference. A 2019 study argues for enhanced transnational cooperation with a view to establishing a coherent global framework to regulate disinformation (including at G7 and OECD level). The intensification of the US-EU dialogue on technology governance could also lead to a global oversight framework, based on various public bodies and on a multi-stakeholder approach.

Beyond intergovernmental level, a 2021 European Parliament analysis suggested that the EU support the creation of a new ‘Transparency International for Disinformation’, a dedicated civil society organisation to independently monitor and provide comparable data about disinformation campaigns from target countries.

Position of the European Parliament

Parliament has long supported EU initiatives to regulate digital platforms and political advertising and to reinforce EU capacities to tackle disinformation and cyber threats. In its 2018 resolution on the use of Facebook users' data by Cambridge Analytica, Parliament called for a range of measures, including adapting the electoral rules on online campaigning (i.e. those pertaining to transparency of funding, election silence periods, the role of the media, and disinformation) and monitoring the transparency features introduced by online platforms in relation to political advertising.

In its 2019 resolution on foreign electoral interference and disinformation, Parliament called on the EU to create a legal framework for countering hybrid threats, classify equipment used for elections as critical infrastructure, and turn the East StratCom Task Force into a permanent structure with more funding.

In the context of the ongoing DSA negotiations, Parliament asked for more transparency over algorithms to fight harmful content and disinformation and for more transparent and informed choices for the recipients of targeted advertising. Also, on 9 March 2022 Parliament’s Special Committee on Foreign Interference in Democratic Processes adopted its final report on malicious foreign interference, asking the Commission to propose a more coordinated European strategy to counter operations by foreign governments that use disinformation. Parliament recommends the creation of a European centre to tackle interference threats, as well as stronger measures to address disinformation on online platforms such as forcing social media platforms to stop boosting inauthentic accounts that drive the spread of harmful foreign interference. Furthermore, Parliament called for the introduction of new measures to ensure cybersecurity and resilience against cyber-attacks, deterrence and countermeasures, and for the protection of critical infrastructure and strategic sectors.

In focus: reinforcing internet capacity and security
To avoid the capacity crunch and keep ahead of the growth in internet traffic, the EU’s strategy is primarily directed at boosting investment in high-capacity broadband infrastructure. To that end, in 2021 the EU set connectivity targets in its Digital Decade strategy and adopted a range of new funding instruments including CEF Digital, which is designed to support the roll out of 5G networks throughout the EU. The Commission is also putting forward an ambitious plan for a space-based secure communication system and satellite traffic management to ensure secure and resilient connectivity across Europe in the years to come.
Furthermore, in 2020 the EU adopted the new EU cybersecurity strategy to better tackle cybersecurity threats that are at the core of internet outage and to safeguard a global and open internet. The amendment of the NIS Directive that is being finalised will impose new cybersecurity requirements on essential entities, including providers of internet services such as the Domain Name System (DNS).
In addition, the EU wants to lead discussions at international level to shape the development of the internet as a space of civic responsibility. The European Commission actively engages in multilateral discussions to shape a resilient, secure and robust internet and promote democracy and human rights. The European Global Gateway initiative launched in December 2021 aims, inter alia, to boost smart, clean and secure digital links around the world. Accordingly, the EU intends to work with partner countries to invest and deploy digital networks and infrastructure (such as submarine and terrestrial fibre-optic cables, space-based secure communication systems and cloud and data infrastructure) to plug vulnerabilities and provide trusted internet connectivity around the globe.

Possible action

Listen to the policy podcast 'Future Shocks 2022: Building a healthier online environment for healthy democracies' on YouTube.


