Future Shocks 2023: Defending the EU’s democratic information sphere [Policy podcast]

The democratic information sphere has undergone a rapid and profound evolution over the course of the past two decades.

© European Union, 2023, EPRS

Written by Naja Bentzen.

This paper is one of 10 policy responses set out in a new EPRS study which looks first at 15 risks facing the European Union, in the changed context of a world coming out of the coronavirus crisis, but one in which a war is raging just beyond the Union’s borders. The study then looks in greater detail at 10 policy responses available to the EU to address the risks outlined and to strengthen the Union’s resilience to them. It continues a series launched in spring 2020, which sought to identify means to strengthen the European Union’s long-term resilience in the context of recovery from the coronavirus crisis. Read the full study here.

The issue(s) in short: The challenge and the existing gaps

The democratic information sphere has undergone a rapid and profound evolution over the past two decades. The public space for debate – where we not only express but also form our opinions, feeding into our individual and collective decision-making – has the potential to unite, but also to divide, people. This complex, multi-layered ecosystem is shaped by a vast number of strategic and systemic actors, whose interests sometimes prevail over democratic values and freedoms.

  • Geostrategic threats: Foreign information manipulation and interference (FIMI) – conducted in an intentional and coordinated manner – aims to manipulate political views and preferences through deceptive information. Actors engaging in FIMI are typically authoritarian state or non-state actors; Russia, China and Iran are among the most visible and increasingly aggressive. These actors coordinate their narratives – typically designed to undermine democracy – and export their tools to other countries and continents. They also use proxies, inside and outside their own territory, to further their goals while maintaining deniability. Increasing efforts to game – and thereby undermine – the multilateral system, with the United Nations as a prominent example, are part of the strategic erosion of international decision-making processes.
  • Undermining the integrity of elections – and thereby public trust in the electoral system and democratic institutions – erodes democracy as a system. Two key elections in 2024 – for the European Parliament in June and the US elections in November – will create ample opportunity for information campaigns and interference by authoritarian actors, both with a view to distracting from their own failures and to achieving a certain desired outcome. US domestic political polarisation could play into this.
  • Systemic threats: The information space is distorted – with an impact on public opinion as well as individual and collective decision-making – through the filtering, ranking and recommendation of content and interactions (for example, via newsfeed algorithms, the de/prioritisation of content, or the removal/amplification of content and accounts). Engagement is generated by algorithms that prioritise polarising content over facts, pushing users towards extremes ('algorithmic radicalisation'). This is, at least partly, a consequence of the business models of the platforms that dominate online ecosystems.
  • Societal vulnerabilities: Interlinked with systemic and strategic threats, societal weaknesses increase vulnerability to manipulative information, which uses emotional content to trigger and maintain engagement; people are more prone to deceptive messages if they feel that the system is not working for them. Mental health issues – in part triggered by social media and accelerated by the pandemic – have created a profound risk of harm to children and youths. This could feed into a vicious circle of addictive behaviour and vulnerability to deceptive information, with algorithmic curation pushing some people towards extremist messaging. Computer games are increasingly used by extremist actors to groom and radicalise users.

Across the world, there is increasing awareness of and concern about deceptive narratives spread by a growing number of state and non-state actors and enabled by global tech companies. Correspondingly, the need to find cross-border responses that take different aspects and actors into account is pressing. Globally, the EU has the potential – via its important internal market and its standard-setting power (the so-called ‘Brussels effect’) – to promote, represent and defend its democratic values, extending far beyond Europe. However, international and multilateral cooperation to tackle the multidimensional threats from actors who supply and enable information manipulation is facing a number of challenges.

Figure 51 – Proportion who saw false or misleading information about each topic in the last week – selected regions

At the same time, efforts to reduce the demand for deceptive narratives in a strategic manner would require pre-emptive responses to predictable developments. This includes addressing drivers of polarisation such as growing economic inequality – a key perceived threat to democracy – resulting, for instance, from job losses to AI-powered services. As similar developments have done in the past, this could further exacerbate status anxiety: the fear of downward mobility that has historically provided fertile ground for authoritarian tendencies and messaging. Moreover, there are increasing calls for digital and media literacy across all generations (not only young people) to equip people to detect deceptive tactics – a need expected to become more pressing with AI-powered information manipulation.

At their December 2022 summit, the EU and the US agreed to promote their values worldwide via an open, free, global, interoperable, reliable and secure internet, as reflected in the Declaration for the Future of the Internet, which has been signed by more than 70 partners so far, including the EU and its Member States. Moreover, they agreed to seek to eliminate the use of arbitrary and unlawful surveillance that targets human rights defenders. Transatlantic cooperation on internet freedom and internet governance – including within the UN framework, which would include the Member State level – will be key to advancing democratic norms and standards. As the importance of transatlantic cooperation to respond to the evolving threats to our joint information sphere will continue to grow, it will be crucial to counter the repercussions of the growing polarisation and what some experts call ‘truth decay’ in the US, which can affect legislative responses, including in Congress, and call into question important institutions involved in this work.

Position of the European Parliament

Foreign information manipulation and interference: The European Parliament has consistently, and with broad political consensus, pushed the issue of a European response to information manipulation and foreign interference to the top of the agenda, urging the EU to provide sufficient tools and resources to respond adequately and in a coordinated manner. Most recently, the second Special Committee on foreign interference in all democratic processes of the European Union, including disinformation, and the strengthening of integrity, transparency and accountability in the European Parliament (ING2), has significantly increased the visibility of the related threats, and broadened and deepened the understanding of and focus on the interlinked challenges. The scope of ING2 was further expanded in early 2023 to include threats to Parliament's integrity and transparency. Parliament's strategy for the 2024 elections includes a focus on preventing and addressing information manipulation, without interfering in political or wider social debates, and with full respect for the independence of Members' mandates.

Stricter rules on political advertising: Parliament's mandate for negotiations with the Council proposed a number of changes to the Commission's proposal – for example, excluding political views expressed under editorial responsibility from the concept of 'political advert'. MEPs also proposed banning the financing of political advertising services by sponsors residing or located outside the EU. Moreover, they called for easy access for citizens, authorities and journalists to information on political advertisements, including the creation of an online repository for online political advertisements and related data. Additional changes include reinforcing obligations for providers of political advertising services in the last month preceding an election or a referendum, banning targeting and ad-delivery techniques that involve the processing of sensitive personal data, and limiting the use of such techniques when they involve non-sensitive personal data.

Artificial Intelligence and information manipulation: In June 2023, the European Parliament adopted its negotiating position on the Artificial Intelligence Act (AI Act); MEPs expanded the list of high-risk AI systems to include AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms with more than 45 million users, as designated under the Digital Services Act (DSA). According to Parliament's position, generative foundation models, like GPT, would have to comply with additional transparency requirements. This would mean disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training. Furthermore, Parliament wants to impose an obligation on providers of foundation models to ensure robust protection of fundamental rights, health, safety, the environment, democracy and the rule of law. MEPs also added safety mechanisms, making it easier for citizens to file complaints about AI systems and to receive information about decisions based on high-risk AI systems that impact their rights. Moreover, MEPs proposed to set up an EU AI Office with its own legal personality, funding and staff, tasked with monitoring the implementation of the AI Act.

European Media Freedom Act (EMFA): Parliament’s Committee on Culture and Education (CULT) presented its draft report on the EMFA on 26 April, with the deadline for amendments set for 5 May 2023. The CULT committee’s draft report sparked criticism from a number of prominent European media freedom groups (see below). The vote on and adoption of the report in the CULT committee is planned for September 2023, followed by a vote in plenary, possibly in October 2023.

In focus: Foreign information manipulation and interference
Following the creation of the European Parliament's Special Committee on Foreign Interference in all Democratic Processes in the European Union, Including Disinformation (INGE), the European External Action Service (EEAS) developed the concept of foreign information manipulation and interference (FIMI). The visibility of FIMI-related threats increased during the pandemic and was heightened further by Russia's full-scale war of aggression against Ukraine, launched in February 2022, as well as by the global repercussions of the escalating information war. The capacity of the EEAS to address related challenges has expanded significantly since 2015, when the problem first appeared on the EU's political agenda. In addition to a more precise understanding and diagnosis of the problem – from 'fake news', to 'disinformation', to FIMI – the EEAS has been developing and improving the means to prevent, deter and respond to FIMI. It has done so in close contact and collaboration with other EU institutions, Member States, international partners such as the G7 and NATO, civil society organisations, academia, journalists, media and private industry. In addition, the three Strategic Communications Task Forces cover and help respond to FIMI activity in the Eastern Partnership, the Southern Neighbourhood and the Western Balkans. The EEAS is also working to protect its common security and defence policy (CSDP) missions abroad and to build the capacities of EU Delegations to address FIMI.

Figure 52 – Pyramid of instruments at the disposal of the EU and its Member States

EU policy responses (Commission and Council responses so far)

Foreign information manipulation and interference: The actions by the EEAS to counter FIMI constitute the core executive response to related threats. Since 2015, the EEAS has been the key EU driver behind actions to counter (Russian) disinformation, and continues to expand its remit as the main actor in developing and implementing measures and actions in line with the evolving understanding of the threats to the EU. This work feeds into the EU’s overall framework to tackle FIMI, in particular through the evolving FIMI toolbox.

Building on the 2020 European democracy action plan, the 2022 Strategic Compass called for the EEAS to further develop the EU’s FIMI toolbox, and include this in the CSDP missions and operations. In July 2022, the Council welcomed the FIMI toolbox and called for more systematic use of the full range of available tools, such as situational awareness – among others, through the Rapid Alert System and the Single Intelligence Analysis Capacity, in particular its Hybrid Fusion Cell. Most recently, the defending democracy package – announced in February 2023 – aims to cover the review of the implementation of the European democracy action plan and look into ways to further strengthen democratic resilience, taking into account the recommendations of the Conference on the Future of Europe.

Media freedom: Reflecting the acknowledgement that complementary tools are needed at EU level to counter growing politicisation of the media in some Member States, the Commission presented the EMFA in September 2022, together with a recommendation. The proposed EMFA builds on the Audiovisual Media Services Directive and seeks to set rules to protect media pluralism and independence in the EU, including safeguards against political interference in editorial decisions. In the Council, the proposal is being discussed within the Audiovisual and Media Working Party; a progress report was presented in November 2022 and a second one in May 2023 at the Education, Youth, Culture and Sport Council. In the Parliament, the draft legislative report was presented in the CULT committee in April 2023. A vote on and adoption of the report in the CULT committee is planned for September 2023, and Parliament will possibly vote in plenary in October 2023.

Addressing vulnerabilities: The Commission funds the Radicalisation Awareness Network (RAN) – an umbrella network for practitioners working on preventing radicalisation and violent extremism across Europe. RAN facilitates the exchange of ideas, knowledge and experience among field experts, social workers, teachers, NGOs, civil society organisations, victims’ groups, local authorities, law enforcement authorities and academics. Within RAN, a special Communication and Narratives Working Group (C&N) focuses on online and offline communication that counters extremist propaganda and/or challenges extremist ideas.

Political advertising: In the context of its 2020 European democracy action plan, the Commission announced its intention to complement the rules on online advertising included in the DSA through a legislative proposal on sponsored political advertising. The proposal was presented by the Commission on 25 November 2021 and draws on previous EU initiatives to ensure greater transparency in political advertising. Trilogue negotiations are ongoing.

Artificial Intelligence: The Commission unveiled a proposal for a new AI Act in April 2021, aiming to enshrine in EU law a technology-neutral definition of AI systems. The proposal tailors the rules according to four levels of risk: unacceptable, high, limited and minimal. ‘Unacceptable risk AI’ – harmful uses of AI that contravene EU values (such as social scoring by governments) – will be banned. ‘High-risk AI’ refers to systems that adversely impact people’s safety or their fundamental rights. The proposal envisages transparency obligations for systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content (‘deep fakes’). People must be informed if they interact with an AI system, or if their emotions or characteristics are recognised through automated means. If an AI system is used to generate or manipulate an image or audio or video content that appears authentic, there should be an obligation to disclose that the content is generated through automated means, with exceptions for legitimate purposes (law enforcement, freedom of expression). This would allow people to make informed choices or step back from a given situation. In June 2023, the Commission called on online platforms that are part of its strengthened code of practice to identify and label AI-generated content to tackle disinformation, to make it easier for people to spot manipulated information.

Transatlantic cooperation: The EU-US Trade and Technology Council (TTC) was created in 2021 as a key forum for EU-US coordination on trade and technology issues, grounding transatlantic cooperation in shared democratic values, in line with the 2020 'new EU-US Agenda for Global Change' (JOIN(2020) 22 final). According to the joint statement issued after the fourth ministerial meeting in Luleå on 30-31 May 2023, the EU and US agreed on shared standards for structured threat information exchange and on a common methodology for identifying, analysing and countering FIMI, to be made available to stakeholders globally. Moreover, the EU and the US agreed to explore 'further support for capacity building' in countries in Africa, Latin America and the EU Neighbourhood to counter FIMI. TTC cooperation also includes a call to action for online platforms operating in Africa, Latin America and EU Neighbourhood countries to ensure the integrity of their services and to respond effectively to disinformation and FIMI, building on the example of the EU's code of practice on disinformation.

The EU and the US also reaffirmed their commitment to a risk-based approach to AI to advance trustworthy and responsible AI technologies, and agreed to treat generative AI as a matter of urgency and to work together to swiftly produce a draft Code of Conduct for AI to be signed on a voluntary basis, including by governments from other regions. This comes in the wake of the launch of OpenAI’s ChatGPT in November 2022, which prompted mounting calls – including in the US – to urgently regulate AI. Although a number of prominent US tech actors have called for a moratorium on training large AI systems, the sector and its investors appear at the same time to be engaged in an ‘arms race’ on AI that has accelerated dramatically since the launch of ChatGPT.

Figure 53 – Timeline on defending the EU’s democratic information sphere


Obstacles to implementation of response

Geostrategic obstacles

In the face of increasing pressure on the democratic information sphere – including strategic pressure from foreign authoritarian actors who are increasingly cooperating to undermine liberal democracies – the importance of multilateral, international and transatlantic cooperation will continue to grow. At the same time – also given the important role and visibility of the European Parliament and the US Congress, especially in the context of the upcoming elections in 2024 – threats to and setbacks for democracy on both continents can not only undermine the overall credibility of democracy as a system, but also hamper global efforts to counter information manipulation and strengthen the joint information sphere. Moreover, AI-driven surveillance tools – putting journalists at risk of automated suppression – could further exacerbate this crisis of the democratic information sphere.

A related dimension of the multidimensional threats to the information sphere is the practical, strategic cooperation on – and export of – internet censorship and surveillance tools by authoritarian actors such as China and Russia, which furthers these countries' geostrategic interests by normalising the use of these technologies and standards. Against this backdrop, coordinating responses both within the EU and with like-minded partners – including NATO and the G7, as well as within the UN framework – will be of increasing importance, especially in the face of what appears to be growing coordination between authoritarian actors in the spreading of deceptive narratives. The push for more accountability and for the strengthening of democratic values in the information sphere has become an integral part of EU diplomacy in recent years. Pressure on democracies from within will continue to challenge transatlantic and international cooperation and coordination, likely making these efforts even more delicate.

Given the broad scope and the variety of actors involved in strategic and systemic challenges to the information sphere, one of the greatest obstacles to achieving a coherent and holistic response in the future will be coordination and – linked to this – an adequate level of trust between key actors that should drive the global response. Such trust issues can be further exploited and weaponised by external authoritarian state actors with a geostrategic interest in weakening the democratic response to information manipulation and interference. Internationally, differing interests can slow down or hamper the efforts to rein in some of the market forces that contribute to polarising debates. Moreover, political forces in the US that also contribute to polarising debates could hamper transatlantic efforts to counter deceptive narratives.

FIMI: Creating or amplifying fissures within democratic countries has been part of the Kremlin's toolbox for years. The use of proxies to create plausible deniability, and the export of narratives and tools to non-state actors, including within democracies, are fixtures in the authoritarian playbook. The mandate and strategic priorities of the EEAS limit its institutional focus – and the implementation of measures – to state actors, notably Russia and China. However, given the blurring boundaries between state and non-state actors, and between domestic and foreign actors, this strict focus on certain countries makes it easier for such state actors to exploit grey areas.

Systemic obstacles

Transatlantic tensions over tech regulation: In May 2023, the US Supreme Court issued two rulings that, in line with the position of the big tech companies, declined to deviate from the immunity granted to internet providers under Section 230 for the content they channel. These rulings make it clear that it is internet service providers' 'house rules', rather than public authorities or lawmakers, that govern what counts as objectionable content. In a separate development on the other side of the Atlantic, the EU's decision in May 2023 to fine Meta €1.2 billion for privacy violations exemplifies the persistent transatlantic gulf over data privacy. Ongoing turbulence at Twitter under Elon Musk's leadership has brought the clash between corporate interests and democratic values to the fore, most recently with the company's decision in May 2023 to leave the EU's voluntary code of practice on disinformation. Other online platforms have also cut staff and scaled back content moderation: Meta's decision, before the launch of Threads – a new platform that amassed over 100 million users in five days – to lay off global staff working to counter disinformation and coordinated troll campaigns has sparked concern over potential new waves of, and avenues for, information manipulation ahead of the important 2024 elections in the US and Europe. The focus in Washington on securing the US's competitive and strategic edge vis-à-vis China could increasingly be used as an argument to push back against EU regulation.

Threats to the implementation of the DSA: Some stakeholders have expressed concern that exemptions for media content in the EMFA – which is still under negotiation – could undermine the implementation of DSA provisions that would otherwise make online platforms responsible for content moderation. Under such an exemption, if a self-declared 'media' outlet published false information via Twitter, Facebook or TikTok, the platforms would first need to contact the outlet and inform it of a fact check or takedown – which would, in practice, prevent timely and effective moderation of viral disinformation.

As part of the broad spectrum of systemic obstacles, economic threats to the online media ecosystem – as noted by the European University Institute (EUI) – require action, although they go beyond the scope of the EMFA: 'economic threats that have increased in the online ecosystem of the media, in which the resources that are used to finance the media content providers – advertising – are increasingly gathered by the digital intermediaries'. The EMFA does not address public subsidies to the media, 'neither calling for them nor addressing the related risks of political interference'. The authors note that the Digital Markets Act – covering the data and online advertising market – and EU financial support programmes for the media sector address these issues, but recommend using the EMFA as an opportunity to coordinate legislative and regulatory tools.

Societal obstacles: On the demand side of viral deceptive content, anxiety, loneliness, stress and declining trust in media, democratic institutions and political leaders can increase citizens' vulnerability to deceptive, emotional messages. Societal grievances – creating fertile ground for deceptive information campaigns – could be further exacerbated by increased climate change-driven migration, rising inequality resulting from inflation and economic crises, and massive job losses following the rollout of AI large language models (LLMs). According to the World Economic Forum, some estimates suggest that 40 % of working hours could be affected, potentially resulting in a significant decline in clerical and secretarial roles. At the same time, copyright issues, news avoidance – linked to the overabundance of online information of fluctuating quality – and declining ad revenues for news media could accelerate the mounting pressure on journalism.


Policy gaps and pathway proposals

Given the cross-border nature and broad spectrum of information manipulation, following emerging narratives from multiple actors – beyond the 'usual suspects' such as China and Russia – would better equip the EU to prebunk deceptive information, rather than debunk it once it has already started to spread. Since one democratic country's domestic actors can be another country's foreign actors, expanding the mandate of the EEAS – which is, by nature, limited to external actors – to take global, cross-border and multi-actor threats into account would facilitate the forecasting of information threats. Strategic foresight that pre-empts future corrosive narratives – and shares not only findings but also forecasts with the public – can, in combination with media and information literacy targeting all age groups, strengthen prebunking efforts and boost collective cognitive resilience.

Imposing costs on state actors that engage in FIMI hinges on attribution capacity. The final ING2 report on FIMI, adopted on 1 June 2023, proposed that the FIMI toolbox should include a specific sanctions regime on FIMI, as well as measures to strengthen the attribution capacity of European institutions and national governments. MEPs underlined the corrosive phenomenon of disinformation-for-hire – services sold, typically via the dark web, to government and non-government actors to attack electoral processes – and called for a permanent body in the European Parliament to ensure effective monitoring. Moreover, they called for increased protection for media and journalists targeted by foreign powers seeking to undermine the right to information, as well as for 'mirror clauses', under which the openness of the European information space would depend on the access given to European media in other countries. In addition, MEPs called for an EU-wide regulatory system to prevent editorial control of media companies by foreign governments, and to prevent high-risk foreign countries from acquiring European media companies, using existing foreign direct investment screening mechanisms.

Coordination is key to successful cooperation at all levels – intergovernmental, interinstitutional, and with all relevant stakeholders. To this end, in February 2023 the EEAS proposed to standardise information on threat actor behaviour and infrastructure, including ensuring a consistent framework for sharing insights on FIMI incidents. Moreover, the EEAS proposed the creation of Information Sharing and Analysis Centres (ISACs) – trusted entities to foster information sharing and good practices on threats and mitigation – to pool insights from the organisations that identify and expose manipulative activity using common frameworks and standards.
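To illustrate what a standardised, shareable record of a FIMI incident could look like in practice, the sketch below builds a minimal, hypothetical incident object loosely inspired by structured threat-information formats such as OASIS STIX 2.1. All field names and values here are illustrative assumptions for the sake of the example, not the actual schema used by the EEAS, ISACs or the TTC.

```python
import json

# Hypothetical, minimal FIMI incident record (illustrative only).
# Field names are assumptions, loosely echoing STIX-style objects
# with a "type" and "id"; they do not reproduce any official schema.
incident = {
    "type": "fimi-incident",              # hypothetical object type
    "id": "fimi-incident--0001",
    "observed": "2023-05-31T12:00:00Z",
    "attributed_actor": "unattributed",   # attribution is often withheld
    "narratives": ["undermining-election-integrity"],
    "channels": ["social-media", "state-controlled-media"],
    "techniques": ["amplification-via-inauthentic-accounts"],
    "confidence": "medium",
}

# Serialising to JSON yields a machine-readable record that trusted
# entities could exchange and aggregate under a common framework.
record = json.dumps(incident, indent=2)
print(record)
```

The point of such a common format is that independent organisations exposing manipulative activity can pool and compare insights without first reconciling incompatible, ad hoc reporting conventions.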

Election integrity: Ahead of the European Parliament elections in June 2024, followed by the US elections in November 2024, efforts to encourage EU and Member State election candidates and parties to make pledges of electoral integrity – and political incumbents to pledge not to engage in online manipulative practices – could also be promoted to candidates, parties and stakeholders in the US.

AI pact until the AI Act can be applied: European Commissioner for the Internal Market Thierry Breton – following a meeting with Sundar Pichai, CEO of Google and its parent company Alphabet – announced on 24 May 2023 the decision to develop a voluntary AI pact with European and non-European companies to bridge the time until the EU’s AI Act is ready for implementation.

Media legislation: In January 2023, the European Broadcasting Union called on the EU to ‘ensure that the final EMFA will help to tackle threats to media independence and improve audiences’ ability to access the media that matters most to them, both offline and online’ by protecting and promoting the independence of media and journalists; ensuring that citizens can easily discover and find media services of general interest; and tackling arbitrary behaviour by global platforms towards media content.

In a 10-point plan to address our information crisis, the 2021 Nobel Peace Prize laureates and journalists Maria Ressa and Dmitry Muratov called on the EU to ensure that no media exemption be included in any tech or media legislation. The EUI recommended using the EMFA as an opportunity to coordinate legislative and regulatory tools, including the Digital Markets Act, covering data and online advertising.

The CULT committee's draft report on the EMFA sparked criticism from a number of prominent European media freedom groups, citing in particular the removal of almost all references to editorial independence in the proposal and the insertion of media owners' right to assume a leading editorial role (Article 6.2); the inclusion of VLOPs in the media plurality assessment and the replacement of that assessment's mandatory nature with a voluntary one (Article 21); and the failure to strengthen media ownership transparency rules (Article 6.1).

Reduce societal vulnerabilities in a strategic manner: Through a security lens, address the root causes of societal divisions and vulnerabilities, including rising economic inequality, driven in part by structural job losses.

Boosting inclusive participatory democracy: In addition to other diplomatic efforts to promote democracy, the EU could initiate a Conference on the Future of Democracy – building on the experience from the Conference on the Future of Europe – as a global exercise in participatory democracy to engage citizens worldwide in a debate on challenges to democracy, as well as potential solutions.

Transatlantic cooperation: Big tech companies and internet providers – many of them from the US – play a key role in advancing or hampering democratic norms and standards. Increased cooperation between the US and the EU – including in the EU-US Trade and Technology Council – to manage the inherent tension between interests and values at the intersection of government and the corporate sector will be key to sustaining democratic norms and standards in the long term. Such efforts will include investment in technologies that further internet freedom in the face of rampant internet censorship and surveillance – the flipside of authoritarian (state) efforts to manipulate the information sphere.

Strengthening the global information sphere and increasing cooperation and coordination with like-minded partners to boost the media ecosystem in third countries: Within the Global Gateway framework, the EU – in cooperation with like-minded democracies, including the US, the UK, Australia, Japan and Canada – could increase strategic investment in strengthening the media ecosystem across the world, including by making European news agency services available to local and regional media in the 'Global South'. Moreover, the EU could invest strategically in local news in its neighbourhoods and across the world, coordinating with and complementing ongoing efforts by democratic allies. Boosting the media and information landscape – including investment in sources that provide access to general-interest knowledge (for example, verified encyclopaedias) in key languages spoken in and beyond Europe, such as Spanish, French, Arabic and Russian – could contribute to collective cognitive resilience not only within the EU, but also in third countries.

Multilateral cooperation: Close coordination within the UN on a future Global Code of Conduct for Integrity in Public Information could improve the chances that democracies gain an edge over the increasing autocratic coordination within the UN system, which would otherwise further weaken multilateral and international decision-making. In this context, the EU could boost its diplomatic efforts to promote democracy, including promoting responsible state behaviour online. The European Parliament – as a flagship for multinational democracy, and with its significant tradition of support for democracy in mind – could play a more visible role in promoting the parliamentary dimension of such a push.

Possible action


Listen to the policy podcast 'Strategic and systemic threats to the democratic information sphere' on YouTube.

