Written by Costica Dumbrava.
Democracy relies on citizens’ abilities to obtain information on public matters, to understand them and to deliberate about them. Whereas social media provide citizens with new opportunities to access information, express opinions and participate in democratic processes, they can also undermine democracy by distorting information, promoting false stories and facilitating political manipulation. Social media risks to democracy can be classified according to five risk-generating aspects: surveillance, personalisation, disinformation, moderation, and microtargeting.
Firstly, social media provide new and more effective ways to monitor people online, which can be used by governments to target politically active citizens and silence dissent (political surveillance). Even in the absence of explicit coercion, citizens who suspect they are the target of online surveillance may suppress their political expression online for fear of retribution. The massive collection of data by social media creates privacy risks to users and may affect their capacity to form and express political opinions (loss of privacy and autonomy). The attention capture model used by social media seeks to exploit human needs and biases in order to increase engagement, but at the same time it undermines individual autonomy. Social media may also contribute to citizens’ decreasing levels of interest in politics, even if they are not directly responsible for this (political disengagement). Certain effects of social media are a by-product of a particular business model focused on engagement at all costs. This indifference of social media to democracy contrasts with the fact that they have a growing impact on democracy.
Secondly, the promotion of personalised content on social media may lock citizens in informational bubbles, thus affecting their capacity to form opinions (narrowed worldviews). Whereas content personalisation can help citizens deal with the problem of information overload, it can also limit the range of information available to them. Moreover, the segmentation of information and engagement may reinforce group boundaries and reduce opportunities for political dialogue (social and political fragmentation). Yet, despite widespread concern, existing empirical evidence suggests that the personalisation and filtering effects of social media are less severe and pervasive than initially feared. Whereas the negative political effects of personalisation seem less severe and widespread, the risk of societal fragmentation and polarisation remains. It must be noted that evaluations of the political effects of social media may also depend on political (ideological) assumptions about the nature and conditions of democratic politics.
Thirdly, the spread of false information on social media can undermine citizens’ capacity to form and express political views (distortion of political views and preferences). Despite growing evidence of people’s significant exposure to political disinformation online, the actual impact of disinformation on their views and preferences is difficult to assess. Although the reach and impact of disinformation seem to have been over-estimated, there is evidence of negative effects in particular contexts and on specific groups. Disinformation can be used to persuade or confuse voters and to mobilise or demobilise citizens to cast a vote, which may, in certain conditions, be a determinant of election outcomes (distortion of electoral outcomes). Importantly, widespread disinformation and acute public perception thereof (amplified by lack of research and inadequate reporting) may undermine trust in (all) online information and democratic institutions. Despite recent media attention being focused primarily on disinformation disseminated by foreign actors (e.g. foreign governments or intermediaries seeking to influence electoral outcomes in another country), disinformation is also spread by domestic actors (e.g. political parties and politicians seeking to influence public opinion in their own country). Sometimes, this happens as a result of entrepreneurs promoting highly engaging content to make profits from selling ads. Moreover, automated accounts and algorithms contribute to the spread of disinformation on social media (automated disinformation). However, effective disinformation campaigns are a result of a complex interaction between humans and algorithms. For example, automated tools for spreading false information exploit human biases and predispositions, such as confirmation bias, the inclination to believe repeated stories, and attraction to novel content.
Fourthly, efforts by social media platforms to tackle disinformation and other forms of deception online may undermine users’ freedom of expression and enable control over public opinion (political censorship). Whereas all moderation measures are risky, content removal is particularly problematic when targeted content is not explicitly illegal. Deleting and labelling content can be counterproductive, as it may reinforce perceptions about unfair and unjustified censorship of particular views and groups. Whereas automation can alleviate some burdens of human moderation, it can also amplify errors and automate pre-existing bias (algorithmic bias). Increased pressure on social media to tackle problematic content may push platforms to rely even more strongly on automated tools, which leads to more censorship and bias. Despite efforts to make moderation more transparent and systematic, moderation measures adopted by social media remain largely unclear, arbitrary, and inconsistently applied. The risk is that social media platforms take decisions with significant consequences for individuals and democracy without proper accountability (lack of accountability).
Fifthly, social media platforms rely on a variety of user data to profile people and sell targeted advertising (microtargeting). Whereas political microtargeting can serve to re-engage citizens in politics, it can also be used to manipulate citizens’ views and expectations (political manipulation). The covert or hidden nature of microtargeting increases the risk of manipulation and thus undercuts citizens’ capacity to form and make political choices. Political microtargeting also challenges existing electoral rules concerning transparency, campaigning and political funding, and can distort elections (distortion of the electoral process). Whereas evidence about the widespread use of political microtargeting is growing, its actual impact remains uncertain. Given the nature of political competition, it is possible that political microtargeting campaigns can determine the outcome of elections, in particular in winner-takes-all electoral systems. Even if microtargeting cannot be blamed for tipping recent elections, the risks it creates are likely to increase, given the high political and economic interests at stake and future technological advances.
The EU already has laws and policies in place to tackle many of the social media risks to democracy (for example, strong data protection rules) and is spearheading efforts to counteract new challenges (such as new legislative proposals on digital services). There are seven key approaches to tackling social media risks to democracy.
EU competition measures can be used to further combat abuses of market dominance, for example, by controlling social media platforms’ ability to integrate behavioural data from various services and advertising networks and by promoting data portability and interoperability solutions to reduce the cost of switching between platforms. Further clarification and stricter enforcement of EU data protection and digital privacy rules can help to prevent abuses of personal data and provide safeguards for fair and democratic elections. Amid widespread calls for increasing social media responsibility for promoted content, there is an ongoing reflection on the need to review and clarify EU liability rules on online content. Special attention has also been given to increasing transparency and accountability of online platforms for filtering and moderating content, including for the use of algorithms. The EU is gradually moving towards a co-regulatory approach that would require social media platforms to assume stricter transparency and accountability obligations. Specific rules are also forthcoming to prevent abuse and manipulation through targeted political advertising. Lastly, addressing the social media risks to democracy cannot succeed without empowering citizens to understand and fend off online risks, for example, by improving digital literacy, promoting citizen-centred approaches to tackling online challenges, and supporting public-oriented institutions such as independent media.
Read this ‘in-depth analysis’ on ‘Key social media risks to democracy: Risks from surveillance, personalisation, disinformation, moderation and microtargeting’ in the Think Tank pages of the European Parliament.