
Briefing Paper 100 (BP100): Online Threats to Democratic Debate

A Framework for a Discussion on Challenges and Responses

Michael Meyer-Resende (Executive Director) and Rafael Goldzweig (Social Media Research Coordinator) wrote this briefing paper.

Executive Summary

Online disinformation in elections has been one of the major themes of recent years, discussed in countless articles, investigations and conferences. With this paper we want to challenge some of the notions and points of focus in the debate, namely:

The Problem

The focus on elections is too narrow. The US presidential elections in 2016 pushed online disinformation into the limelight, and as a result people have often discussed it as a danger to electoral integrity. Elections are a necessary part of democracy, but by no means sufficient. Participation takes place in many other forms. People work in political parties, engage in pressure groups, and demonstrate and share their opinions in many different ways. Journalists investigate and report, politicians discuss, propose and act. These are all essential ways of engaging in a democracy and they happen daily. And every single day these processes may be affected by online disinformation. The focus then needs to be on all these aspects of democracy.

The focus on ‘disinformation’ is often unclear. Many different issues, in particular cyber-security, are conflated with disinformation. Some of these issues overlap, but they are not the same. Hacking into accounts or disabling electoral infrastructure is a major problem and not easy to defend against, but it does not raise wide-ranging normative questions. In most cases cyber-attacks are a crime, or are widely seen as crimes, and the only question is a technical one about how to prevent them. The question of democratic discourse is far more complex.

A Wider Understanding of Threats

Nothing less than democratic debate and discourse is under threat. A democracy needs a functioning public space where people and organisations freely exchange arguments. That is why freedom of expression is essential to any democracy, but it is also the reason why all democracies spend money on public broadcasting: they acknowledge that an informed public debate does not emerge through the forces of the market alone. Democratic discourse needs to be understood widely. It encompasses all exchange of arguments and opinions, in whatever form, that relates directly or indirectly to public policy choices.

Discourse that is relevant to democracies includes a wide range of activity, from discussions of deeply held beliefs (worldviews) to simple information that may not affect any opinion but that may affect politically relevant action (such as finding a polling station and deciding whether to go there, or deciding whether to join a demonstration).

Why is it necessary to start with things as far-reaching as worldviews? The answer is that democracy is premised on some common ground. It can live with many disagreements and different interests – indeed, it is designed to allow people to live together peacefully despite disagreement – but it does need some common ground. If, for example, many people believe that the Earth is flat, they are rejecting scientific evidence. Without accepting basic assumptions of science, it is simply impossible to discuss most major political questions. Again, this should not be too controversial. Democracies invest heavily in school curricula that try to establish that common understanding.

We propose a layered understanding of threats to democratic discourse that appear at different levels of opinion and behaviour formation. These range from fundamental beliefs (ethical or religious assumptions) through political ideology (conservative? socialist? ecological?) and voter choice to behavioural choices (vote or not, and where? demonstrate or not, and where?) that may not even involve a change of opinion. Threats to opinion at the deeper levels are continuous, because opinions are formed continuously. Threats to short-term choices are more likely to emerge around specific events (such as trying to deter people from voting by spreading false news about police checks at polling stations). The tech firms’ remedies have focused more on the short-term threats than on the longer-term systemic threats.

To discuss the entire panoply of challenges, we prefer the term ‘threats’ to other terms like propaganda or disinformation. The latter are mostly used with the assumption that a particular actor is actively and intentionally disinforming. But many threats to democratic discourse are unintentional. Most importantly, the entire architecture of social media and other digital services rests on choices that are full of unintended consequences for democracy. Just think of YouTube recommending videos that steer viewers towards extremist content. It recommends sensational content to keep users on the platform, but it was not designed to help extremists.

The Phenomena

‘Fake news’ has become the shorthand for all the internet’s ills. As many experts have pointed out, the term has been so abused and means so many things to so many people that it has become useless. The popularity of the term points to a deeper problem of the debate: it has centered on the question of the “message.” Is the message true or false? Is it harmful to specific persons or groups of people? Should it be censored? These are the questions that typically emerge in the debate. The focus on content as the main problem has resulted in fact-checking becoming one of the favourite remedies.

But many problems of online speech are unrelated to the message. When Russian agents bought advertising on Facebook to support the ‘Black Lives Matter’ movement, the messages were not the problem. We would not discuss them had genuine members of the movement posted them. The messenger was the problem. When bot networks amplify and popularise a theme or a slogan, the message may not be the problem, nor the messenger, but the messaging is problematic, i.e. the way the message is spread, implying a popularity that does not exist. Imagine a major street demonstration for a legitimate cause where it later turns out that most participants were robots or people paid to participate. We would consider that problematic.

We therefore propose to distinguish three phenomena that need to be discussed in their own right: the message, the messenger and the messaging (the “3 Ms”).

It's Not Only About Freedom of Speech

The focus on the message has meant that most debates centre on freedom of speech issues. Viewed through the broader lens of threats to democratic discourse across the “3 Ms”, it becomes clear that the rights issues are more complex. The blind spot of legal debates has been the right to political participation and to vote, which presupposes – in the words of the UN’s Human Rights Committee – that public discourse should not be manipulated. It turns the focus from the expression of opinions to the question of how opinions are formed, the concern that stands behind the state financing of public broadcasting. It provides a basis for discussing many of the questions related to inauthentic messengers and manipulated messaging/distribution of content. This should not be understood as a facile road to censorship, but rather as showing that concerns about social media architecture – what decisions guide what users can see – are based on a human rights concern.

1. Why This Paper?

Ever since the US elections in 2016 and the Cambridge Analytica scandal, there has been a wide-ranging debate on the threats to democracy in the digital space and particularly on social media. Countless conferences, reports and media pieces describe and analyse a large range of issues and challenges. Catchwords abound: disinformation, computational propaganda, fake news, filter bubbles, dark ads, social bots or inauthentic behaviour, to name but a few.
Building on the work of other organisations, we propose a framework to disaggregate these various phenomena more clearly. We hope that this will contribute to structuring debates and conferences, to developing practical methodologies for monitoring and responding to threats to democratic discourse online, and to discussing regulation.

2. What is the Problem?

How should one describe a desirable online discourse? The tech companies sometimes use frames borrowed from biology. Facebook, for example, often mentions ‘healthy discourse’ [1], and Twitter’s CEO Jack Dorsey has asked for help in measuring Twitter’s health. Words like ‘toxic discourse’ or ‘contamination’ abound. But biology is a bad frame for discussing threats to online discourse.

Social media and the digital sphere are created by humans. The digital space has no ‘natural’ qualities, and the idea that it does confuses the debate. For example, a widely held misunderstanding suggests that there is a natural way in which posts appear on social media platforms and that there should be no ‘tampering’ with algorithms. Nothing we see in our Facebook, YouTube or Twitter feeds is natural. It is entirely based on complex algorithms designed by humans to keep users on the platforms and to gain new users, ultimately to make the platforms more attractive to advertisers. If Facebook decides to reduce the reach of a post, it is not reducing its ‘natural’ position. It only gives it less prominence compared to other posts [2].
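To illustrate that feed order is a design choice rather than a natural state, the following minimal sketch contrasts a purely chronological feed with a simple engagement-based ranking. It is an illustration only: the Post structure, the function names and the scoring weights are invented for this example and do not describe any platform's actual algorithm.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int
    shares: int

def chronological_feed(posts):
    """The seemingly 'natural' order: newest first, no further editorial choice."""
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_ranked_feed(posts, now):
    """A toy ranking: posts that provoke reactions outrank newer but quieter ones.
    The weights are arbitrary illustrations, not any platform's real formula."""
    def score(p):
        age_hours = (now - p.posted_at).total_seconds() / 3600
        return (p.likes + 3 * p.shares) / (1 + age_hours)
    return sorted(posts, key=score, reverse=True)

now = datetime(2019, 1, 1, 12, 0)
posts = [
    Post("local_news", "Council publishes budget report", now - timedelta(hours=1), likes=4, shares=0),
    Post("outrage_page", "You won't believe this scandal!", now - timedelta(hours=6), likes=900, shares=300),
]
print([p.author for p in chronological_feed(posts)])           # ['local_news', 'outrage_page']
print([p.author for p in engagement_ranked_feed(posts, now)])  # ['outrage_page', 'local_news']
```

Which of the two orderings users actually see is a decision taken by the platform; neither exists independently of such design choices.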

There is no obvious definition of what a ‘healthy’ discourse might be. For example, in the US the limits to freedom of speech are drawn very widely and include speech that would be characterised as incitement to racial or religious hatred in many European countries. Neither approach is ‘naturally’ better; there are good arguments on both sides. Talking about online discourse through the frame of health implies that we only need to find the right formula to solve the problem, and that it may be a matter for experts more than for anyone else. There is no such formula for human debate.

Other authors suggest that the information space should be seen as an ‘order’, meaning that ‘disorder’ is a problem [3]. However, social media discourse in particular, conducted by millions of people at the same time, is disorderly, and why should it not be? What order would be appropriate and who decides? Much information on social media is irrelevant to democratic discourse, and no order is required there.

The term computational propaganda is also used and may be useful to describe specific threats, but by implying malicious intent on the part of actors, it is too narrow to describe the full range of threats to democratic discourse online. For example, the above-mentioned question of how algorithms make choices in ranking posts is not, as such, a matter of propaganda. It stems from a company’s interest in profit-making.

We propose the term ‘threats to democratic discourse’. Threats can follow from the intentional actions of people seeking to do harm, but they can also be the unintended consequences of, for example, the way that social media platforms are designed.

3. What is the Democratic Discourse?

Democratic discourse is the pluralistic debate of any issue that relates directly or indirectly to public policies. A lot of interaction on social media, such as discussion of sports or celebrities, often has no strong relation to public policy and is therefore of no particular interest for a discussion of online threats to democracy.

Then again, in recent years the threat of electoral interference has often narrowed the debate. Democratic discourse is a larger concept than electoral integrity. Political participation in a democracy is exercised around the clock, not only during elections. Citizens inform themselves, they debate (online or offline), they may demonstrate for issues or they may be active in associations or political parties. Elections are an essential element of democracy, but even the most reduced academic definition includes more than just casting votes [4]. More importantly, international law is clear on the set of political rights that make a democracy, which go beyond the right to vote and to stand in elections. They include the freedoms of association, assembly and expression, summarised as political rights [5].

Democratic discourse takes place constantly. When public discourse is manipulated, it may not only affect elections; it may equally be targeted at public policy choices. A high-profile example is the sudden, online-generated opposition to the UN Migration Pact. While opposition to the pact is legitimate in any democracy, the campaign showed elements of online disinformation: massive resistance emerged suddenly at a late stage in the process, when there had been little opposition during the long process of negotiating the pact. Online manipulation may target even deeper roots of democracy. It may attempt to turn engaged citizens apathetic, cynical or fundamentally distrustful of the entire system of democracy.

Therefore, protecting democracies means adopting a wide notion of democratic discourse. If, for example, many people start believing that the Earth is flat, a whole range of public policy debates becomes impossible (how do you discuss climate and weather patterns if you believe the Earth is not round? If many people reject the science of vaccination, how can we discuss health policies?). Worse: if people believe that all governments, scientists and journalists are part of a conspiracy to conceal the fact that the Earth is flat, they will not meaningfully participate in public discourse. These threats may not result from anti-democratic intentions.

YouTube recommends videos that are sensationalist because they are more likely to be watched (the company promised to reduce such promotion). That the Earth is flat sounds more interesting than an
explanation that it is not. Our new information infrastructure follows the rules of sensationalist tabloids to catch the attention of viewers and users. This challenges democracy.

In authoritarian states, deep distrust of institutions is a sign of realism. In democracies, scepticism towards institutions is appropriate, but if it turns into conspiratorial thinking or a rejection of facts-based debate, democracy loses its basis. It is for this reason that different levels of human reasoning and behaviour can be threatened, either by disinformation or by the way that online content is organised and presented. These levels include:

  • Worldview/Weltanschauung: The worldview is the deepest level of a personal belief system, for example a belief in rationality (even if it may not be an absolute belief), or religious, moral and ethical convictions. There are far-reaching social science debates on what a worldview is, but for our purposes it is enough to distinguish the deepest level of beliefs and assumptions about the world from political and ideological leanings. For example, a person who believes in relative human progress (“you can improve things”) may turn to various ideologies. She could be a conservative or a liberal, but would be unlikely to turn to more totalitarian ideologies. A person who believes in absolute progress (“if we try hard, everything will become ever better and at some point perfect”) is likely to turn to more utopian (or dystopian) ideologies like communism or fascism. Democratic compromise will feel like treason to that person. A person who turns to religious fundamentalism is unlikely to remain adaptable to democracy. Disinformation and other online manipulation try to weaken democracies’ deep roots at the level of worldviews. They will try to turn citizens into cynics (“I cannot do anything anyway”) or into paranoids who work against democracy (“I have to bring down the false facades”). Specific myths (such as that of a flat Earth or chemtrails) may seem crazy, but they have a destructive power, because they question everything. More insidiously, the concept of science may not be attacked, but the credibility of scientists is undermined tactically to serve a political purpose, as has been the case with climate deniers. The end result is cynicism and distrust towards a professional community that provides essential information for a facts-based democratic discourse. The same is true when such attacks directly target critical democratic institutions (“all journalists are liars”). If we identify worldviews as a specific target of influence operations, it also becomes clearer where to look for threats. For example, adolescents typically do not yet have firm worldviews, so actors who seek to shape or undermine them would look to the platforms adolescents use, such as Instagram or gaming platforms.
  • Political beliefs, ideology: Disinformation actors try to influence political beliefs and ideologies, which usually have an impact on electoral choice and general positioning in public discourse. For example, lobbyists for the coal industry may try to undermine climate scientists, reframing the perception of coal as a ‘green’ natural resource. They do not aim to change somebody’s worldview (the person still believes in the need for clean energy), but they try to change their political beliefs on a specific topic. At this stage disinformation may become propaganda. It may not present false content, but its selection is one-sided in order to build a political belief (if the only crimes that a supposed news site reports are those committed by immigrants, it serves a propaganda purpose, not a news purpose). Fake news sites with such propagandistic purposes remain one of the major challenges for Facebook. Impact at this level prepares the ground for influencing the next level of behaviour, namely electoral or other concrete political choices.
  • Electoral and other choices of political action: Disinformation may not aim to influence a political belief, but simply an electoral choice or other concrete choices. The campaign during the 2016 US presidential elections portraying Hillary Clinton as a criminal, for example, did not try to turn Democratic voters into Republican ones. It signalled to Democratic voters: even if you like that party, do not vote for this particular candidate. Operatives linked to the Democratic Party who tried to divide support for the Republican candidate in the 2017 Alabama Senate special election did not try to change voters’ political beliefs either. The Russian Internet Research Agency published posts calling for demonstrations that would not have happened otherwise. It activated existing beliefs, but it did not create or change them. Such threats usually have a more short-term horizon, for example aiming to influence a specific upcoming election.
  • Electoral behaviour: Disinformation may also try to change electoral behaviour without attempting to change voters’ minds about a candidate or a party. Examples include an ad posted during the 2018 elections in Brazil that feigned support for the Workers’ Party but indicated the wrong election day (one day too late), or misleading pictures showing police checks at polling stations in the US, potentially deterring vulnerable voter groups who fear the police.

A wide notion of democratic discourse, which includes anything from shaping worldviews to influencing specific decisions, reflects the importance that discourse has in democracies. This is not a novel idea. Almost all democracies invest significantly in public broadcasting, because they consider impartial information to be more than a commercial good and believe that citizens need to engage and be engaged in the public sphere [6].

4. Disaggregating Digital Phenomena: Message, Messenger, and Messaging

The discussion of threats to discourse on social media covers many different phenomena, which tend to be discussed all at once. The Council of Europe’s report on Information Disorder provided important guidance for this debate, but it had a strong focus on “the message”, i.e. the content that is spread online. Symptoms of a strong focus on the message are:

  • The popularity of the ‘fake news’ label
  • The focus of many discussions on fact-checking as a remedy
  • The centrality of freedom of speech in the debate

Thus, for example, the European Commission established an Expert Group on “Fake News and Online Disinformation”, which defined disinformation as “all forms of false, inaccurate, or misleading information”, in other words a message problem. Consequently, its strategy puts fact-checking at the center of its response.

The focus on the message is too narrow. Content may be unproblematic while the way it is spread is problematic. For example, the American ‘Black Lives Matter’ movement is a legitimate pressure group. When Russian agents bought ads to support it, there was no particular problem with their messages. The problem was the messenger: a foreign country secretly amplified the voice of a domestic pressure group to exacerbate tensions. When political parties resort to building elaborate bot networks to amplify their messages, the problem is often not the message (it may be unproblematic), but the manipulation of the perception of popularity. Their messages become visible, show up as ‘trending’ and suggest that an issue has much popular support. To use a comparison from the offline world: we may not be against a street demonstration, we may even join it, but we would be disconcerted if we discovered that most demonstrators were robots pretending to be humans.

It is noteworthy in this context that Facebook does not consider messaging/distribution to be the main problem (though it has changed policies in this area, too). For example, the company believes that it largely controls social bots (as long as they are not hybrids of human and automated action) by deleting such accounts. Its public reports now often focus on the take-down of inauthentic, orchestrated accounts. But Facebook says little about its own ranking decisions and what effects they may have on the display of content and thereby on shaping public opinion.

To distinguish these levels more clearly, we propose to break down the discussion of threats into three components, with the third differing from the Information Disorder report [7].

Message/content: The message is the content provided. It may be text, but it can also be a picture, a meme, a film or a recorded message. False messages are part of disinformation, and their review and possible debunking is the realm of fact-checkers. Hate speech, intimidation and incitement to violence are problems that also have to do with the message. The policies of online companies have a lot to do with content, for example the prohibition and take-down of messages containing terrorist content or nudity on Facebook.

Messenger: The person, group or organisation that created and published or posted a message. This may involve several players, for example when one person creates a message but another publishes it. Here it is important to look at phenomena such as the authenticity of messengers, their identity or anonymity, their location and their motivations.

Messaging/distribution: How is a message distributed? Here one would look at issues like the artificial boosting of content by gaming algorithms (bot networks, the tweaking of Google search results), the way algorithms rank content (Facebook, Twitter, YouTube), recommend content (YouTube) or display it (Google), as well as the boosting of content for money (targeted ads).

The third component (messaging/distribution) is useful for discussing phenomena like the algorithms that decide the ranking of posts, their manipulation (e.g. through social bots) and boosted content (targeted ads). There may be problems of distribution even if the message is not disinformation and the messenger is not problematic. Such problems include the infamous filter bubbles, the promotion of sensationalist content (even if it is not disinformation) and the trade in data to target people (even if the messages and messengers are as such not problematic).
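As a rough illustration of why messaging/distribution deserves separate scrutiny, the sketch below shows how a naive ‘trending’ metric based on raw post volume can be inflated by a small coordinated cluster of accounts, while a metric that counts each account only once tells a different story. The account names, the data and the de-duplication rule are invented for this example and do not describe any platform's actual trending logic.

```python
from collections import Counter

# Each tuple is (account_id, hashtag). Invented data for illustration only.
organic = [(f"user_{i}", "#park_cleanup") for i in range(150)]              # 150 people, one post each
coordinated = [(f"bot_{i % 20}", "#astroturf_cause") for i in range(600)]   # 20 accounts, 30 posts each

posts = organic + coordinated

def trending_by_volume(posts):
    """Naive metric: count every post. Easy to game by posting repeatedly."""
    return Counter(tag for _, tag in posts).most_common()

def trending_by_accounts(posts):
    """Slightly more robust metric: count each account at most once per hashtag."""
    return Counter(tag for _, tag in set(posts)).most_common()

print(trending_by_volume(posts))    # [('#astroturf_cause', 600), ('#park_cleanup', 150)]
print(trending_by_accounts(posts))  # [('#park_cleanup', 150), ('#astroturf_cause', 20)]
```

The point is not that the second metric solves the problem (real manipulation is far more sophisticated), but that what counts as ‘popular’ is itself a design decision made at the messaging/distribution level.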

The table below shows in more detail how the various phenomena relate to these specific levels.

The breakdown into the three Ms – message, messenger, messaging – shows that some problems of the message can only be addressed through a focus on the messenger and the messaging. For example, it is not forbidden to lie, either online or offline. Nobody should be prohibited from claiming that the Earth is flat or that the Pope endorsed Donald Trump. However, if algorithms favour such attention-grabbing false messages so that they are shown to many people, the problem can only be addressed at the level of messaging/distribution.

5. The Neglected Human Rights to Political Participation

Using the framework of the ‘3 Ms’ also exposes blind spots in the legal debate. We support putting human rights at the center of the debate, as many others have argued. As mentioned above, online discourse and its manipulation are human-made; the law provides a framework to discuss their effects and ways to shape them. Laws, too, are human-made: they are debated and consulted on, and they can change over time.

As digital content and social media are mostly global in reach, international human rights law provides an obvious starting point [8]. But the international law debate focuses mostly on the freedom of expression [9] and, to a lesser degree, on the right to privacy. Neither of these two rights provides much guidance on many questions of messaging and distribution, in particular on algorithmic preferences for certain content over other content.

The unexplored aspect is the right to political participation and to vote and to stand as a candidate in elections, enshrined in Article 25 of the International Covenant on Civil and Political Rights (ICCPR). Looking at the context in which people participate in politics, Article 25 also focuses on the forming of opinions and not only on their expression.

The UN Human Rights Committee, the monitoring body of the ICCPR, noted in its General Comment on Article 25:

“Persons entitled to vote must be free to vote for any candidate for election and for or against any proposal submitted to referendum or plebiscite, and free to support or to oppose government, without undue influence or coercion of any kind which may distort or inhibit the free expression of the elector's will. Voters should be able to form opinions independently, free of violence or threat of violence, compulsion, inducement or manipulative interference of any kind.” [10]

The mention of undue influence, distortion, inhibition and manipulative interference points to the relevance of Article 25 for the quality of public discourse. Indeed, election observation missions have found elections to be problematic not because of technical flaws or fraud in voting, but simply because the opposition did not get any (or only negative) coverage in the media.

Given that one of the major concerns about online campaigns is manipulation, such as inauthentic behaviour, Article 25 is an important point of reference. Reducing online manipulation is not a restriction of rights; it is a protective measure to secure political participation.

Importantly, the non-manipulation language should not be read as meaning that Article 25 would justify any kind of deletion of content or prohibition. However, it provides a basis for discussing whether social media companies’ (algorithmic) decisions, for example on ranking posts or on registering users, enable manipulation or make it more difficult. Yet so far it has not entered legal debates, which have focused more on the nexus of message and freedom of expression [11].

A balanced approach would therefore need to take into account freedom of expression, the right to privacy and the right to political participation across the three levels of international law, national legislation and the self-regulation of companies (or ‘co-regulation’ where states are involved in defining codes of conduct and similar commitments) [12].

6. Conclusion

The transformation of the public sphere by the digital space in general, and social media in particular, raises major questions about how to conceptualise the problem for democracy, which phenomena need to be addressed and what regulatory framework should respond to them.

In many instances the problem is described too narrowly (electoral interference through false content), when a full debate needs to look at all levels of democratic discourse, all of the time, and not only during elections. It needs to take into account the different challenges that arise at the levels of message, messenger and messaging, and look at these through the lens of multiple human rights provisions. There are not many easy and obvious answers as to what should be done to make online discourse more compatible with democracy, but a clear framework for discussion should help us get there.

References:

  1. https://newsroom.fb.com/news/2018/01/hard-questions-democracy/
  2. A strictly chronological display of a feed could be considered natural, but social media networks do not work that way. Messages are displayed chronologically on WhatsApp, hence there is no debate on algorithmic sorting in that case.
  3. See the report by Claire Wardle and Hossein Derakhshan, which brought more clarity into the debate. This paper builds on the report while proposing a different emphasis on some issues. Claire Wardle, Hossein Derakhshan, ‘Information Disorder – Toward an interdisciplinary framework for research and policy-making’, Council of Europe 2017. It can be downloaded here: https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c.
  4. Joseph Schumpeter’s ‘competitive struggle for votes’ is considered the narrowest definition, but even that is about much more than just voting. Many elections do not even qualify for this minimum definition due to the absence of real competition.
  5. For a detailed overview, see ‘Strengthening International Law to support democratic governance and genuine elections’, April 2012, Democracy Reporting International and The Carter Center. It can be downloaded here: https://democracy-reporting.org/dri_publications/report-strengthening-international-law-to-support-democratic-governance-and-genuine-elections/.
  6. For example, the BBC Charter states: “The Mission of the BBC is to act in the public interest, serving all audiences through the provision of impartial, high-quality and distinctive output and services which inform, educate and entertain.” One of its purposes is “to provide impartial news and information to help people understand and engage with the world around them (…)”.
  7. Page 7 of the report.
  8. Mark Zuckerberg has asked for new global regulation, but that is not likely to happen anytime soon. Major powers like the US, the EU, China or India do not see eye to eye on fundamental questions. Existing international law is a global framework that can guide the discussion on regulation by states as well as attempts at self-regulation by the companies.
  9. Countless policy documents on freedom of expression and the internet have been adopted at the international level in recent years. For more on this, see New Frontiers.
  10. UN Human Rights Committee, General Comment 25, 1996, point 19.
  11. Even new draft guidelines on public participation by the Office of the United Nations High Commissioner for Human Rights merely note: “Information and Communication Technologies (ICTs) could negatively affect participation, for example when disinformation and propaganda are spread through ICTs to mislead a population or to interfere with the right to seek and receive, and to impart, information and ideas of all kinds, regardless of frontiers.” (point 10). They do not make a link to opinion formation, unintentional manipulation and the normative guidance that may emanate from Article 25.
  12. Important cross-cutting rights issues affect all three rights mentioned above: non-discrimination, the right to an effective remedy and ‘business and human rights’ obligations. We will explore legal issues in more detail in another paper.

 
