
Anticipating the Storm: Mapping Digital Threats to the 2024 European Parliament Elections 

Background

On April 4th, DRI held a 90-minute online exchange with EU institutions, platforms, international organisations, and NGOs to assess risks to the information space ahead of this year's European Parliament elections. The 34 participants represented DRI, the European Commission (DG Connect), the European External Action Service, the Secretariat of the European Partnership for Democracy, the Institute for Strategic Dialogue, International IDEA, the Council for Media Services, Aspen Institute Germany, Global Disinformation Index, Expert Forum Romania, the German Marshall Fund of the United States, GLOBSEC, Demagog, Das NETTZ, Arcom, EDMO, DigitalHub, AI Forensics, TikTok, and YouTube.

After a brief introduction, we divided attendees into breakout groups, each focusing on a separate online threat. Participants were encouraged to share their thoughts, observations, and predictions on each topic in the context of the EP elections.

Below are the takeaways from each group: 

Hate Speech 

  • Researchers continue to find that certain groups are disproportionately affected by online hate speech, especially young women, people with a migration background, and members of the queer community. The gender component of online hate speech is observed to be very pronounced, particularly with regard to explicit content. 
  • In certain countries (for example, Slovakia), researchers are noticing more and more online hate speech directed at prominent women politicians and journalists. Platforms will sometimes dismiss or take little action against this type of hate speech, claiming that the targets' status as “public figures” disqualifies them from otherwise protected status. 
  • Analyses of hate speech often focus heavily on text (posts, comments, etc.) or explicitly illegal content. Less analysis has been done on the spread of harmful content in the form of memes, images, and videos. 
  • Memes, images, and videos are overlooked sources of hate speech because they often require in-depth knowledge of group-specific rhetoric and in-jokes, many of which can appear inoffensive to unfamiliar observers (dog whistles).  
  • Navigating country-specific definitions of hate speech is difficult for both platforms and researchers. In cases where politicians or official news outlets are the source of such speech, even if it is legal, platforms may be able to take action if it violates their own content policies.  
  • As it currently stands, it is difficult for platforms and observers to assess the impact the DSA is having on hate speech, specifically around the EP election. The next batch of regular reporting by the platforms may shed more light on the question. 
  • To reduce illegal content, politicians need to be held accountable for their speech online, especially in country contexts that receive less attention from the platforms. In the case of Slovakia, some sitting politicians have been major sources of hate speech. 

Disinformation 

  • Data access remains a key issue in the fight against online disinformation. Many EU civil society and fact-checking organisations still lack “trusted researcher” status or struggle to obtain data access under Article 40(12) of the DSA ahead of the elections, leaving them less equipped to identify sources and trends in disinformation. Almost all platforms have recently changed their data access policies. 
  • While cross-platform cooperation is foreseen in the Code of Practice, there is a sense that it is not happening systematically in practice. The link between social media and messaging apps is seen as another weakness. 
  • In some cases, researchers observed disinformation actors becoming political candidates, sometimes aligned with radical perspectives and pro-Russian disinformation narratives.
  • Some forms of AI-generated content seem to be harder to track and debunk than others. Synthetic audio and video appear to be more difficult for platforms to identify as such. 
  • A manipulated audio recording released ahead of elections spread widely on Facebook during the election silence period; it was difficult to detect quickly and circulated for hours before being taken down. 
  • As EU elections approach, citizens' trust in what the EU does generally declines (anti-EU sentiment spikes). This creates fertile ground for disinformation actors to spread false or sensational anti-EU narratives.  
  • Misinformation by chatbots, specifically their hallucination of crucial electoral information, is a major concern. 
  • Some participants pointed out that the fact-checking and CSO community should construe “disinformation” as false factual claims that may underpin a certain “narrative”, while avoiding labelling political opinions themselves as disinformation narratives. 
  • Hungary is already a concern for the EP elections: the OSCE has found previous elections there to be undemocratic. Disinformation in the Hungarian EP elections is likely to be strong. 
  • Italy will hold local municipal elections at the same time as the EP elections. Malicious actors could seek to spread disinformation about where and when to vote in the hope of confusing voters.  

Foreign Interference 

  • Any potential interference in the European Parliament elections is unlikely to take the form of a single, high-profile event; it is more likely to consist of a myriad of smaller foreign interference campaigns. In addition, it is often technically difficult to attribute a given disruption to a specific actor. 
  • One tactic expected from threat actors is cross-platform coordination, which is typically the default procedure in FIMI incidents. Existing measures, particularly within the DSA, do not adequately address this aspect, as the DSA focuses primarily on platform-specific measures. The Code of Practice on Disinformation foresees cross-platform action, but there is a sense that little of it has materialised so far. 
  • Researchers have also observed a shift from foreign actors to domestic proxies, further complicating the task of attribution. In Slovakia, for example, there is a trend of pseudo-influencers spreading conspiracy theories, which challenges traditional methods of countering disinformation in political discourse. 
  • Foreign interference remains a persistent concern on messaging apps, specifically on pro-Russian Telegram channels, which attract users who leave other platforms they consider “too regulated”. 
  • In many countries, the days before election day are fragile because of the “silence period” (usually the three days before the election), during which there should be no electoral reporting; this deters debunking and exposure efforts. During this period, it may be helpful to create a platform where CSOs, media outlets, and authorities can collaborate and share information on FIMI or cases of domestic disinformation. 
  • To this end, CSOs should consider adopting consistent data collection methods and standardised frameworks to facilitate the organisation of information.  

Paid Political Ads (PPAs) 

  • Participants pointed out that the EU's Regulation on the Transparency and Targeting of Political Advertising (TTPA) is a step in the right direction, with high expectations that it will address microtargeting, the use and misuse of data, and a lack of transparency. However, many of its more positive provisions will not be in place in time for the EP elections. 
  • In addition, these guidelines will apply only to VLOPs, yet not all malicious activity will take place on VLOPs; some will occur on smaller platforms. We need to be realistic when considering the scope of these guidelines. 
  • The DSA is also relevant for PPAs because of the risk assessments it mandates for platforms, which should shape mitigation efforts for PPAs.  
  • Participants noted that Meta's content moderation standards in Eastern Europe are low: the company does not appear to respond to invitations from Digital Services Coordinators or governments. When platforms say they take action, there is often little transparency. 
  • On platforms like TikTok, paying influencers to promote political parties often skirts platform policies against political ads. In certain contexts, such as Romania, far-right parties are taking advantage of this, with some former politicians also becoming influencers.  
  • Participants also pointed out that often it is hard to pinpoint who is behind networks of political ads. Attribution is difficult.  
  • On the demonetisation of content, civil society should act as a watchdog and platforms should be held accountable for implementing this measure. 

This paper is part of the access://democracy project funded by the Mercator Foundation. Its contents do not necessarily represent the position of the Mercator Foundation. 

About Democracy Reporting International  

DRI is an independent organisation dedicated to promoting democracy worldwide. We believe that people are active participants in public life, not subjects of their governments. Our work centres on analysis, reporting and capacity-building. For this, we are guided by the democratic and human rights obligations enshrined in international law. Headquartered in Berlin, DRI has offices in Lebanon, Libya, Myanmar, Pakistan, Sri Lanka, Tunisia, and Ukraine. 

