
From Threats to Solutions: Mobilising Experts to #DisruptDisinfo

In the era of ChatGPT and Midjourney, our Disinfo Radar project addresses head-on the impact of these AI tools on public discourse online. By analysing emerging technologies and the strategies used to spread disinformation on the web – and how to tackle them – the project seeks to help democratic societies anticipate future challenges to public discourse.

Yet no single organisation can tackle these issues alone. That is why our rotating working group format aims to mobilise stakeholders who are actively involved in designing solutions to disinformation threats.

Building on the success of last year’s #DisruptDisinfo working group, we will gather experts on disinformation, AI, hate speech, and security from various fields, academia, and government to disrupt the disinformation space of the future. The knowledge and outcomes generated in these sessions will set the tone for the second edition of DisinfoCon, taking place in Berlin on 12 September 2023.

The Latest on Our #DisruptDisinfo Working Groups

A rotating set of experts will discuss new disinformation threats and detection mechanisms, exchange recommendations and solutions, and share insights on countering disinformation and the impact of new technologies on democratic decision-making from a regional angle. Each working group will give room for participants to present ongoing projects, aiming to enhance the quality, context, and visibility of the work conducted by participating organisations. 

Fourth Working Group: Convergence and Compliance

Our fourth working session on fighting disinformation took place on 11 December and brought together a variety of esteemed experts.

During this session, experts presented their organisations’ approaches to researching disinformation and discussed their various methods for collecting data. The speakers also discussed the DSA’s effect on the scope and access of fact-checking organisations, as well as the challenge of finding funding to expand their research. A particular focus was the difficulty of building cases for systemic infringements of the DSA, and how civil society organisations dedicated to combatting disinformation could cooperate in such efforts.

Third Working Group: Data Access and Scraping

Our third working session on fighting disinformation took place on 30 October and brought together the following experts:

During the session, the experts discussed the ethical and legal challenges of accessing and scraping data both now and after the implementation of the EU’s Digital Services Act (DSA). Relevant topics included what researchers should define as “public” data, the different possible interpretations of DSA statutes, and the future legal risks researchers could face for data scraping. A common theme throughout was the continued importance of civil society organisations in both investigating issues and monitoring enforcement in a post-DSA online sphere.   

Second Working Group: Sharing Expertise on AI-Generated Disinformation and its Detection 

Our second working session, on the detection of AI-generated disinformation, took place on 18 August with esteemed experts from a range of fields:

This working group focused on manual and automated tools for detecting synthetic content – and on how to differentiate it from authentic material – that are useful for the CSO community. During the open discussion, the experts gave feedback on the robustness of a hands-on user guide that Democracy Reporting International will publish in October, shared practical experiences from their own work, and connected with peers in the field.

First Working Group: Harnessing AI for Detecting Disinformation and Combatting Hate Speech

Our first working session on AI solutions for detecting disinformation narratives and hate speech kicked off on 24 May with esteemed experts from different regions of the world: 

During the session, held under the Chatham House Rule, the experts presented their unique approaches to

  • detecting gendered hate speech in Jordan (JOSA); 
  • spotting toxic constructs by leveraging AI-powered analysis of the information space (Let’s Data); 
  • and tracking narratives to improve the detection of coordinated inauthentic campaigns on social media (Patrick Warren).  

The participants worked together on potential solutions to these challenges in order to further develop their tools and techniques for identifying and combatting toxic discourse more effectively. Lastly, Democracy Reporting International shared the latest developments of its very own Disinfo Radar Threat Registry – a classifier that aims to identify new disinformation technologies at an early stage by way of automated text analysis.

This work is supported by