If navigating the internet feels increasingly confusing to you (as it does to us), we have bad news: it might get worse. The tools, tactics, and narratives used to spread online disinformation are getting an upgrade, powered by new technology and more sophisticated strategies. In the hands of malign actors, these capabilities could usher in a new chapter of online disinformation, muddying our public discourse even further and tilting opinions towards extremist views.
If we want to understand the future of online disinformation, we need to start analysing emerging trends and technologies. This is the aim of our DisinfoRadar project, which investigates the roots of new disinformation practices to inform experts and policymakers of the disinformation toolkit of tomorrow.
Our Rapid Response Briefs identify the disinformation tools, tactics, and narratives that are evolving at this very moment and bring them to the attention of experts and policymakers. By clearly identifying the threats and offering potential solutions for countering disinformation practices, our briefs offer a glimpse into the worrying trends that will shape the disinformation space of the next decade.
Rapid Response Brief 5—Is AI Undermining Trust Online? ChatGPT, Large Language Models, and Disinformation
ChatGPT, a natural language generation model created by OpenAI, has caused quite a stir on social media. This text generator, trained to follow instructions from a text prompt and to answer in natural language as if one were conversing with a human, attracted more than 1 million users in just five days. Users have turned to the new chatbot with a wide range of questions, from ideas for Christmas menus to solutions to the climate crisis.
How does this work? ChatGPT is a machine learning model trained on large amounts of online text. This training teaches the system which responses are associated with particular questions or requests, allowing it to provide human-like answers in a fraction of a second.
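As a loose illustration of the underlying idea (statistically predicting a plausible continuation from patterns observed in training text), here is a toy bigram model in Python. This is a drastic simplification for intuition only: ChatGPT relies on a large neural network and human feedback, not a frequency table, and all names in this sketch are hypothetical.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Record which word follows which in the training text.
    Real language models learn far richer patterns with neural
    networks, but the core idea is the same: predict a likely
    continuation from previously observed text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 5) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

corpus = "the model answers questions and the model writes text"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The same sampling mechanism that lets such systems answer harmless questions also lets them fluently continue a false premise, which is where the disinformation risk begins.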
However, what happens when ChatGPT mixes false information with correct information, or fabricates disinformation outright? How will users distinguish a text produced by a human from one produced by an advanced language model? The risk is that misinformation generated by advanced language models could be convincing enough to go viral if the answers are taken at face value.
Our fifth Rapid Response Brief illustrates how ChatGPT can change the speed and richness of online disinformation and provides recommendations that service providers can follow to ensure that their innovations are not stifled but protected from malicious use.
Rapid Response Brief 4—Worth More than 1,000 words: The Disinformation Potential of Text-to-Video Generators
First images, now video. In 2022, innovation in AI-powered image creation reached a new level. Meta took another step forward with the launch of the world's first publicly announced text-to-video generator, Make-A-Video. Shortly after, Google released Imagen Video, its own video-generating tool.
While the text-to-video models on the market cannot yet generate hyper-realistic video sequences from text prompts, they demonstrate how fast AI-powered synthetic content is developing. In previous reports, we analysed OpenAI's DALL-E 2 and Stability AI's Stable Diffusion. The technology has now moved into video.
Text-to-image generators improve the quality of their output with each new model release. This Rapid Response Brief explains why we should expect the same from synthetic video, and how these tools could flood the internet with visual disinformation in the form of videos and memes.
Rapid Response Brief 3—WhatsApp's Paradox Reality: Disinformation Tactics during the 2022 Brazilian Elections
Do you trust news you receive on WhatsApp? A recent poll from the Reuters Institute revealed that 53% of Brazilians do.
Since 2018, Brazil has seen the development of an ecosystem in which an avalanche of false narratives confuses the population and undermines institutions. The 2022 Brazilian elections were once again a breeding ground for sophisticated disinformation practices.
During the campaign ahead of the elections and between the two rounds, YouTube and WhatsApp formed a potent combination for spreading disinformation. By sharing out-of-context video snippets via WhatsApp, users and malign actors brought false narratives about the election directly into the hands of Brazilians.
Our third Rapid Response Brief outlines this and other tactics used during the Brazilian elections in more detail. We also list recommendations to counter these tactics.
Rapid Response Brief 2—Pay to Pray: The Privacy Pitfalls of Faith-based Mobile Apps
COVID-19 accelerated the digitalisation of religious communities, spurred by what venture capitalist Katherine Boyle has called a “holy trinity”: “isolated people hungry for attachment, religions desperate for growth in an online world, and technology investors searching for the consumer niches yet to digitise.” The pandemic was an opportunity for those who saw religious apps as a business model.
Faith-based mobile apps, which allowed churchgoers to practise their faith in isolation while restrictions were in place, are now part of a digital religious revival, attracting users en masse. Contrary to what the app founders claim, investigations have revealed that many of these apps harvest sensitive information and sell it to opaque third-party vendors. Users' data have been sold, for example, to Facebook and to government counterterrorism agencies.
This is not purely a data privacy matter, as the leaks can result in “surveillance targeting,” where disinformation is packaged into social media advertising. Find out our recommended responses by reading the full brief.
Rapid Response Brief 1—Stable Diffusion, Open-Access Image Generation, and Disinformation
Democracy Reporting International has just published a report warning about text-to-image generation technology and its potential for disinformation. As the technology evolves, fake but realistic images generated from text prompts will boost the credibility of false narratives. Malign actors will surely use this technology to spread disinformation.
This Rapid Response Brief examines the significant disinformation potential of Stable Diffusion, an open-source text-to-image generation tool. Its open-source nature sets it apart: while proprietary tools, such as DALL-E 2, restrict access via output filters, open-source technology is freely available to anyone. Stable Diffusion, for example, does not prohibit the generation of racist or sexually explicit images and allows for the defamation of public figures.
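To make the contrast concrete: hosted, proprietary services typically pass every prompt (and often every output) through a moderation filter before returning a result, a gate that simply does not exist when an open-source model runs on a user's own machine. The sketch below shows the general shape of such a prompt filter. It is purely illustrative; real moderation systems use trained classifiers rather than keyword lists, and every name and term here is hypothetical.

```python
# Hypothetical denylist standing in for a real moderation policy.
BLOCKED_TERMS = {"violence", "explicit"}

def is_prompt_allowed(prompt: str) -> bool:
    """Naive keyword check standing in for a trained moderation classifier."""
    words = set(prompt.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)

def generate_image(prompt: str) -> str:
    """Hypothetical hosted-service entry point: filter first, then generate."""
    if not is_prompt_allowed(prompt):
        return "REFUSED"
    return f"<image for: {prompt}>"  # placeholder for the actual generation step
```

With a locally run open-source model, a malign actor deletes or never installs this gate, which is precisely the access difference the brief highlights.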
For more information on this threat and the potential responses, check our brief below.