5 things to know about our report on online public discourse in the MENA region

Through social media monitoring focused on key political events in Jordan, Lebanon, Tunisia, and Sudan, Democracy Reporting International (DRI) and its local partners examine trends in disinformation and hate speech to build evidence on how these phenomena affect electoral and other political processes.

DRI launched its first monitoring report on 6 October 2022 during a public online discussion that brought together its partners, civil society actors, and experts from the region to present the findings and to discuss how to put the recommendations into practice.

The debate gathered more than 50 participants representing various organisations in the region, including NGOs, INGOs, researchers, journalists, and experts from Lebanon, Jordan, Libya, Egypt, Tunisia, and Sudan.

The online debate introduced the report and its sections, starting with presentations of the local partners' work:

  • Jordan's municipal elections: a case study of hate speech during the electoral silence period, presented by Shrouq Naimat, policy advisor at Al Hayat-Rased
  • Lebanon's general elections: disinformation streams during the pre-campaign period, presented by Tony Mikhael, director of the monitoring unit at Maharat Foundation
  • Insights from Tunisia: changing context, changing streams, presented by Dr. Mohamed Khalil Jelassi, Institute of Press and Communication
  • Understanding the political and social context of Sudan, presented by Rogaia Eljilani, project officer at the Sudanese Development Initiative (SUDIA)

The presentations were followed by a session on the lessons learnt from the report, led by:

  • Wafaa Heikal, social media analyst at DRI 
  • Makram Dhifalli, data analyst at DRI 

To place the report in a broader perspective, Lena-Maria Böswald, Digital Democracy Programme Officer at DRI, shared her insights on global trends in disinformation and hate speech and on the commonalities that can be drawn from monitoring social media during important political processes.

Marina Al Sahawneh, Technologist for Gender and Inclusion at the Jordan Open-Source Association (JOSA), gave feedback on the report from a gender perspective, highlighting that gendered monitoring should be more thorough when analysing the impact of online gender-based violence.

Elections and social media: Tactics to spread hate speech

The case studies from Lebanon's parliamentary elections and Jordan's municipal elections show that hate speech and disinformation are amplified through multiple tactics.

The messages in a sample of 522 tweets and posts monitored by Maharat Foundation during the Lebanese elections focused on playing on voters' emotions and making vague accusations; this was the case in over 80% of the material monitored.

Stirring up emotions was a tactic used by both traditional and emerging political actors; only 3% of the discourse of emerging parties related to their electoral programmes. Another tactic used by partisans of both political camps was spreading rumours, although political candidates themselves did not spread rumours directly. Forty-seven accounts disseminated manipulated news, most of it rumours. Manipulation campaigns were also run through fake pages and accounts on both Facebook and Twitter: fifteen accounts used in one such campaign were created in December 2021 and became active on 11 January 2022. Analysis with Botometer indicated that all of these accounts were bots.
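As a rough illustration of that kind of check, the sketch below screens a list of suspected accounts with the botometer Python client. The account handles and credentials are placeholders, working Twitter and RapidAPI access is assumed, and this is not the report's exact procedure.

```python
# Minimal sketch: screening suspected accounts with the Botometer v4 API.
# Account names and credentials are placeholders; valid API keys are assumed.
import botometer  # pip install botometer

twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    **twitter_app_auth,
)

suspected_accounts = ["@example_account_1", "@example_account_2"]  # placeholders

for screen_name, result in bom.check_accounts_in(suspected_accounts):
    if "error" in result:
        print(screen_name, "could not be checked:", result["error"])
        continue
    # "cap" is the complete automation probability returned by Botometer
    print(screen_name, "bot probability:", round(result["cap"]["universal"], 2))
```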

In Jordan, Al Hayat-Rased focused on monitoring hate speech during the electoral silence period. In a sample of 51 media and news pages on Facebook, the dominant tactic was spreading hate speech in the comments on 446 posts.

Al Hayat-Rased extracted and classified 11,255 comments, 23.5% of which were hate speech. The most common form in this sample was defamation, defined as “The false attribution of an event or an incident in violation of the law and the well-established traditions in the country to a person, requiring punishment and general social contempt for that person. This attribution is public and deliberate and harms the reputation of the person or institution that it targets.”
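To illustrate how such a classification can be operationalised at scale, the sketch below shows a simple lexicon-based triage step that flags comments containing terms from a keyword list for manual review. The terms and comments are illustrative placeholders, not Al Hayat-Rased's actual lexicon or coding scheme.

```python
# Minimal sketch: lexicon-based triage of comments before manual classification.
# The lexicon terms and sample comments are illustrative placeholders only.
import re

HATE_LEXICON = {"خائن", "عميل"}  # placeholder terms ("traitor", "collaborator")

def contains_lexicon_term(comment: str) -> bool:
    """Return True if the comment contains any term from the lexicon."""
    tokens = set(re.findall(r"\w+", comment))
    return not tokens.isdisjoint(HATE_LEXICON)

comments = [
    "تعليق عادي عن الانتخابات",  # placeholder: ordinary comment about the elections
    "هذا المرشح خائن",           # placeholder: accusatory comment
]

flagged = [c for c in comments if contains_lexicon_term(c)]
print(f"{len(flagged)} of {len(comments)} comments flagged for manual review")
```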

Hate speech or freedom of expression: Where is the line?

Drawing the line between freedom of speech and hate speech is always complicated. Some criteria have been identified to detect hate speech, such as the use of “any type of verbal, written or behavioural communication which attacks or uses a pejorative or discriminatory language by reference to a person or group of people on the basis of identity, religion, nationality, race, colour or origin”. However, these criteria are not always easy to apply, for several reasons:

  • The lack of a legal framework governing the right to freedom of expression with a clear definition of hate speech
  • Contexts characterized by social tension and political polarization give rise to new phenomena and new behaviours for spreading hate speech online (e.g. Sudan and Tunisia).
  • In most countries, the legal framework around cybercrime or freedom of speech provides for restrictions on freedom of speech under the pretext of countering hate speech. These laws can lead to misinterpretation, or even abuse of power against social media users (e.g. Tunisia).

In the discussion of the report's key findings and recommendations, DRI and its partners found that setting a methodology for monitoring hate speech and disinformation is essential for determining which methods, references, and tools a researcher or civil society organisation should use. However, it is advisable to keep the methodology adaptable and to draw on practical guidelines for detecting hate speech, such as the Rabat Plan of Action. For example, the International Covenant on Civil and Political Rights (ICCPR) provides a proportionality test to examine whether a restriction on a freedom or right is well-founded.

On a second level, it is crucial to understand the degree of harm that hate speech can cause to its targets. The report includes an analysis of hate speech comments graded on an intensity scale.

Our partner SUDIA highlighted the importance of updating its lexicon and building a more thorough classification of hate speech.

Offensive campaigns targeting women: Do they affect participation?

All partners included gender and violence against women in their methodologies in order to examine instances of such violence in public life; the classification covered multiple forms of abuse.

The report shed light on four women on Twitter who are highly influential in the regional political conversation. Offensive content targeting them was generated by a few speakers and amplified through retweets and replies, and the offenders formed distinct communities on Twitter around the offensive content.
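As a hedged illustration of how such amplification communities can be detected, the sketch below builds a small retweet/reply graph and applies modularity-based community detection with networkx. The edge list is a hypothetical placeholder, not the report's data or its exact method.

```python
# Minimal sketch: detecting amplification communities in a retweet/reply network.
# The edge list is a hypothetical placeholder, not the report's data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each edge means "account A retweeted or replied to account B"
edges = [
    ("amplifier_1", "offender_a"), ("amplifier_2", "offender_a"),
    ("amplifier_3", "offender_a"), ("amplifier_4", "offender_b"),
    ("amplifier_5", "offender_b"), ("amplifier_4", "amplifier_5"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Modularity maximisation groups accounts that interact with the same content
for i, community in enumerate(greedy_modularity_communities(G), start=1):
    print(f"Community {i}: {sorted(community)}")
```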

Even in a limited scope such as this case study, the report exposes the pernicious link between the digital space and violence against women. Some users choose to direct sexualised insults and sexist slurs at women over contentious political issues instead of engaging in civil debate.

Further research is therefore needed to establish whether online violence against women in the four countries has affected women's participation in the public sphere.

Social media platforms and hate speech in the MENA region: Room for improvement

Derogatory speech online is often missed when it is not in English, which makes reporting hateful comments or posts in the MENA region more challenging.

In fact, social media companies do not have tailored policies for non-English content. Moreover, research on social media requires tools that are not always accessible to Arab researchers; the use of CrowdTangle, for instance, is limited and needs more localization in the MENA context.

The Arabic localization of the DRI Toolkit has proven to be a valuable resource for our partners in building a strong methodology and research approach in their projects. For example, Maharat, our Lebanese partner, used it to monitor hate speech in the parliamentary election campaigns, and has also drawn on other resources and classifications from other DRI guides in its work on gender-based violence. SUDIA, our Sudanese partner, adapted the toolkit to guide its methodology for monitoring hate speech and disinformation during the transitional phase after the 25 October 2021 military coup in Sudan. More resources in Arabic are needed to close the gap between social media researchers around the globe.

In relation to online Gender-Based Violence (GBV), broader, contextualised research is still needed to explore hate speech in the Arabic Twitter-sphere and on other social media platforms, and to understand to what extent this phenomenon causes women in the region to self-censor in the public space or to withdraw completely from these platforms. Civil Society Organizations (CSOs) should also continue to engage Twitter in reviewing and acting on GBV, to hold the social network accountable for its inaction. For its part, Twitter should invest more in detecting hateful content in Arabic (including its various dialects and contexts) and address its West-centric bias.

What will happen next?  

Three more reports will follow this one. Monitoring will continue from April 2022 to September 2022 in each country where the Words Matter project operates, examining other behaviours and streams and focusing more on manipulation patterns. Among the key processes to be monitored are the referendum and parliamentary elections in Tunisia and the transitional period in Sudan after the coup of 25 October 2021.

The findings of this report and those to come will be used in partners' awareness-raising campaigns directed at different target groups, such as media students, academia, government, and local CSOs in each country. DRI and its partners will carry out activities such as stakeholder roundtables, field campaigns, and media literacy training to explain how disinformation and hate speech spread and to highlight ways of addressing their dangers, drawing on the report's recommendations.

DRI and its partners will develop indicators to facilitate comparisons across countries. We aim to build cross-regional analysis using standardized tools, especially a unified definition of hate speech.

Moreover, JOSA, which recently joined our Words Matter project, is building an artificial intelligence model called NUHA (an Arabic word meaning "mind"). NUHA will detect and classify hate speech targeting women on Twitter in the Jordanian public sphere, and its results will be included in the next report to examine patterns of hate speech. NUHA is also open source, allowing researchers and journalists to use the technology to monitor gendered hate speech.
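By way of illustration only (this is not NUHA's actual implementation, architecture, or data), the sketch below shows how a basic supervised classifier for gendered hate speech might be trained on labelled tweets with scikit-learn. Character n-grams are used because they cope reasonably well with Arabic dialectal spelling variation; all labelled examples are placeholders.

```python
# Generic sketch of a supervised classifier for gendered hate speech on tweets.
# NOT the NUHA implementation; all labelled examples below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "إهانة ذات طابع جندري تستهدف امرأة",  # placeholder: gendered insult targeting a woman
    "نقاش سياسي عادي",                    # placeholder: ordinary political discussion
    "تعليق مسيء لمرشحة",                  # placeholder: abusive comment about a female candidate
    "تغريدة محايدة عن الانتخابات",         # placeholder: neutral tweet about the elections
]
labels = [1, 0, 1, 0]  # 1 = gendered hate speech, 0 = other

# Character n-grams handle dialectal spelling variation better than word tokens
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)

print(model.predict(["تغريدة جديدة للتصنيف"]))  # classify a new placeholder tweet
```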
