But with around two months until the polls open, Pakistani voters still have no idea how political parties or other groups are using Facebook to target them with ads. Facebook has said that it will bolster transparency by requiring advertisers to confirm their identity, setting up a political ads archive, and investing in artificial intelligence to stop bad actors from misusing the platform. But the responsibility is not Facebook’s alone. The Election Commission of Pakistan, political parties, and citizen groups can play an active role in detecting and deterring the misuse of social media during Pakistan’s elections.
Around 44 million Pakistanis, roughly a quarter of the total population, use social media. Through social media mining in R, we have already identified over 50 politics-related Facebook pages whose combined fan base reaches some 30 million people. Our analysis also shows that the major political parties are using Facebook more heavily as the elections approach, yet little attention has been paid to how these platforms are used by parties and citizens, and what can be done to mitigate the potential risks. We discuss four social media issues that bear on the integrity of Pakistan’s elections: user privacy, bots and so-called trolls, fake news, and campaign financing.
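The kind of aggregation behind these figures is simple once the page data has been collected. A minimal sketch (our analysis used R; this Python equivalent uses made-up page names and fan counts, purely for illustration):

```python
# Illustrative sketch: summing fan counts across politics-related
# Facebook pages identified through social-media mining.
# Page names and counts below are placeholders, not real figures.
pages = [
    ("Party A Official", 6_200_000),
    ("Party B Official", 5_800_000),
    ("Candidate X Fan Page", 1_400_000),
]

total_fans = sum(count for _, count in pages)
print(f"{len(pages)} pages, combined fan base: {total_fans:,}")
```

With real data, the same sum over 50-plus pages is what yields the roughly 30 million combined reach cited above (noting that one person may follow several pages, so reach overlaps).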
Privacy is on many minds in the wake of the Cambridge Analytica scandal. The researcher associated with Cambridge Analytica had direct access to 270,000 Facebook profiles, but he harvested some 87 million more through a Facebook setting that granted access to the data of those users’ friends. While Facebook has fixed this and other issues, some vulnerabilities may still exist.
If Pakistan’s political parties plan to use social media for campaigning, they will need to respect the privacy of voters and ensure that the media companies they hire have not acquired data illegally. Pakistan has a thriving black market where people can buy millions of mobile phone numbers, and Facebook’s Custom Audiences feature can be used to match Facebook profiles to these numbers, making voters unwitting targets of political ads. While Facebook’s terms require advertisers to have the consent of the people whose data is matched this way, Facebook does not verify that this is actually the case.
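Mechanically, the matching works on hashed identifiers: Custom Audiences uploads contain SHA-256 hashes of normalized phone numbers, which Facebook matches against hashes of its own users’ numbers. A minimal sketch of that normalize-and-hash step (the normalization rule shown, digits only with country code, is the documented convention; the sample number is invented):

```python
import hashlib

def normalize_phone(raw: str) -> str:
    """Strip a phone number down to digits only (country code included),
    the normalized form expected before hashing for a Custom Audiences upload."""
    return "".join(ch for ch in raw if ch.isdigit())

def hash_for_upload(raw: str) -> str:
    """SHA-256 hex digest of the normalized number, as uploaded for matching."""
    return hashlib.sha256(normalize_phone(raw).encode("utf-8")).hexdigest()

# Invented example number -- the point is that formatting differences
# disappear after normalization, so the hashes match.
print(normalize_phone("+92 300 123-4567"))   # digits only
print(hash_for_upload("+92 300 123-4567"))   # 64-character hex digest
```

The hashing protects numbers in transit, but it does nothing to establish that the person who bought a list of numbers on the black market ever had consent to use them, which is exactly the gap described above.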
Social “bots” present another challenge for the elections. Put simply, bots are accounts that have been programmed to generate activity automatically — in the case of Twitter, to retweet, like, or follow certain accounts, amplifying their messages. Bots can be quite benign; they are often used to send weather reports or updates about natural disasters. But at their most malicious, bots can be used to spread misinformation and abuse candidates or political parties. During the US elections, Twitter identified more than 500 accounts that tried to convince English- and Spanish-speaking supporters of Hillary Clinton to vote online or via text (which was impossible), hoping to suppress voter turnout. Bots are also used to like and retweet other accounts, signaling popularity for certain policy proposals or statements made by political parties or candidates.
Bots abound on Twitter, and many political parties or candidates may unknowingly have bots among their followers. In our analysis, we found that almost half of the Twitter accounts currently tweeting about two of Pakistan’s major political parties have a high likelihood of being bots.
Based on a sample of accounts, we found that approximately 52 per cent of accounts tweeting #PMLN and 46 per cent of those tweeting #PTI were likely bots. Pakistani voters should therefore view posts on social media with a good dose of skepticism and ensure that the posts or tweets they share come from credible sources. In other reporting, social media managers have also confirmed that Pakistan’s political parties are already using fake accounts on social media to influence the narrative and their party image.
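Estimates like these typically come from scoring each sampled account with a bot-detection classifier and counting the share above a cut-off. A minimal sketch of that final step (the scores and the 0.5 threshold are illustrative assumptions, not our actual data or method):

```python
# Sketch: share of likely-bot accounts in a sample, given per-account
# bot scores in [0, 1] (e.g., from a classifier such as Botometer).
def bot_share(scores, threshold=0.5):
    """Fraction of accounts whose bot score exceeds the threshold."""
    likely_bots = [s for s in scores if s > threshold]
    return len(likely_bots) / len(scores)

# Invented scores for an eight-account sample.
sample_scores = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3]
print(f"{bot_share(sample_scores):.0%} of sampled accounts are likely bots")
```

The headline percentage depends heavily on the threshold chosen and on the classifier’s error rate, which is one reason such figures should be read as estimates rather than counts.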
“Trolls” are a related concern for the elections. Trolls are real people, hired to influence debates online, and they often engage in personal attacks, harass people, and encourage polarisation. They are like people hurling shoes or throwing ink at candidates during public gatherings and they can have a toxic effect on public debate. During the US elections, trolls from the Russian Internet Research Agency successfully infiltrated controversial public debates and contributed to polarisation on both political sides, fomenting division among voters online.
In Pakistan, women are the main victims of online harassment, and women candidates — and candidates from minority religious groups — could be particularly vulnerable to harassment by trolls during the elections. Recently, a Facebook post portrayed Imran Khan as a Hindu deity, sparking public condemnation, with members of the National Assembly claiming that an “online campaign” was targeting Hindus. Such a post may be an offense under Pakistan’s Prevention of Electronic Crimes Act (PECA) — since it could potentially “advance inter-faith hatred” — but it does not clearly violate Facebook’s own Community Standards on hate speech, meaning that the company might only act when requested to do so by the government. By then, of course, much of the damage may already be done. The Election Commission and other federal authorities should act quickly to identify problematic posts on social media, and citizen groups can also report posts or abusive trolling to Facebook or Twitter. However, in a context where PECA has also been misused to go after critical journalists, it will be important to respect diversity and freedom of expression, especially during the elections.
Information disorder, popularly called “fake news,” is also likely to present a challenge during the elections. Many Pakistanis have read the news about Nawaz Sharif allegedly hiring Cambridge Analytica. The story, first published in Eurasia Future, has since been retracted for false reporting, but not before causing a furore in Pakistan. People can engage in intentional disinformation, fabricating stories or twisting existing information to harm candidates, groups, or political parties. Disinformation should not be confused with misinformation — factual inaccuracies or false reports made without any intent to cause harm.
Finally, the effective use of social media requires resources. Political parties need money to develop social media advertisements and target audiences online. The Election Commission should issue regulations requiring political parties — and other third-party supporters — to disclose their spending on social media. Currently, the use of social media is not explicitly regulated under the Elections Act 2017 or the Commission’s Election Rules 2017. The Elections Act 2017 does, however, allow the Election Commission to constitute a campaign monitoring team, which could examine political ads online or monitor any violations of the Act or Rules.
Overall, the Election Commission needs to bring all electoral stakeholders together to develop a Code of Conduct that accounts for the ethical use of social media in the 2018 elections. With or without a Code of Conduct, parties should act responsibly and avoid using any manipulative or unethical tactics online. Citizen groups can actively monitor the use of social media during the elections and highlight any instances of incitement to violence or misinformation to the relevant authorities.