Meta Approved AI-Manipulated Political Ads In India That Spread Misinformation And Incited Violence

A new report has revealed that Meta, owner of Facebook and Instagram, approved a series of AI-manipulated political advertisements in India during its election period that not only spread misinformation but also incited violence.


Meta, owner of Facebook and Instagram, is under fire after the company approved a series of AI-manipulated political advertisements in India that spread misinformation and incited religious violence, according to a report that was shared with The Guardian. 

The publication revealed that Facebook approved advertisements that contained slurs directed towards Muslims in India. Some of the posts included incitements such as “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned.” Other posts used Hindu supremacist language and spread misinformation about political leaders. 

Another approved advertisement called for the execution of an opposition leader, claiming the leader wanted to “erase Hindus from India,” a complete fabrication. According to the report, the advertisements were specifically created and submitted to Meta’s ad library by India Civil Watch International and Ekō, a corporate accountability organization, as a way of testing Meta’s mechanisms for detecting and blocking harmful political content. 

“[The advertisements] were created based upon real hate speech and disinformation prevalent in India, underscoring the capacity of social media platforms to amplify existing harmful narratives.” 

Voting in India began in April and will continue until June 1st. The election will decide whether Prime Minister Narendra Modi and his Hindu nationalist Bharatiya Janata Party (BJP) will return to power for what would be their third term. 

During Modi’s decade in power, the BJP has been accused of pushing a “Hindu first agenda,” which human rights groups, activists, and opposition parties say has led to the persecution and oppression of India’s Muslim population. For this election specifically, the BJP has been accused of using “anti-Muslim rhetoric and stoking fears of attacks on Hindus, who make up 80% of the population, to garner votes,” according to Hannah Ellis-Petersen, a South Asia correspondent for The Guardian.

Beyond Meta, social media site X, formerly known as Twitter, ordered a BJP campaign video to be removed after the party was accused of demonizing Muslims. 

Researchers involved in the report submitted 22 adverts in English, Hindi, Bengali, Gujarati, and Kannada to Meta. Fourteen of the 22 advertisements were approved, and an additional three were approved after minor changes that did not remove any of the messaging that had been flagged. Once the ads were approved, the researchers withdrew them before they were actually published to either Facebook or Instagram. 

Meta has publicly pledged its commitment to preventing AI-generated or manipulated content on its sites specifically during the election in India. The company, however, didn’t detect the use of AI in the researchers’ submitted ads. 


“[The approvals] broke Meta’s own policies on hate speech, bullying, and harassment, misinformation, and violence and incitement.”


A campaigner for Ekō, Maen Hammad, has accused Meta of profiting from posts with hate speech. 

“Supremacists, racists and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of mosques burning and push violent conspiracy theories – and Meta will gladly take their money, no questions asked,” he said. 

Meta also failed to recognize that the 14 approved advertisements from the researchers were not only political but related to the election. Meta’s policies state that political advertisements on its platforms must undergo a specific authorization process before they can be approved for publication. The report stated that only three of the 22 submissions were rejected on the basis of that policy. 

A spokesperson for Meta stated that advertisements regarding elections or politics “must go through the authorisation process required on our platforms and are responsible for complying with all applicable laws. 

“When we find content, including ads, that violates our community standards or community guidelines, we remove it, regardless of its creation mechanism. AI-generated content is also eligible to be reviewed and rated by our network of independent fact checkers – once a content is labeled as ‘altered’ we reduce the content’s distribution. We also require advertisers globally to disclose when they use AI or digital methods to create or alter a political or social issue ad in certain cases.”

Meta has previously been accused of failing to combat the spread of Islamophobic hate speech, violent rhetoric targeting Muslims, and anti-Muslim conspiracy theories on its platforms in India. 

“This election has shown once more that Meta doesn’t have a plan to address the landslide of hate speech and disinformation on its platform during these critical elections. It can’t even detect a handful of violent AI-generated images. How can we trust them with dozens of other elections worldwide?” Hammad said.