WhatsApp Banned Over 16 Lakh Indian Users from Platform in April due to Harmful Behaviour

WhatsApp also said that it is steadily taking a more proactive approach towards ridding its platform of spammers and abusive users.


WhatsApp banned over 16 lakh users in India in April this year, according to the company’s monthly disclosure and transparency report filed on Wednesday, June 1. According to company data, over 16.6 lakh users from India were banned from the platform as part of Meta’s efforts to prevent “harmful activity” on the app. A further 122 accounts were banned on the basis of user complaints, the disclosure report added.

WhatsApp User Bans: How it Works

The WhatsApp transparency report is published under the enforcement of the Ministry of Electronics and Information Technology, as mandated by India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Under these regulations, any social platform with more than 5 million – or 50 lakh – users in the country must publish transparency reports detailing the accounts it banned, along with the actions it took on the basis of user complaints as well as government requests.

In a statement detailing how the app restricts abusive user behaviour on the platform by using technology, the Meta-owned messaging service said, “Our goal is to identify and stop abusive accounts as quickly as possible, which is why identifying these accounts manually is not realistic. Instead, we have advanced machine learning systems that take action to ban accounts, 24 hours a day, 7 days a week.”

WhatsApp’s parent organisation, Meta (formerly Facebook), has faced considerable criticism for not having taken proactive approaches to reducing aspects such as hate speech, abusive user behaviour and propaganda on all of its platforms, which include Facebook, Messenger, Instagram and WhatsApp.

The company thus claimed in its latest report that it has looked to restrict abusive user behaviour on its platform by using machine learning, and by checking user reports against accounts that have been flagged as spam or abusive on the platform.

“We are particularly focused on prevention because we believe it is much better to stop harmful activity from happening in the first place than to detect it after harm has occurred,” a company statement further added.