It’s no secret that terrorist organisations leverage social media not only as a tool to plan and coordinate with members, but also as a potent channel for recruitment and propaganda. Facebook has been working behind the scenes to tackle these serious security threats on its platform and claims to have removed 3 million posts purportedly related to terrorism.
In an official post titled What Are We Doing to Stay Ahead of Terrorists?, which specifically targets the Islamic terrorist organisations ISIS and al-Qaeda, the social media behemoth explains how it employed machine learning to mass delete millions of posts linked to terrorism.
Employing Machine Learning to Thwart Terrorists
The statement was delivered by Facebook through a press release drafted by the company’s Head of Counterterrorism Policy and Global Head of Policy Management. The company decided to use machine learning to counter the clever tricks terrorists employ to evade Facebook’s existing monitoring and moderation techniques. The decision was triggered by a US Department of Justice report on how an alleged ISIS supporter had devised workarounds such as compromising legitimate accounts, abandoning accounts and creating new ones, developing code languages to avoid detection, and splitting messages and directives across multiple posts.
The new machine learning tool devised by Facebook aims to intelligently detect such terrorist workarounds and catch advanced diversionary patterns that might otherwise escape the company’s existing anti-terrorism moderation methods. While it may prima facie seem that the tool automatically deletes posts, in reality it looks for patterns suggesting terrorist activity and assigns a score to such content. These scores indicate how likely a post is to be linked to terrorism and help human content reviewers identify such content easily.
Since the sheer volume of posts generated on Facebook daily makes human vetting of every post impossible, the company uses machine learning to look for known patterns and automatically flag suspect posts. Human content reviewers can then manually go through the flagged posts instead of taking on the impossible task of vetting every single one.
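The score-and-flag triage described above can be sketched in a few lines of Python. This is purely illustrative: the `Post` structure, the keyword heuristic standing in for a trained classifier, and the `review_threshold` value are all assumptions, not details of Facebook's actual system.

```python
# Hypothetical sketch of the score-and-flag triage pattern.
# The classifier here is a toy keyword heuristic; Facebook's real
# system uses trained machine learning models, not keyword matching.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str
    score: float = 0.0  # likelihood the post is linked to terrorism

def score_post(post: Post) -> float:
    # Stand-in for a trained classifier (illustrative assumption).
    suspicious_terms = {"attack plan", "recruit", "coded directive"}
    hits = sum(term in post.text.lower() for term in suspicious_terms)
    return min(1.0, hits / len(suspicious_terms))

def triage(posts: list[Post], review_threshold: float = 0.3) -> list[Post]:
    """Score every post and return only those flagged for human review."""
    for post in posts:
        post.score = score_post(post)
    flagged = [p for p in posts if p.score >= review_threshold]
    # Highest-scoring posts reach reviewers first (prioritisation).
    flagged.sort(key=lambda p: p.score, reverse=True)
    return flagged
```

The key design point mirrored here is that the model never deletes anything on its own: it only narrows the queue and orders it by score, leaving the removal decision to human reviewers.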
Facebook Implies Tool Has Been Deployed in Q3 2018
Interestingly, the post insinuates that Facebook has had this automated post removal tool for much longer, but held off deploying it while reworking the existing appeals process to let users contest improperly removed content. This is important because automated removal, even when based on sophisticated machine learning systems, is bound to make mistakes, and a robust appeals process is imperative to allow individuals to express themselves freely.
“We are constantly working to balance aggressive policy enforcement with protections for users. And we see real gains as a result of this work: for example, prioritization powered by our new machine learning tools have been critical to reducing the amount of time terrorist content reported by our users stays on the platform from 43 hours in Q1 2018 to 18 hours in Q3 2018,” the official statement read.
Facebook tacitly implies that the new machine-learning-based anti-terrorism tool has been deployed since the third quarter of this year. The statement boasts an almost 60 percent decrease in the time such content stays online on the website. This also implies the tool still requires user intervention by means of users reporting terrorist content; reported posts are then passed through the machine-learning-assisted tool to be flagged and removed pending human verification.
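As a quick sanity check on the figure above, the "almost 60 percent" decrease follows directly from the two numbers in Facebook's own statement (43 hours in Q1 2018 down to 18 hours in Q3 2018):

```python
# Percent decrease in time-on-platform for reported terrorist content,
# using the figures quoted in Facebook's statement.
q1_hours = 43  # Q1 2018
q3_hours = 18  # Q3 2018
decrease = (q1_hours - q3_hours) / q1_hours
print(f"{decrease:.1%}")  # prints "58.1%", i.e. "almost 60 percent"
```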