
How Facebook and Google are battling internet terrorism

Kenneth Corbin | Feb. 2, 2017
Leading platform providers are exploring new ways to actively engage in counter-messaging, building on robust systems to flag and remove extremist content.

There are also encouraging signs that ISIS is losing momentum in its own propaganda efforts. An October 2016 study by the Combating Terrorism Center at West Point found that at its peak in August 2015, ISIS was responsible for circulating more than 700 distinct pieces of communication in a month. One year later, following some significant setbacks on the physical battlefield, that number was down to fewer than 200 in a month.

Robust content filtering

Apart from their counter-messaging efforts, both Facebook and Google maintain robust mechanisms for flagging and removing extremist content that violates their terms of use. In Facebook's case, that entails a far-flung team trained to scan for such material -- a tricky exercise that requires screeners who speak the local language and can distinguish whether certain materials are being used for propaganda and incitement or for legitimate purposes.

"Context really matters when you're talking about terrorism content," Bickert said. "Somebody can use the ISIS flag, a photo of the ISIS flag, and it may be the BBC saying this is something that ISIS has just done and there's a still image from one of ISIS' videos or something like that. That doesn't violate our policies. People can definitely come and talk about events of the day, but if somebody is using that ISIS flag -- an image of it to say, 'I like this group' or 'We ought to join this group,' or they're showing clips of an ISIS video and not clearly condemning it, that is something that violates our policies and we would remove it."

At Google, Walden stressed that the overwhelming majority of users have no sinister motives for using the company's services, and described the company's so-called community policing system as an effective tool for removing extremist content and other objectionable materials.

"[W]e know that community policing works," she said. "Content comes down quickly, and when it doesn't we escalate those things and make sure that it does."
