Social networks, for their part, claim they are doing their best to weed out terrorist content, though the effort has turned into a game of whack-a-mole, with the proscribed material, or new material like it, resurfacing elsewhere.
YouTube has a strong track record of taking swift action against terrorist content, said a Google spokesman, who added that the company would not comment on pending litigation. “We have clear policies prohibiting terrorist recruitment and content intending to incite violence and quickly remove videos violating these policies when flagged by our users. We also terminate accounts run by terrorist organizations or those that repeatedly violate our policies,” he wrote in an email.
A Facebook spokeswoman wrote that “there is no place for terrorists or content that promotes or supports terrorism on Facebook, and we work aggressively to remove such content as soon as we become aware of it.” A Twitter spokesman said “violent threats and the promotion of terrorism deserve no place on Twitter and, like other social networks, our rules make that clear.”
In a post on combating violent extremism in February, Twitter said that as noted by many experts and other companies, “there is no ‘magic algorithm’ for identifying terrorist content on the internet, so global online platforms are forced to make challenging judgment calls based on very limited information and guidance.”
Gonzalez is asking the court for compensatory damages, to be determined at trial. The lawsuit is likely to add to pressures the social networking companies already face on the terrorism issue from various quarters, including Congress. Senators Dianne Feinstein, a Democrat from California, and Richard Burr, a Republican from North Carolina, for example, proposed legislation in December that would require tech companies to report online terrorist activity to law enforcement.