
Why Twitter's new anti-harassment tools will fail

Mike Elgan | Feb. 13, 2017
Twitter's new policies won't solve the harassment problem, and they'll ruin engagement, too

Instead of giving users the power to delete replies, Twitter relies on two alternatives: a user reporting system and automated software-based moderation (with help from Twitter staff).

The trouble is that software can't identify bad replies as well as people can. I learned this firsthand on Google+.

Google may have the industry's best algorithms and artificial intelligence. But even Google's software fails at content moderation. Google has been algorithmically flagging "low quality" replies on Google+ for years. It hides flagged replies by default; users can reveal them with an obscure "See all comments" menu item, which most users don't know exists.

I would guess that around 10 percent of high quality replies are flagged as "low quality." And probably an equal number of "low quality" replies are judged as "high quality" and allowed to appear among the other replies. Filtering software just doesn't work that well.

For example, I recently posted on Google+ what I call a "Mystery Pic." I post a picture of some technology thing and invite followers to guess what's in the picture.

One person guessed: "A pod or a wheel from office chair" -- exactly the kind of reply I was looking for. Google's software flagged that comment as spam and buried it so nobody could see it.

Just below that, someone posted a hyphen and nothing else, an incredibly "low quality" reply. Google's software identified that hyphen as a high-quality reply and allowed it to remain.

Software simply isn't advanced enough to judge language.
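The two failure modes above -- a useful reply flagged as spam, a worthless one waved through -- fall out of any simple automated filter. The toy keyword filter below is a hypothetical sketch (not Google's or Twitter's actual system) that shows how a rule catching one kind of junk inevitably misfires on legitimate replies and misses junk the rule never anticipated:

```python
# Hypothetical keyword-based moderation filter, for illustration only.
# Real systems are far more sophisticated, but exhibit the same two
# failure modes: false positives and false negatives.

BLOCKLIST = {"free", "click", "buy", "winner"}

def is_low_quality(reply: str) -> bool:
    """Flag a reply as 'low quality' if any word matches a spam keyword."""
    words = {w.strip(".,!?:'\"").lower() for w in reply.split()}
    return bool(words & BLOCKLIST)

replies = [
    "A pod or a wheel from office chair",  # useful guess: passes (correct)
    "Free guess: it's a caster wheel!",    # also useful, but flagged (false positive)
    "-",                                   # worthless, but passes (false negative)
]

for r in replies:
    print(f"{r!r:40} flagged={is_low_quality(r)}")
```

Tightening the blocklist reduces the false negatives but creates more false positives, and vice versa; no rule set eliminates both, which is the bind every automated moderator is in.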

Google's automated identification and hiding of "low quality" comments fail, and the result is that Google+ simply isn't as good as it could be. Bad comments appear. Good comments are buried. The same outcome is likely on Twitter.

The difference is that on Google+, I can make the effort to correct bad decisions by the software. On Twitter, I can't. When Twitter's systems fail to identify a bad reply, the bad reply will remain.

When Twitter's systems bury a good reply, I can see the good reply by clicking on a link. But it will remain buried for other users, thereby degrading the quality of conversation on Twitter.

Trolls will figure out how to game the system to avoid being buried.

Any system designed to keep out motivated people -- whether anti-hacker, anti-spam or, in Twitter's case, anti-harassment -- is essentially an arms race. As Twitter develops better anti-harassment systems, harassers will learn how to get around them.

Compounding its unwillingness to allow industry-standard user moderation, Twitter also resists pseudonymity -- a model in which Twitter knows who you are but the public does not. Instead, Twitter users are fully anonymous. That means any so-called "user" might be a bot, a troll paid by the Russian government or one person with 100 accounts. There's usually no way to know.

