#TwitterFail

Twitter has failed at controlling horrifying anti-Semitism


Of the 1,600 Twitter accounts that sent out thousands of anti-Semitic tweets targeted at journalists over the past year, Twitter suspended just 21%. That number is among the many deeply disturbing details in a new report out Wednesday from the Anti-Defamation League. A quick search of the hashtag #zionazi or #ovens is all it takes to confirm that senseless hatred is alive and well on Twitter.

Perhaps more than any other social network, Twitter has struggled to quell the masses of hatred and abuse that seem to thrive on its platform. As harassment and abuse have gone viral on the site, Twitter has moved steadily away from its early motto of being “the free speech wing of the free speech party.” It has banned revenge porn, issued new anti-harassment rules, established a trust and safety council, and suspended high-profile users it considers abusive.

Yet still, between August 2015 and this past July, the ADL report reveals, users sent out more than 2.6 million blatantly anti-Semitic tweets.

An example of an anti-Semitic tweet from the ADL report.

If Twitter’s lack of action against them surprises you, it shouldn’t. Twitter suspended so few of these accounts for two main reasons. First, for the most part, Twitter only looks into whether a tweet violates its terms of service if another user flags it. This means that a tweet can easily make the rounds within the echo chambers of anti-Semitic white supremacy without ever attracting so much as an incredulous eyebrow raise from a Twitter administrator. Second, the social network has time and time again failed to accurately and consistently assess abuse when it does make its way into Twitter’s review process. Twitter’s abusive behavior policy is famously vague, but even when violations of it are indisputable, the network often fails to act. Earlier this year, I experienced this firsthand when Twitter failed to take action against several death threats against me that very clearly violated its harassment policies.

Oren Segal, the director of the ADL’s Center on Extremism, told me that among the people the organization spoke to, many reported an inconsistent response from Twitter to anti-Semitic tweets directed at them.

“At the ADL, we recognize what anti-Semitism looks like,” said Segal. “But the people reviewing these tweets might not. Education is a big part of this.”

The ADL report looks most deeply at the impact of anti-Semitic harassment directed at journalists. A manual review of tweets containing anti-Semitic language found nearly 20,000 overtly anti-Semitic tweets targeted at 800 journalists. The majority of those were directed at just 10 journalists and came from a mere 1,600 accounts.

Some of the journalists the ADL interviewed decided to temporarily leave Twitter after becoming the target of anti-Semitic tweetstorms. Half of those interviewed decided not to report the harassing tweets at all, some citing a lack of confidence that Twitter would do anything to address the issue if they did.

“I think suspending or deleting [attacker’s] accounts is pointless, because they just come back on under a different name,” New York Times journalist Jonathan Weisman told the ADL. “Twitter has to decide if they are going to stand by their terms of service or not. If they decide tomorrow, ‘Look, we don’t have the capacity to monitor all of this, and we want it to be a free exchange of ideas,’–then fine, we would know what it was. But they want to have it both ways–the halo of having terms of service, but not enforcing them. Or enforcing them only sporadically.”

Weisman became the target of online racism after he tweeted out an article about Donald Trump and the emergence of fascism in the United States. After Weisman publicized the hateful tweets directed at him last May, Andrew Anglin, of the white supremacist website The Daily Stormer, urged the internet to go after him in a full-scale attack.

“You’ve all provoked us,” Anglin wrote of Weisman and another Jewish journalist, Julia Ioffe. “You’ve been doing it for decades—and centuries even—and we’ve finally had enough. Challenge has been accepted.”

Thanks to retweets, the overall 2.6 million tweets the report uncovered appeared on Twitter an estimated 10 billion times.

“That’s roughly the equivalent social media exposure advertisers could expect from a $20 million Super Bowl ad—a juggernaut of bigotry we believe reinforces and normalizes anti-Semitic language and tropes on a massive scale,” the report says.

The ADL report did not examine whether anti-Semitism on Twitter has had a chilling effect on journalism, but it did clearly show that if you are an outspoken Jewish journalist, the cost of entry into Twitter’s marketplace of ideas is steep. And as the financially troubled Twitter searches for a buyer, the network’s rampant abuse will no doubt weigh heavily on any potential suitor’s mind.

“We definitely are hoping Twitter will come up with better strategies to address hate on their service,” Segal told me. “I don’t want to say they don’t do anything, but folks talked to for this report across the board said that Twitter has not responded to abuse quickly or consistently.”

Twitter disputed the accuracy of the ADL’s claims.

“We don’t believe these numbers are accurate, but we take the issue very seriously,” a spokesperson said in an emailed statement. “We have focused the past number of months specifically on this type of behavior and have policy and products aimed squarely at this to be shared in the coming weeks.”

Segal said it is troubling that abuse generally has to be flagged by another user before Twitter notices it at all.

“There are memes and phrases of bigotry that occur consistently,” he said. “It’s possible for Twitter to be more proactive.”

This idea is not without precedent. Recently, Google unveiled new software that uses machine learning to automatically flag language associated with harassment and abuse. Instagram’s new comment moderation feature lets users individually decide what kinds of language they consider offensive and then blocks those comments from appearing. Even Twitter, in rare cases, has proactively blocked certain kinds of content on its network: in February, Twitter said that it had over time suspended 125,000 accounts “for threatening or promoting terrorist acts, primarily related to ISIS.”

Twitter has plenty of options for addressing its abuse problem that it has, at least thus far, chosen not to implement. In the meantime, well over 1,000 known accounts continue to launch firebombs of anti-Semitic hate into the Twittersphere. Until Twitter acts, you can do something yourself: the ADL is asking users to step in and report hateful tweets using the hashtag #exposethehate.