Twitter's Transparency Report Isn't Transparent If It Doesn't Address Harassment
Hiding abuse doesn't make it go away.
Twitter released its biannual transparency report today, and as usual, it encompasses a monumental amount of data on things like information requests, copyright notices, and email privacy. Once again, it shows that Twitter is probably the social media network most willing to defend its users' civil liberties. But one topic that's perpetually absent from the company's self-reporting is harassment, a problem Twitter has yet to forthrightly discuss.
Unlike its proactive approach to national security issues—during the second half of 2016, for example, 376,890 accounts were suspended for promoting terrorism, of which 74 percent were flagged by internal anti-spam tools—Twitter has historically required a hefty dose of encouragement before improving its anti-harassment tools.
When literal Nazis began organizing on the platform last year, it took months for Twitter CEO Jack Dorsey to admit that Twitter's wounds had gone septic. A band-aid feature rolled out soon after, which prevented users from seeing which potentially abusive lists they'd been added to, but it was promptly rolled back after people pointed out that hiding harassment doesn't make it go away.
Twitter wouldn't comment on why harassment wasn't included in its most recent transparency report. When I asked whether the company has plans to release this type of data in the future, I received no comment. That's no surprise to us at Motherboard:
For a company that has challenged the federal government before, Twitter seems afraid to confront its own users. Questions about harassment numbers are almost never answered on the record. As a result, we're still hopelessly in the dark about what happens to abuse reports; who inside Twitter sees them; how they're prioritized; and what, exactly, counts as harassment. (Twitter points everyone to its guidelines for this, but high-profile users have received preferential treatment against behavior that may not qualify as abuse when flagged by regular users.)
What we do know about harassment comes from outside reports, like the one by Women, Action and Media, or from surveys like BuzzFeed News' poll of 2,700 users. No one is claiming these are the most scientific of endeavors, but they're all that exists in the absence of Twitter's own data. And the evidence reveals an imperfect system: Twitter's methods for fighting harassment are overly reliant on users, often too laborious, and can be impersonal—29 percent of people in BuzzFeed News' survey said they received no response from Twitter after reporting abuse.
This doesn't mean the company has nothing to show for its efforts so far. Despite a rough couple of years, Twitter has launched some helpful anti-harassment tools. Last year, users gained the ability to mute notifications, keywords, phrases, and conversations. And earlier this year, new features decreased the visibility of abusive content and provided extra layers of safety customization.
All that's missing now is feedback on how effectively these tools are working. Some questions worth answering might be: What percentage of accounts suspended for harassment were picked up by Twitter's own filters? Which countries ranked highest for abuse? What types of behavior most frequently resulted in suspension? How often did Twitter take action based on abuse reports?
Twitter makes it clear, however, that its transparency reports are limited to privacy and national security trends, such as terrorism-affiliated activity. This comes at a time when white supremacists, arguably America's most visible and legitimized terrorist group, are using Twitter to target people and spread propaganda. That sure sounds like a national security issue to me.