How Twitter Sees Itself

Multiple current and former Twitter employees, including executives, explain how Twitter really positions itself and its responsibilities around moderating speech.

In leaked audio published by The Verge last week, Facebook CEO Mark Zuckerberg said Twitter can't do as good a job at policing its network as his own company.

"I mean, they face, qualitatively, the same types of issues. But they can't put in the investment. Our investment on safety is bigger than the whole revenue of their company," Zuckerberg said in a July Facebook meeting.

Facebook may indeed be much bigger than Twitter, but budget is not the only reason the two companies' approaches to content moderation, such as policing hate speech and harassment, are drastically different.

Rather than act decisively by banning certain types of behavior and allowing others, Twitter's policy and engineering teams sometimes de-emphasize, or allow users to hide, content that may be offensive but not explicitly against the platform's terms of service. In doing so, Twitter says it gives more freedom to users, while critics argue it places more burden on users and more trust in software solutions (or in some cases, band-aids) to police hateful or otherwise violating content on the site.

Twitter has not spoken at length about this approach before. Motherboard interviewed an array of current employees, executives, and former employees of Twitter about how the company tackles content moderation.

According to Vijaya Gadde, Twitter's head of trust and safety, legal and public policy, the emphasis on user control stems at least in part from Twitter not being able to enforce highly specific rules at the scale at which it operates. It also reflects the company's unflinching commitment to being a public space, where even highly offensive voices are allowed to be heard.

"I think there's a fundamental mission that we're serving, our purpose of the company, which is to serve the public conversation," Gadde told Motherboard in an interview. "And in order to really be able to do that, we need to permit as many people in the world as possible for engaging on a public platform, and it means that we need to be open to as many viewpoints as possible."

Although it is smaller than Facebook, Twitter still handles a tsunami of tweets and other content every day. The company says it sees itself as fundamentally different from Facebook.

"Twitter is really the only public platform in this space," Nick Pickles, global senior strategist for public policy at Twitter said. "We are in many ways quite different to our peers."

But belaboring this idea that Twitter is a public space may be misleading.

Do you work at Twitter or another social network? Did you used to? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, OTR chat on jfcox@jabber.ccc.de, or email joseph.cox@vice.com.

"The comparison obscures the extremely active roles that the platforms themselves have in shaping and generating conversation," Becca Lewis, who researches networks of far right influencers on social media for the nonprofit Data & Society, said. Public spaces "don’t have algorithms that surface certain content; they don’t have an advertising-based business model meant to drive engagement; they don’t generate billions of dollars of revenue. What we’re actually experiencing is a media and communication environment that we don’t have any apt comparisons for, since we have never experienced anything like this in the past."

Twitter's commitment to being a public space, and its attempt to balance that with being a healthy venue for conversation free of harassment and other violations, is reflected in its decision to study how white supremacists use the platform, and whether leaving their accounts online may help de-radicalization efforts. In May, Motherboard reported that Twitter was researching the topic, while other platforms have already banned accounts related to the ideology. Facebook banned white nationalism and separatism in March, calling them ideologically the same as white supremacy.

"From a broader kind of philosophical point of view, I think there is something there that says that if we don't have these conversations out in the open, and have these issues, and these ideologies out in the open, actually how do you counter them?" Pickles said. "How do you have those conversations that drive progress?"

That commitment is also demonstrated in how the network handles ordinary users' potentially offensive material: rather than deleting offensive replies to tweets, it places them behind a barrier that must be clicked through; it tweaks which tweets appear in search results; and it now lets users hide the replies to their own tweets. The content is still there, existing in the public space; you can just choose not to see it.

"There are times when we could simply disappear something. We don't do that," Gadde said. "We downgrade things and we put them behind interstitials and we're very clear when we've done that, and the reason for that, is because our platform is meant to be transparent. We need people to trust that it operates in a certain way."

This choose-your-own-adventure approach to content moderation is unlike what Facebook and Google's YouTube do on their platforms. While there are some instances in which specific types of content are hidden behind content warnings, which tell users that they may be about to view something offensive or graphic, both of those platforms rely on content moderation teams of many thousands of people to tag the types of content that should be deleted or put behind these so-called "interstitial warnings."

This is not to say that Twitter doesn't ban anything. Its rules prohibit violence, terrorism, and several other content categories, and its moderators are supposed to delete this content and ban users. According to Twitter's most recent transparency report, roughly 10.8 million accounts were reported between July and December 2018; it took action against 612,563 of those accounts.

Multiple current and one former employee described Twitter's human content moderators as "agents." Donald Hicks, VP of Twitter Service, confirmed that Twitter uses a so-called "hybrid" model, where it uses in-house employees as well as external contractors to moderate content. A content moderation source said that Cognizant Technologies, a well-known contractor, has a contract with Twitter for this work; Twitter confirmed this to Motherboard.

"We need to permit as many people in the world as possible for engaging on a public platform, and it means that we need to be open to as many viewpoints as possible."

Twitter has content moderation centers spread across the globe, with facilities in Toronto, Tokyo, San Francisco, Manila, Budapest, Warsaw, Dublin, and Bangalore, Twitter said. In all, its enforcement team is made up of about 1,500 people, the company added. Facebook's roughly equivalent group is 15,000 strong (Twitter's monthly active users are a fraction of Facebook's, at 320 million compared to over 2 billion).

Twitter also places a greater emphasis on automated, software-based solutions. The company said 40 percent of content that needs to be acted upon is now surfaced to its teams automatically, so users don't have to report that material themselves.

David Gasca, who leads the company's product efforts to build a "healthier service," said, "Moderation is a blunt force tool, and so with the product we really want to focus on more than just keeping it up or taking it down." A lot of the time, Gasca said, some content may be considered abusive but isn't against the site's terms of service.

The natural response to that may be: then why not change the rules to make the abusive behavior fall under the site's policies? Gasca said that what may be seen as abusive in, say, Japan, may not be the same as in the United States. On top of that, Gadde said Twitter believes changing the rules doesn't change how people interact with the site.

"One of the mistakes we made in the early days was thinking that we could change the rules and change the behavior, and what we found as we put more aggressive rules in place, that it really wasn't having that much of an impact," Gadde said.

"Realistically what we've found that has had much more effect is if we work really closely with teams, like Product and Engineering teams, for future research, to really think about how to address these problems holistically, and not think of these as just one team has to go figure out the answer to this," she said. This is where various teams will work on developing the strategies that go beyond simply banning or not.

"The growth team tended to have priority over everything, because those monthly active users were so crucial."

It's worth mentioning that, on other social networks that have banned specific types of offensive content or specific communities, researchers have found that those communities do not reconstitute themselves in meaningful numbers elsewhere on the site. For example, after Reddit banned a series of racist and misogynistic subreddits, Georgia Tech researchers found that users affected by the ban reduced their overall hate speech and that many of them simply left the platform altogether.

Twitter's approach of leaving content up rather than removing it, even if that content is not immediately visible to users, is still open to plenty of criticism.

"Twitter’s responses, even those that move beyond a binary approach, show how they are actually playing an active role in the type of content that appears and surfaces on their platform," Lewis from Data & Society said. "And hiding content instead of removing it can lead to unintended consequences. Among other issues, it can generate a conspiratorial mindset among content creators who feel that their content is being suppressed but cannot always prove it. In short, it shows a lack of transparency that breeds distrust on the platform while still failing to grapple with the root issues at work."

In Twitter's early days, the trust and safety team was made up of just five or six people, one former employee said. Those people were responsible not only for designing and implementing the site's policies but also for enforcing them, using a basic email system, they added.

"I guess the team was able to handle the tickets, but we purposefully made it difficult to file them, to report tweets," the former employee said. "That was very intentional; to make it difficult to do, so there'd be less incoming tickets to handle, therefore less employees needed." A second former employee characterized the decision differently. Instead of making it intentionally difficult, the motivation was closer to: why improve the reporting mechanism if the team knew it would increase the time it takes to reply to a ticket?

The trust and safety team previously butted heads with the part of Twitter tasked with making the platform bigger.

"The growth team tended to have priority over everything, because those monthly active users were so crucial," one of the former employees said. "If we suspend more people, we lose people, which the growth team doesn't like, and there is a lot of that."

"If we make it harder to make an account to weed out some bots, the growth team doesn't like that," they added.

Twitter denied the company purposefully made it harder for people to report tweets, and instead said it focused on making it easier to report tweets that legitimately required attention, while cutting down on the number of malicious reports, such as those from trolls abusing the reporting system.

One of the former employees, who criticized the company in other ways, nevertheless stressed that Twitter is better at fighting abuse now than it has ever been.

"While we're still growing, we still have growing pains, we've come a long way in the last 24 months," Hicks said.

Update: This piece has been updated to describe David Gasca's work more specifically.

Subscribe to our cybersecurity podcast, CYBER.