This article originally appeared on VICE US
Facebook announced Thursday that the company killed more than 2 billion fake accounts in the first quarter, a huge total even by Facebook’s standards.
The Silicon Valley giant trumpeted the numbers in its Community Standards Enforcement report, which claims the platform is policing itself more proactively. The company also claimed that automated tools are catching more harmful content before users see it.
The deletions came largely from spammers attempting to flood Facebook with a large volume of content, according to the report. While most of those fake accounts were removed within minutes, 5% of all monthly Facebook users are still bogus.
The progress report also catalogued other problematic areas Facebook attacked in the first quarter. Its tools and moderators took down 33.6 million pieces of violent and graphic content and 6.4 million pieces of terrorist propaganda, according to the report.
“It’s impossible to build a single system that’s going to work everywhere and catch everything,” Facebook CEO Mark Zuckerberg said on a conference call with reporters Thursday. “The system isn’t perfect. Whenever you’re drawing a line around what content is acceptable, there will always be people who thought you got it wrong.”
The scale of the problems on a platform with 2.38 billion monthly active users is staggering. The 2.19 billion fake accounts Facebook disabled in the first quarter of this year marked a sharp increase from 1.2 billion accounts zapped in the last three months of 2018.
Facebook has stepped up its policing measures only after fierce public outcry. But the way it enforces its policies remains largely nebulous. Facebook officials said Thursday that the company is policing more hate speech than ever, but its automated tools have a harder time picking it up than other categories of prohibited posts.
“That gives a sense of the progress but also how much there is left to do,” Zuckerberg said, renewing calls for government regulation around speech and safety.
Earlier this month, the company banned Infowars conspiracy theorist Alex Jones and several other right-wing personalities, which snipped a key lifeline for the incendiary figures. Facebook officials said at the time that Jones violated its policy against dangerous individuals and organizations, but they did not specify what Jones did that crossed the line after years of conspiracy-mongering and anti-Semitic innuendo.
“It’d be much more helpful if Facebook would be much more clear of the principles they’re applying,” said Paul Barrett, a New York University law professor who’s studied domestic disinformation campaigns on social media. “Their methods — whatever they are — of analysis and decision-making are surprisingly opaque.”
Even Thursday’s numbers had a question mark hanging over them. Facebook enlisted an outside advisory group at The Justice Collaboratory at Yale Law School to monitor the company’s methods for measuring and enforcing its policies. But that group’s corresponding report Thursday described an oversight process with no access to engineers who work on content enforcement on a daily basis.
Some researchers also worry that Facebook’s current strategic pivot, pushing users to interact more in private groups, could make the company’s decisions about harmful content more difficult to parse.
Zuckerberg acknowledged those fears on the conference call Thursday. He added that the company is consulting with law enforcement agencies and governments to help monitor its shift toward more encrypted communications, including how it could affect the sale of weapons, drugs, and other illicit materials.
“We recognize it’s going to be harder to find all the different types of harmful content,” Zuckerberg said.
Criticism of the company’s enforcement efforts has fueled calls to break up the company, which also owns Instagram and WhatsApp. Democratic Sen. Elizabeth Warren has argued for the breakup of Silicon Valley giants to improve competition. Facebook co-founder Chris Hughes similarly called for Facebook to be split up in a New York Times op-ed, citing its immense power over global communications.
Zuckerberg rebutted those arguments on Thursday. After downplaying Facebook’s dominance of the digital advertising market, Zuckerberg argued that a split of the company would effectively hinder attempts to clean up its platforms.
“In one decade, the success of this company has allowed us to fund these efforts at a massive level,” Zuckerberg said, referencing the $3.7 billion the company has vowed to spend on security efforts this year. “I believe the amount of our budget that goes to safety is greater than Twitter’s revenue this year.”
Cover image: Facebook CEO Mark Zuckerberg arrives to meet with French president Emmanuel Macron at the Elysee Palace, in Paris, Friday, May 10, 2019. (AP Photo/Francois Mori)