
OK, So Facebook Is Bad. Now What?

Facebook, it has become increasingly clear, cannot be trusted to govern itself.
A seven-foot visual protest outside the US Capitol depicting Facebook CEO Mark Zuckerberg surfing on a wave of cash, Thursday, Sept. 30, 2021, in Washington, DC. (Eric Kayne/AP Images for SumofUS)

Evidence has been mounting for years that Facebook is harmful to its users. Countless reports have documented how the company’s algorithms push people toward radicalization, allow extremist movements to grow, and spread misinformation on an unprecedented scale.


The platform has been used to incite genocide and mob violence in countries around the world. 

Meanwhile, its toxic effects on teenagers’ mental health and amplification of medical misinformation during the pandemic make the case that Facebook is a threat to public health. 

Facebook, it has become increasingly clear, cannot be trusted to govern itself.

In recent weeks, reams of leaked internal documents have revealed that Facebook has long been aware of the harms it creates. Researchers and policy teams within the company have spent years warning of the negative effects of the platform and its algorithms, but the company either failed or was simply unable to take action.

Lawmakers have called the leak Facebook’s Big Tobacco moment, and it is adding momentum to the long-standing effort to regulate Big Tech, one of the few major political issues with both bipartisan support in Congress and approval from the American public. After many false starts, it now seems as though Facebook and other platforms may face meaningful regulation that would force them to change how they operate.


“A real sort of bipartisan outrage over the behavior of these big tech platforms has really crystallized, and that is a huge change,” said Chris Pedigo, senior vice president of government affairs at Digital Content Next, a trade organization for digital publishers.

Facebook desperately wants to avoid regulation, critics say, except under the relatively moderate terms that it publicly supports. The company is pouring millions into corporate lobbying efforts while fighting legal battles around the world against attempts to rein in its power. One day before whistleblower Frances Haugen testified that the platform was harming its users, Facebook filed a motion urging a US District Court in DC to throw out a Federal Trade Commission lawsuit that claims the company violated antitrust laws.

Federal agencies such as the FTC are in the midst of a renewed push to regulate Facebook, while a growing number of data privacy laws in states such as California seek to limit the company’s ability to harvest the user data it relies on to sell targeted ads. Meanwhile, lawmakers have issued a range of proposals aimed at removing the company’s legal protections for hosting harmful content and stopping its amplification of misinformation.


Targeting Facebook’s algorithms

Much of the recent debate over how to regulate Facebook has focused on the company’s algorithms—generally speaking, the way it determines what users see on the platform. One of the core issues for regulators is not simply that users post hate speech or misinformation but that the platform’s algorithms amplify that content to the masses, creating financial or political gain for its creators.

“They spread misinformation at scale, which is a different problem than saying, ‘Oh, somebody is wrong on the internet,’” Joan Donovan, research director at the Harvard Kennedy School’s Shorenstein Center, said. “We have an entire industry of disinformers and media manipulators and medical profiteers that are incentivized to create this content and circulate it.”

Many lawmakers have suggested changes to Section 230 of the Communications Decency Act that would make Facebook legally liable for some of the content it hosts. The 1996 law shaped the course of the internet and gave platforms and online publishers broad immunity against lawsuits over third-party content. Facebook cannot be held legally liable for someone posting defamation, for instance, nor can Google face liability for users uploading videos to YouTube. These companies are treated as pipelines for content but not responsible for the content itself. 

Some Republican lawmakers, like Sen. Josh Hawley of Missouri, want to amend Section 230 to make it harder for tech companies to moderate content, claiming platforms have an anti-conservative bias, despite evidence to the contrary. Meanwhile, Democrats such as Sens. Amy Klobuchar of Minnesota and Ed Markey of Massachusetts have proposed amendments under which platforms would lose liability protection if they don’t take down or demote harmful content within a certain period of time.


A different approach, proposed by a former Facebook data scientist and other experts, is to hold platforms liable for content they spread algorithmically—or to remove personalized algorithmic amplification entirely. Instead of seeing what Facebook decides to show you in your newsfeed based on the data it has collected, you would see a simple chronological list of posts. This approach would also likely mean platforms scrapping the vast majority of suggested content and the newsfeed rankings that promote sensational or controversial posts.

But while this approach could prevent Facebook from pushing users toward extremist views, it could also lead to overcorrection, with platforms taking down swaths of content that don’t violate their policies. It is also highly likely to run into First Amendment challenges from the platforms, according to Daphne Keller, a constitutional lawyer and the director of Stanford University’s Program on Platform Regulation. Courts could rule that restricting the types of speech platforms can show to users, and their ability to rank posts up or down, is essentially the same as censoring that speech altogether.

“If what we want is for Facebook to take down a bunch of First Amendment–protected but terrible, offensive, and damaging speech, Congress cannot use the law to make that happen by some top-down mandate,” Keller said.

It also assumes that the platform’s content-moderation technology can accurately identify and remove violating content. Leaked internal documents showed that Facebook’s automated systems can’t consistently detect content that violates its policies. In one example from 2019, Facebook admitted that its automated systems failed to detect the video of the Christchurch massacre, in which a white supremacist killed 51 people at two mosques in New Zealand.


“These proposals presuppose that some kind of technology exists that’s able to quickly differentiate, with great confidence, between an innocent mistake and willfully malicious disinformation,” said Natalie Maréchal, senior policy and partnerships manager at the Ranking Digital Rights advocacy group. “That doesn’t exist.”

Taking aim at Facebook’s business model

Other efforts to rein in Facebook target its data-collection practices, while regulators are launching lawsuits against what they see as a monopolistic business model.

Under its new chair, Lina Khan, the Federal Trade Commission in August refiled an antitrust lawsuit against Facebook that claimed the company “resorted to an illegal buy-or-bury scheme to maintain its dominance.” Last month, DC Attorney General Karl Racine added Facebook founder Mark Zuckerberg to a privacy lawsuit that alleges Facebook misled consumers and allowed a third party to obtain sensitive data for tens of millions of users. 

In California and a number of other states, governments are also moving forward with data privacy laws that would restrict how Facebook collects user data. These laws would let users opt out of collection or delete the data Facebook gathers on their online activity, the same data the company uses to suggest content users may like and that can steer them toward harmful content.


Legislation aimed at restricting what platforms can do with users’ data is especially important for Facebook, researchers say, because the company is first and foremost a business that tracks users’ activity, sucks up their data, and uses it to sell targeted ads. Critics say this ad revenue model gives the company an incentive to keep users on its platform by promoting high-engagement posts, which often means inflammatory and divisive content that fuels polarization and radicalization. Privacy laws that put restrictions on Facebook’s brand of surveillance capitalism could have the dual effect of limiting discriminatory ad practices and preventing misinformation from being promoted to users.

Pushing for transparency

Before any attempt to regulate Facebook can succeed, many experts say, legislation is needed to force the company to disclose how its algorithms actually work. Sen. Ed Markey and Rep. Doris Matsui’s Algorithmic Justice and Online Platform Transparency Act would require Facebook and other platforms to report to the FTC on how their algorithms operate.

Facebook has repeatedly tried to evade outside efforts to examine how its algorithms function. Earlier this year the company forced the shutdown of two research projects studying how its platforms promote content, one from New York University and the other from the German organization AlgorithmWatch.

Both incidents highlighted what researchers say is a fundamental obstacle to any attempt to regulate the company: a stifling lack of transparency, to the point that even Facebook appears unclear on how its own algorithms work. While Haugen’s testimony last month and the leaked documents have offered a glimpse of how Facebook operates, researchers and lawmakers are calling for the company to finally let regulators examine its inner workings.

In one telling experiment, company researchers created a fake profile named “Carol Smith” and had it follow conservative accounts such as Fox News and Donald Trump; within two days, Facebook’s algorithms were suggesting that Carol join QAnon groups. The experiment was one of many tests Facebook conducted in recent years to understand the full impact of its algorithms, and one that only came to light through the leaked documents.

“We can't rely on whistleblowers. It's fantastic that they're there. It's fantastic that they have the courage to do something like this, but this is not enough,” said Matthias Spielkamp, the founder and executive director of AlgorithmWatch. “It’s not a regulatory model to just wait until some whistleblower tells us what is going on behind the scenes. We need more access.”