
Internal Documents Show Facebook Has Never Deserved Our Trust or Our Data

Internal emails between Mark Zuckerberg and Sheryl Sandberg lay bare the fact that Facebook has long sought to leverage user data to turn a profit.

The question Mark Zuckerberg has been asked ad nauseam—after each and every new scandal that has hit Facebook—is “why should users trust Facebook with their data?” It’s been asked by members of Congress, rhetorically by journalists in dozens of thinkpieces, and put directly to Zuckerberg on his most recent press calls. Zuckerberg has been forced to contritely answer this question so many times that many have said he’s gone on an “apology tour.”


Zuckerberg recently called the Cambridge Analytica scandal a “major trust issue” and, as the scandals have kept coming—a Facebook hack, content moderation practices that allow Holocaust denial and white nationalism, a PR campaign against its detractors including George Soros—Facebook has continued to pitch itself as a flawed but well-intentioned company that is learning as it goes and never imagined any of the harm that has been caused by its product.

But new internal Facebook documents, part of a lawsuit filed by a firm called Six4Three and published Wednesday by a member of the United Kingdom’s Parliament, show once and for all that Facebook knew the potential harms of its product all along and pressed ahead anyway. The answer to the question “why should we trust Facebook?” is: We shouldn’t, and we never should have.

The documents show that Facebook’s highest leadership, including Mark Zuckerberg and Sheryl Sandberg, purposefully designed a product intended to get users to share as much data about themselves as possible, then leveraged that data to work with as many advertisers as possible using a tactic known as “reciprocity.” The tactic was pitched by Zuckerberg in 2012 and signed off on by Sandberg in emails sent between top Facebook executives.

The model, the emails and documents say, was predicated on forcing developers who built on Facebook to let users of their third-party apps share their information back to Facebook itself. Facebook would then use that data to build richer profiles of users, which it could use to sell targeted ads to advertisers. Facebook also made the specific decision to give third-party apps access to users’ friend lists and other sensitive information.


“Full reciprocity means that apps are required to give any user who connects to FB a prominent option to share all of their social content within that service back to Facebook,” Zuckerberg wrote in an email to executives explaining Platform 3.0, a Facebook update which implemented reciprocity.

“Sometimes the best way to enable people to share something is to have a developer build a special purpose app or network for that type of content and to make that app social by having Facebook plug into it,” Zuckerberg wrote. “However, that may be good for the world but it’s not good for us unless people also share back to Facebook and that content increases the value of our network. So ultimately, I think the purpose of platform—even the read side—is to increase sharing back into Facebook.”

“I like full reciprocity,” Sandberg wrote in response to that email.

Facebook argued in a statement published Wednesday that the documents are cherry-picked and show only “one side” of the story. It reiterated that the company changed many of these policies in 2015, that many of them were “public” to begin with, and that it does not “sell” user data. But Facebook is arguing semantics here. The company rose to prominence, power, and influence on the back of these policies, which were predicated on addicting people to Facebook, monetizing their data, and pulling back only once its dominance had been secured or it had been called out in the press.


"I’m generally sceptical that there is as much data leak strategic risk as you think"

Facebook is not notably "evil" in these emails and documents, but they nonetheless show that the company is motivated largely by power and profit, like most companies. The documents show that Facebook is not bumbling, inept, or unimaginative about how its product could be used or how the public would respond to its decisions. On many of the things users and the media have pushed back against, the company knew all along what it was doing, or had at least considered the possible ramifications of its decisions, and implemented them anyway.

"I am not proud of the fact that we are currently extolling ‘game’ companies that make online slot machines as a positive example of those willing to pay our fees (I am fine with it, just not proud of it.)"

When Sam Lessin, then VP of product management, raised concerns to Zuckerberg that allowing third-party companies to see friend data could be a privacy or hacking risk (which was ultimately at the heart of the Cambridge Analytica scandal), Zuckerberg wrote back: “I’m generally sceptical that there is as much data leak strategic risk as you think. I agree there is clear risk on the advertiser side, but I haven’t figured out how that connects to the rest of the platform. I think we leak info to developers, but I just can’t think of any instances where that data has leaked from developer to developer and caused a real issue for us. Do you have examples of this?”


In an email to Zuckerberg and other executives, Lessin also specifically acknowledged that, in the early days, the only companies willing to pay for access to Facebook’s tools were sketchy game companies (the app that harvested the data Cambridge Analytica used to target voters was a quiz app).

“What we really have is a set of games made by people who see a financial opportunity to hack our system for free attention,” Lessin wrote. “I am not proud of the fact that we are currently extolling ‘game’ companies that make online slot machines as a positive example of those willing to pay our fees (I am fine with it, just not proud of it.)”

Time and time again, Facebook has acted in the press as though it somehow stumbled onto a product that has swallowed the internet, decimated industries, and scooped up and shared large swathes of consumer data. But the emails and documents show that Facebook’s design was always to collect as much data as possible in a way that favored Facebook the most.

Decisions were made to ensure that partners, developers, advertisers, and users provided the most “value” to Facebook, whether in the form of data, money, or being handcuffed to Facebook’s platform and ecosystem, the emails show. Executives regularly talked about how valuable Facebook’s distribution network was to developers, and how Facebook could leverage that distribution to get more data and information.


“If we were strategically OK with not giving [access to Facebook’s platform] away for free, then I think many more developers actually would accept a rev share to enable their users to connect with FB and share back to us,” one executive wrote. “Counterintuitively, once devs were paying for this, they’d be more invested in getting the most out of the integrations so they’d likely invest more and actually push even more info into FB.”

"This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it"

When Facebook’s actions came at the expense of competitors—great; Facebook sought to destroy them anyway by specifically restricting their access to the platform. When those actions came at a privacy risk to users or to the detriment of partners, Facebook sought to spin the reasons for its decisions or hide them altogether. For example, in an email from February 2015, several Facebook employees discussed the introduction of a “read call log” permission into the company’s Android app. This was to help Facebook with features such as identifying new people users may know and recommending them as Facebook friends, the email adds.

Facebook was fully aware of the risks of the controversial step.

"This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it,” one section of an internal email, and attributed to Facebook Product Manager Michael LeBeau, reads.


The email also discussed pushing the update without presenting users with a new permissions request panel, essentially not informing users of the change.
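
To make concrete what that permission exposes: READ_CALL_LOG is a real Android permission, and the Kotlin sketch below shows the kind of call-history query it gates. The function name and output format here are illustrative inventions, but CallLog.Calls and its columns are part of the standard Android SDK; any app granted the permission can enumerate every call the user has made or received.

    import android.content.Context
    import android.provider.CallLog

    // Illustrative sketch of the call-history query that
    // android.permission.READ_CALL_LOG unlocks. CallLog.Calls is the
    // standard Android content provider for the device's call log.
    fun dumpRecentCalls(context: Context) {
        val projection = arrayOf(
            CallLog.Calls.NUMBER,   // the other party's phone number
            CallLog.Calls.TYPE,     // incoming, outgoing, or missed
            CallLog.Calls.DATE,     // when the call happened (epoch millis)
            CallLog.Calls.DURATION  // how long it lasted, in seconds
        )
        context.contentResolver.query(
            CallLog.Calls.CONTENT_URI, projection, null, null,
            "${CallLog.Calls.DATE} DESC" // newest calls first
        )?.use { cursor ->
            val number = cursor.getColumnIndexOrThrow(CallLog.Calls.NUMBER)
            val date = cursor.getColumnIndexOrThrow(CallLog.Calls.DATE)
            val duration = cursor.getColumnIndexOrThrow(CallLog.Calls.DURATION)
            while (cursor.moveToNext()) {
                println("${cursor.getString(number)} at ${cursor.getLong(date)}: ${cursor.getLong(duration)}s")
            }
        }
    }

A query like this runs silently in the background once the permission is granted, which is why adding it without a fresh, explicit prompt was the PR risk LeBeau described.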

Talking about another permissions update, this time around Bluetooth, LeBeau again demonstrated how aware Facebook was of how its increasingly invasive techniques would be perceived, and how willing it was to risk bad PR for the benefit of growth.

"Screenshot of the scary Android permissions screen becomes a meme (as it has in the past), propagates around the web, it gets press attention, and enterprising journalists dig into what exactly the new update is requesting, then write stories about 'Facebook uses new Android update to pry your private life in ever more terrifying ways—reading your call logs, tracking you in businesses with beacons, etc,’” he wrote according to the email.

"But we're still in a precarious position of scaling without freaking people out,” he added. Facebook implemented it anyway.

The emails do not show a full history of Facebook, but they do show that the company is what its worst critics have always said it is: Once it formed a business model, it sought to maximize profits, hook users on its platform, get them to share as much information about themselves as possible, and leverage its power to push around partners and crush competition. These emails show that becoming an all-powerful everything machine was always the plan, and a very calculated one at that.

“As we've said many times, the documents Six4Three gathered for their baseless case are only part of the story and are presented in a way that is very misleading without additional context,” Facebook told Motherboard in a statement. “We stand by the platform changes we made in 2015 to stop a person from sharing their friends' data with developers. Like any business, we had many internal conversations about the various ways we could build a sustainable business model for our platform. But the facts are clear: we've never sold people’s data.”

But do you trust it?