Facebook Testing Implications of Privacy-Invading Tech By Invading People’s Privacy

Project Aria will send scores of Facebook workers into the world to record everything around them.
Image: Facebook

On Wednesday, Facebook announced a new project: the company would send out a hundred employees and contractors equipped with glasses that would record every piece of audio, visual, and spatial information possible in public and private spaces.

During its Facebook Connect livestream, the company dubbed this effort part of "Project Aria," a new attempt to research augmented reality and help Facebook understand potential ethical or privacy-related problems with AR and AR glasses. It will also have the incidental benefit of freely extracting and analyzing staggering amounts of data that will ostensibly train the algorithms powering these future products.


"We built Project Aria with privacy in mind and we've put provisions in place around where and how we'll collect data a well as how it will be processed, used, and stored,” Andrew Bosworth, Facebook's vice president of augmented and virtual reality, tweeted that day. 

In a promotional video for the project, this approach was described as "figuring out the right privacy and safety and policy model, long before we bring AR glasses to the world." Project Aria's promotional page insists not only that data extraction in privately owned places will require consent, but that data will be securely uploaded from the devices to "a separate designated storage space, accessible only to researchers with approved access." Project Aria also promises harvested data won't "be used to inform the ads people see across Facebook's apps," at least during this pilot project.

Still, none of this answers why Facebook needs to begin harvesting untold amounts of data, let alone why it should be allowed to. Facebook has such a long history of privacy scandals that, as cliche as it may sound, this is akin to letting a serial arsonist decide how to fight fires they’ve started. 

Facebook’s lofty rhetoric about “mapping the world” and teaching devices to “better help humans in the future” sounds great, but it's also familiar. There was a time when Facebook’s honeyed words about “connecting the world” were taken seriously, along with its stated mission to “bring the world closer together.”


On Monday, BuzzFeed News revealed an internal memo by a Facebook employee detailing how the company long ignored global political manipulation by national governments. The platform was used to incite genocide in Myanmar, has become a breeding ground for far-right disinformation campaigns and conspiracy theories, and saw an advertiser boycott emerge in part because its advertising monopoly insulated it from concerns about hate speech proliferating on its platform.

"New technology often has unintended consequences and negative externalities, and our job is to get ahead of ours," Bosworth said in the live stream. He's right, but let’s go a step further. When a company consistently finds a way to realize some of the worst consequences and negative externalities of seemingly every technology it develops, maybe the question for them is not how to do better next time, but how to stop

To his credit, Bosworth said Facebook would offer two external grants of $1 million each to fund research into how "vulnerable communities" could be negatively affected by AR. The company, however, is worth well over $700 billion and had more than $58 billion in cash on hand as of June 30 of this year.

For Facebook, what matters seems to be mapping the world digitally and positioning its devices and systems to interface with that map, learn from it, and help spawn successful products and services like AR devices. There’s no question for Facebook about whether AR should exist—or whether Facebook should be the one to advance its development and shape its ideas, designs, methods, and form.

And yet, if we vaguely understand the principle of “some technology shouldn’t exist” when negative outcomes persist for deeply embedded material or ideological reasons—like the racist and classist ways that facial recognition or predictive policing algorithms are deployed—then it’s not clear why we shouldn’t apply that logic to, say, companies like Facebook.