This article originally appeared on VICE US.
Facebook has spent the last two years trying to convince everyone that it can be trusted with people’s data. It failed and received a record $5 billion fine from the United States Federal Trade Commission.
So, naturally enough, Facebook now wants access to its users’ brainwaves.
Facebook announced an update Tuesday to its plans to build a headset that can monitor, read and translate users’ brainwaves, allowing them to type just by thinking.
First announced two years ago, the project is being run by Facebook Reality Labs, which is working with scientists at the University of California, San Francisco. The scientists ran an experiment using signals from electrodes that had already been placed on the brains of epilepsy patients.
By monitoring and recording the signals generated by a series of multiple-choice questions, the system was able to predict the correct answer with up to 76 percent accuracy.
While this is still a world away from Facebook’s stated goal of a "fully non-invasive, wearable device" that can process 100 words per minute, the advances made by the Facebook-funded scientists are still highly significant.
The scientists said the system could be utilized to help people who have lost the ability to speak to interact in a much more fluid manner.
But Facebook also said that in the near future, the new system could be used to help improve augmented and virtual reality headsets — such as those produced by the Facebook-owned Oculus — allowing users to complete actions such as “select” and “delete” using just their minds.
With augmented reality and brain-computer interfaces seen as the next frontiers for privacy, this research has raised significant concerns about enabling unprecedented rights violations.
“Any technology that’s so intimately connected to our bodies needs to meet the highest ethical, privacy and security standards,” Frederike Kaltheuner, who heads up corporate exploitation work at Privacy International, told VICE News. “Gathering data from raw brain activity can be used to make inferences about thoughts and mental processes — whether they are conscious or not.”
Facebook has been mired in a seemingly unending stream of privacy scandals for the last 18 months. In March 2018, the Cambridge Analytica scandal broke, revealing that Facebook had allowed app developers to mine users’ data without their knowledge — data that was subsequently used by the Trump and Brexit campaigns. Despite repeated pledges from the company and its founder, Mark Zuckerberg, to do more to protect people’s privacy, it has continued to show its inability to do so.
“This could put people at great risk, and could be used to influence, manipulate and exploit them,” Kaltheuner said. “Who has access to this data? Is this data shared with third parties? People need to be in full control over their data. Unfortunately, that’s a distant reality in the many places that lack comprehensive privacy laws. We absolutely cannot trust companies to regulate themselves.”
For its part, Facebook did include an ethics section in its blog post. “Neuroethical design is one of our program’s key pillars — we want to be transparent about what we’re working on so that people can tell us their concerns about this technology,” said Mark Chevillet, director of the brain-computer interface (BCI) research program at Facebook Reality Labs.
Facebook has also pledged to have its research governed by an ethics board.
Facebook’s scientists say the technology is still in the very early stages of development, but in a blog post the company says it still plans to demonstrate a prototype portable system by the end of the year, though it’s unclear what this will be capable of, or how it would measure the brain.
“To me, the brain is the one safe place for freedom of thought, of fantasies, and for dissent,” Nita Farahany, a professor at Duke University who specializes in neuroethics, told MIT Technology Review. “We’re getting close to crossing the final frontier of privacy in the absence of any protection whatsoever.”