In a paper that was blocked from publication by Google and led to Gebru’s termination, she and her co-authors forced the company to reckon with a hard-to-swallow truth: that there is no clear way to build complex AI systems trained on massive datasets in a safe and responsible way, and that they stand to amplify biases that harm marginalized people.
But rather than focus on the various well-documented ways that algorithmic systems perpetuate bias and discrimination, the latest fixation for some in Silicon Valley has been the ominous and highly controversial idea that advanced language-based AI has achieved sentience. This was epitomized over the weekend, when Google engineer Blake Lemoine “interviewed” the company’s LaMDA AI about its “inner life” and published it on Medium. It included this passage (among many others about “sentience”):
"lemoine: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person."
Following that blog post, Lemoine told the Washington Post that he believes that LaMDA has become self-aware.
Speaking to the Post, Lemoine said that working with massive-scale systems such as LaMDA has convinced him and others in Silicon Valley that advanced machine learning systems have become intelligent beings capable of reasoning. The previous week, a Google vice president had made similar claims in an op-ed for the Economist, writing that AI models were taking steps toward developing human-like consciousness.
“I know a person when I talk to it,” Lemoine told the Post. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
Google later distanced itself from Lemoine’s bombastic claims, placing him on paid leave and saying that “the evidence does not support” his belief in machine sentience. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” a Google spokesperson told the New York Times.
The ensuing debate on social media led several prominent AI researchers to criticize the ‘superintelligent AI’ discourse as intellectual hand-waving.
“Large Language Models (LLMs) are not developed in a social context. They are developed in an observational context. They see how *other people* communicate,” wrote Margaret Mitchell, an ex-Google AI researcher and co-author of the paper which warned about large AI systems, in a Twitter thread. “The thing I keep coming back to is what happens next. If one person perceives consciousness today, then more will tomorrow. There won't be a point of agreement any time soon: We'll have people who think AI is conscious and people who think AI is not conscious.”
Meredith Whittaker, an ex-Google AI researcher who teaches at NYU’s Tandon School of Engineering, said that the discussion “feels like a well-calibrated distraction” that gives attention to people like Lemoine while taking pressure off the big tech companies that build automated systems.
“I’m clinically annoyed by this discourse,” Whittaker told Motherboard. “We’re forced to spend our time refuting childsplay nonsense while the companies benefitting from the AI narrative expand metastatically, taking control of decision making and core infrastructure across our social/political institutions. Of course data-centric computational models aren’t sentient, but why are we even talking about this?”
For many AI researchers, the AI sentience discussion is well-trodden territory. Despite flashy news headlines, humans have long seen themselves in the technology they create. Computer scientists even coined a term, the ELIZA effect, to describe our tendency to assign deeper meaning to computational outputs and to relate to computers by ascribing them anthropomorphic qualities.
Conversely, concerns over AI bias are very much grounded in real-world harms. Over the last few years, Google has fired multiple prominent AI ethics researchers after internal discord over the impacts of machine learning systems, including Gebru and Mitchell. So it makes sense that, to many AI experts, the discussion on spooky sentient chatbots feels masturbatory and overwrought—especially since it proves exactly what Gebru and her colleagues had tried to warn us about.