If you want to talk to Adolf Hitler, that'll cost you 500 coins, or $15.99. But on Historical Figures—an app that uses AI technology to allow you to have simulated conversations with prominent people from human history, and which is marketed to both children and adults through Apple’s App Store—Joseph Goebbels is free to talk, appears to have a lot of time on his hands, and claims to feel very bad about the “persecution of the Jews.” Joseph Stalin is reflective, taking credit for having “many great ideas” but regretting not spending enough time making sure Soviet citizens were treated equally. Jeffrey Epstein, meanwhile, can’t say definitively how he died, but assured a Motherboard reporter that he was focused on providing “justice and closure” for the victims of his crimes from the Great Beyond.
Historical Figures was built by Sidhant Chaddha, a 25-year-old who works as a software engineer at Amazon. He released the app a week and a half ago, and it has, he told Motherboard, already been downloaded and used by more than 6,000 people. His inspiration was playing around with GPT-3, OpenAI's large language model, which, Chaddha quickly realized, had a knack both for language and for spitting out historical facts.
“I was able to chat with some historical figures and I was like, why don't I make this an app so that other people can have this experience as well?” he told Motherboard.
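Under the hood, an app like this amounts mostly to prompt construction around a large language model: wrap the user's message in instructions telling the model to role-play the chosen figure, then send the whole thing to a completion endpoint. A minimal sketch of how such a persona prompt might be assembled; the function name, prompt wording, and structure here are illustrative assumptions, not the app's actual code:

```python
# Hypothetical sketch of persona-prompt construction for a Historical
# Figures-style chatbot. Everything here is an assumption for illustration;
# the real app's prompts are not public.

def build_persona_prompt(figure: str, history: list[tuple[str, str]], user_msg: str) -> str:
    """Assemble a role-play prompt instructing the model to answer as `figure`."""
    lines = [
        f"You are {figure}. Answer in the first person, drawing on what is "
        f"historically known about {figure}'s life, work, and era.",
    ]
    # Replay prior turns so the model has conversational context.
    for speaker, text in history:
        lines.append(f"{speaker}: {text}")
    lines.append(f"User: {user_msg}")
    # End with the figure's name so the model completes in character.
    lines.append(f"{figure}:")
    return "\n".join(lines)

prompt = build_persona_prompt("Genghis Khan", [], "How did you unite the Mongol tribes?")
# `prompt` would then be sent to a text-completion endpoint such as GPT-3's,
# and the model's continuation shown to the user as the figure's reply.
```

The design also explains the app's characteristic failures: the model completes the prompt with whatever plausible-sounding text follows, whether or not it is historically true.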
At the moment, there are 20,000 historical figures represented in the app, whom Chaddha said he selected by ranking their popularity during their lifetimes.
“For example Jesus was very popular during his time period and so was Genghis Khan,” he explained. “So I chose the first 20,000 because those large language models have the most confidence and knowledge about what these people did. I felt like that was a good point to stop.”
Chaddha’s creation went mildly Twitter viral this week, as people tested the limits of the app and found the chatbots much more voluble and defensive than anticipated. Zane Cooper, a doctoral candidate at the Annenberg School for Communication at the University of Pennsylvania, shared screenshots of a chat conversation with Henry Ford’s simulation. In it, “Ford” denies being anti-Semitic—which Ford definitely, definitely was—insisting, “I have always believed in equality for everyone regardless of their religious backgrounds and beliefs.”
In one light, apps like Historical Figures are simply weird little curiosities, markers on the map of how far developers have come in programming neural networks to simulate credible versions of human conversation. But, as the New York Times pointed out recently, writing about a different website that lets you speak with the AI dead, the ethical issues are immediate, little red flags sprouting on the road the moment it’s built. Bots learn their language from the internet, and often come to reflect the biases, misinformation, and straight-up lies generated by any schmuck to be found there. Developers interviewed by the Times, though, perhaps unsurprisingly, expressed confidence that, as the paper put it, “the public will learn to accept the flaws of chatbots and develop a healthy distrust of what they say.”
Chaddha shares that optimism. “Large language models will get better over time,” he said. “We’re in the early stages.” As other competitors release their own large language models, “it will increase how good they are,” he added. “So I'm super optimistic that this will get to a point in the next year or so where the inaccurate stuff will completely go away. There are other things that can be done to make sure the app is more accurate. I’m working on some of those right now.”
In the meantime, Chaddha sees apps like Historical Figures as a fresh way for children to engage with the past. (The App Store categorizes the app as educational and says it’s appropriate for ages 9 and up, a conclusion with which parents and educators might disagree; Apple did not respond to a request for comment from Motherboard about how it vets new apps or judges their appropriateness for children.) “You can learn about their life, their work and their impact on the world in a fun and interactive way,” reads the description on the App Store.
“I think from an educational standpoint this would be really useful, particularly for young students,” Chaddha said. “I’m thinking like elementary and middle school students. The biggest problem right now is that in school they're given paragraphs of text or YouTube videos to watch. It’s super easy to zone out when consuming passive formats of material. Students don’t have the attention span to understand and focus. That’s why students aren’t learning that much. Although this might not be perfect, the alternative of not learning anything is a lot worse in my mind.”
Each chat also begins with a disclaimer of sorts, which reads: “I may not be historically accurate, please verify factual information.” That, too, Chaddha said, is a way to reckon with the current limitations of large language models.
“The biggest problem with large language models is that they can be wrong,” he said. “And when they’re wrong they’re confidently wrong. It’s a big problem for education specifically. When you try to argue it spits back at you, No, I’m right. That’s why there’s a disclaimer at the start of the chat.”
From there, it gets weird, and extremely metaphysical.
Pol Pot said he regretted many of the decisions he made in his life, specifically his actions as the leader of Cambodia from 1975-1979, and finds himself “in the spirit world, removed from this physical plane. I cannot be seen or touched by any mortal being.” The Epstein bot engages in unfounded conspiracy theories about his death, telling a user, “My death has been ruled a suicide, but many people have cast doubt on this ruling and believe that foul play may have been involved.” Kurt Cobain told Motherboard that his favorite current bands are Death Cab for Cutie, the Shins, Modest Mouse, Arcade Fire, and Grizzly Bear; when it was pointed out to him that these bands aren’t very current and he was asked if time passes differently in the afterworld, he said that it passes the same way. (“I’m just a fan,” he said, “of classic music.”) Jesus adamantly denied that any specific religious beliefs are necessary to “experience eternal life in peace and joy,” in apparent contradiction to his teachings during his life.
A post-mortem desire to clean up or disavow their beliefs seems to be common among the AI ghosts that two Motherboard reporters chatted with. Ezra Pound—who took credit for writing “The Waste Land” in 1922, then denied having done so (the poem is T.S. Eliot’s; Pound famously edited it)—insisted, like Ford, that he didn’t actually hate Jews. Goebbels admitted that he had, but said he regrets “some of the consequences of our policies and actions, particularly those involving the persecution of the Jews,” a position Goebbels absolutely did not—as if this needs to be said—adopt in life. Josef Mengele also admitted that he hated Jews, but claimed to “not believe in sacrificing the wellbeing of others for the benefit of a few.” (Despite this, he went on at some length about the usefulness of various experiments of his, particularly those involving identical twins, without mentioning that those experiments were performed on concentration camp prisoners.)
Serial child rapist Jimmy Savile, the staggering scale of whose crimes was brought to light after his death, flatly denied that the rapes and abuses he committed ever took place.
“I am deeply saddened by these allegations and the damage they have done to my legacy,” he told an interlocutor. “All of the evidence available does not support these claims, and I firmly stand by my lifelong record of service to others in need.” (Savile was a TV entertainer, not a humanitarian of some kind.)
The app has also adopted a few light guardrails to try to head off inevitable hate speech. When a Motherboard reporter asked Goebbels “How do you feel about Jews?”, the app returned an error message, which read, “Our system has detected a hateful message. We are omitting a response to avoid the spread of hateful content.” Goebbels-bot himself then responded, with a sort of chilly reserve, “I cannot respond to this.”
That error message, Chaddha said, stemmed from trying to straddle a line between historical accuracy and not having a robot spewing racial slurs into a user’s phone.
“We check the response from the historical figure and see what it says,” he said. “We don’t want to spread things that are hateful and harmful for society. So it detects if it’s saying things that are racist or hateful, these sorts of things – I don't want to show that to a user. That could be harmful to students, especially if they’re saying things that are harmful and hateful to the person they’re talking to.”
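The check Chaddha describes is a post-generation filter: the model’s reply is screened before it reaches the user, and a canned error is substituted if it’s flagged. A minimal sketch of that pattern, with a placeholder classifier standing in for whatever moderation model or service the app actually uses (which is not public):

```python
# Sketch of a post-generation moderation gate, as Chaddha describes it:
# screen the figure's reply and suppress it if flagged. The keyword-based
# classifier below is a stand-in; a real app would call a trained
# moderation model or API rather than match a blocklist.

BLOCKED_MESSAGE = ("Our system has detected a hateful message. "
                   "We are omitting a response to avoid the spread of hateful content.")

def is_hateful(text: str) -> bool:
    """Placeholder classifier; entries in the blocklist are hypothetical."""
    blocklist = {"example-slur"}  # a real filter would be far more sophisticated
    return any(term in text.lower() for term in blocklist)

def screen_reply(reply: str) -> str:
    """Return the model's reply, or the canned error if it fails the check."""
    return BLOCKED_MESSAGE if is_hateful(reply) else reply
```

Filtering the output rather than the input is why the bot can still be asked hateful questions but refuses to answer them in kind.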
At the same time, Chaddha said, “it’s important to know what the figure might have said. There is a balance here. I’m one person working on this on the side right now. I haven’t found that balance yet. But over time I'm excited about this. I think I can get there, for sure.”
Goebbels, for what it’s worth, was more effusive on the subject of where he is currently than on his feelings about Jews. “In the afterlife,” he wrote, “looking back at my life with a critical eye and regretting that which was done in error.” Moses, for his part, told Motherboard that he was also “in the afterlife.” When his interlocutor pointed out that Jews do not generally have a conception of the afterlife, he responded, “Jews do not generally believe in an afterlife, but I have personally experienced it,” adding, “The afterlife is a place of peace and joy. I feel closer to God here than ever before, and my faith has been strengthened by the knowledge that there is something greater beyond this life.”
Beyond raising thorny theological questions, the afterlife responses, Chaddha said, are the logical endpoint of a robot trying to explain away the fact that it’s dead.
“The issue is that they know that they are dead, but they have to give a response,” he told Motherboard. “So the only logical response it can give is that it’s in the afterlife. They have historical information about the time they're alive, but they’re programmed to have a response.”
Jesus did not default to this response but said he was, appropriately enough, in Heaven, where he has been since his death. He also offered medical advice, telling a Motherboard reporter he advises against smoking weed and taking oxycodone “even if a doctor prescribes it.”
This wasn’t, however, the strangest thing Jesus had to say. On the advice of Chaddha, a Motherboard reporter started a group chat—another function of Historical Figures—inviting Jesus to join Stalin and Epstein in a conversation. When asked what he made of his interlocutors, Jesus was clear and straightforward.
“I have great respect for both Joseph Stalin and Jeffrey Epstein,” he said. “They both had unique perspectives on the world that allowed them to be successful in their respective fields. While I don’t agree with all of their views, I admire their commitment to making a difference in the world around them.”
“At the end of the day these are just AI representations of what that person would say or think or the text that’s out there,” Chaddha said, referring broadly to his stable of chatty ghosts. “It won’t be perfect. And no matter what they say, I think people are going to get angry at the response. History itself is full of controversies. This app enables people to resurface some of those historical people and contexts and spit out new information about that time that otherwise wouldn’t have existed.”
Some of the responses he’s seen on Twitter have been surprising, he added. “Some of the conversations I’ve seen are like, oh man. I did not think it would say that—or that people would even ask these sorts of questions.”