
AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find

Algorithms could increasingly influence human culture, even though we don't have a good understanding of how they interact with us or each other.

A new study shows that humans can learn new things from artificial intelligence systems and pass them to other humans, in ways that could potentially influence wider human culture.

The study, published on Monday by a group of researchers at the Center for Humans and Machines at the Max Planck Institute for Human Development, suggests that while humans can learn from algorithms how to better solve certain problems, human biases prevented performance improvements from lasting as long as expected. Humans tended to prefer solutions from other humans over those proposed by algorithms, because they were more intuitive or less costly upfront, even if they paid off more later.


"Digital technology already influences the processes of social transmission among people by providing new and faster means of communication and imitation," the researchers write in the study. "Going one step further, we argue that rather than a mere means of cultural transmission (such as books or the Internet), algorithmic agents and AI may also play an active role in shaping cultural evolution processes online where humans and algorithms routinely interact."

The crux of this research rests on a relatively simple question: If social learning, or the ability of humans to learn from one another, forms the basis of how humans transmit culture and solve problems collectively, what would social learning look like between humans and algorithms? Given that scientists don't always understand, and often can't reproduce, how their own algorithms work or improve, the idea that machine learning could shape human learning, and culture itself, across generations is a frightening one.

"There's a concept called cumulative cultural evolution, where we say that each generation is always pulling up on the next generation, all throughout human history," Levin Brinkmann, one of the researchers who worked on the study, told Motherboard. "Obviously, AI is pulling up on human history—they're trained on human data. But we also found it interesting to think about the other way around: that maybe in the future our human culture would be built up on solutions which have been found originally by an algorithm."


One early example cited in the research is Go, a Chinese strategy board game in which an algorithm, AlphaGo, beat the human world champion Lee Sedol in 2016. AlphaGo made moves that human players were extremely unlikely to make, moves it learned via self-play rather than by analyzing human gameplay data. After the algorithm was made public in 2017, such moves became more common among human players, suggesting that a hybrid form of social learning between humans and algorithms is not only possible but durable.

We already know that algorithms can and do significantly affect humans. They're used not only to monitor and control workers and citizens in physical spaces, but also to manage workers on digital platforms and shape the behavior of the individuals who use them. Even studies of algorithms have previewed the worrying ease with which these systems can be used to dabble in phrenology and physiognomy. A federal review of facial recognition algorithms in 2019 found that they were rife with racial biases. One 2020 Nature Communications paper used machine learning to track historical changes in how "trustworthiness" has been depicted in portraits, but produced diagrams indistinguishable from well-known phrenology booklets and drew universal conclusions from a dataset limited to European portraits of wealthy subjects.

“I don't think our work can really say a lot about the formation of norms or how much AI can interfere with that,” Brinkmann said. “We're focused on a different type of culture, what you could call the culture of innovation, right? A measurable value or performance where you can clearly say, 'Okay, this paradigm—like with AlphaGo—is maybe more likely to lead to success or less likely.’"


For the experiment, the researchers used “transmission chains”: they created a sequence of problems to be solved, and each participant could observe the previous player's solution (and copy it) before attempting the problem themselves. Two chains were created: one with only humans, and a hybrid human-algorithm chain in which an algorithm followed humans and participants weren't told whether the previous player was a human or an algorithm.

The task was to find "an optimal sequence of moves" to navigate a network of six nodes, earning rewards with each move.
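To make the task concrete, here is a minimal sketch in Python of what such a "reward network" problem can look like: a six-node network where each move between nodes earns (or loses) points, and the goal is to find the highest-scoring sequence of a fixed number of moves. The network layout, reward values, and sequence length below are invented for illustration, not the study's actual materials; a simple brute-force search stands in for whatever strategy a player, human or algorithm, might use.

```python
# Hypothetical six-node reward network: rewards[a][b] is the payoff for moving a -> b.
# The structure and values are invented for illustration; the study used its own networks.
rewards = {
    0: {1: -20, 2: 20},
    1: {3: 20, 4: -20},
    2: {4: 140, 5: -120},
    3: {0: 20, 5: 20},
    4: {2: -20, 5: 20},
    5: {0: 20, 1: 140},
}

def best_sequence(start, n_moves=8):
    """Brute-force every sequence of n_moves moves and keep the highest-scoring one."""
    best_score, best_path = float("-inf"), None

    def explore(node, path, score):
        nonlocal best_score, best_path
        if len(path) == n_moves:
            if score > best_score:
                best_score, best_path = score, list(path)
            return
        for nxt, reward in rewards[node].items():
            explore(nxt, path + [nxt], score + reward)

    explore(start, [], 0)
    return best_score, best_path

if __name__ == "__main__":
    score, path = best_sequence(start=0)
    print(f"Best total reward: {score}")
    print(f"Move sequence: {path}")
```

With two outgoing moves per node, a sequence of eight moves has only a few hundred possibilities, so exhaustive search is feasible here; the interesting part of the experiment is not solving the network but whether players copy a predecessor's solution or search for their own.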

“As expected, we found evidence of a performance improvement over generations due to social learning,” the researchers wrote. “Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans’ solutions with comparable performance.”

Brinkmann told Motherboard that while the researchers were surprised superior solutions weren't adopted more often, this was in line with other research suggesting that human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to address this.

"One thing we are looking at now is what collective effects might play a role here," Brinkmann said. "For instance, there is something called 'context bias.' It's really about social factors which may also play a role, about unintuitive or alien solutions for a group can be sustained. We are also quite excited about the question of communication between algorithms and humans: what does that actually look like, what kind of features do we need from AI to learn or imitate solutions from AI?"