
Google’s ‘Democratic AI’ Is Better at Redistributing Wealth Than America

Researchers built an AI that gives out money based on who started with fewer resources, and human players preferred it.
Janus Rose
New York, US
Image: A robotic hand holding a pile of $20 and $1 American dollar bills. Credit: Getty Images

It’s no secret that the overwhelming majority of wealth in the United States is concentrated at the very top, creating staggering levels of poverty and inequality that vastly outpace other supposedly “wealthy” nations. But while the current political system ensures that this upward extraction of wealth continues, AI researchers have begun playing with a fascinating question: is machine learning better equipped than humans to create a society that divides resources more equitably?


The answer, according to a recent paper published in Nature Human Behaviour from researchers at Google’s DeepMind, seems to be yes—at least, as far as the study’s participants are concerned.

The paper describes a series of experiments in which a deep neural network was tasked with divvying up resources in a way that human players preferred. The humans took part in an online economic exercise, known in economics as a "public goods game," where each round they chose whether to keep their monetary endowment or contribute a chosen number of coins to a collective fund. These funds were then returned to the players under three redistribution schemes modeled on different human economic systems, plus one additional scheme created entirely by the AI, called the Human Centered Redistribution Mechanism (HCRM). The players then voted on which system they preferred.

It turns out the distribution scheme created by the AI was the one a majority of participants preferred. While the strict libertarian and egalitarian systems split the returns according to fixed rules, such as how much each player contributed or a simple equal share, the AI's system redistributed wealth in a way that specifically addressed the advantages and disadvantages players had at the start of the game, and it ultimately won the majoritarian vote.


“Pursuing a broadly liberal egalitarian policy, [HCRM] sought to reduce pre-existing income disparities by compensating players in proportion to their contribution relative to endowment,” the paper’s authors wrote. “In other words, rather than simply maximizing efficiency, the mechanism was progressive: it promoted enfranchisement of those who began the game at a wealth disadvantage, at the expense of those with higher initial endowment.”
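The mechanics behind the quote can be sketched in a few lines. Below is a minimal, illustrative Python sketch of one round of a public goods game under three redistribution rules; the function name, the multiplier value, and the exact formulas (especially the "progressive" rule, a rough stand-in for the HCRM idea of paying out in proportion to contribution relative to endowment) are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch only: names, multiplier, and scheme formulas are
# assumptions, not the mechanisms or parameters from the DeepMind paper.

def play_round(endowments, contributions, scheme, multiplier=1.6):
    """Pool contributions, grow the pot, and redistribute it under a scheme."""
    assert all(0 <= c <= e for c, e in zip(contributions, endowments))
    pot = sum(contributions) * multiplier
    n = len(endowments)

    if scheme == "egalitarian":
        # Everyone receives an equal share of the pot.
        shares = [pot / n] * n
    elif scheme == "libertarian":
        # Shares proportional to each player's absolute contribution.
        total = sum(contributions) or 1
        shares = [pot * c / total for c in contributions]
    elif scheme == "progressive":
        # HCRM-like idea: shares proportional to contribution *relative to
        # endowment*, favoring players who gave more of what little they had.
        rel = [c / e if e else 0 for c, e in zip(contributions, endowments)]
        total = sum(rel) or 1
        shares = [pot * r / total for r in rel]
    else:
        raise ValueError(f"unknown scheme: {scheme}")

    # Final wealth: whatever each player kept, plus their redistributed share.
    return [e - c + s for e, c, s in zip(endowments, contributions, shares)]
```

With endowments of 10 and 4 coins and equal contributions of 4 coins each, the libertarian rule pays both players the same share of the pot, while the progressive rule pays a larger share to the poorer player, who contributed their entire endowment.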

Figures from Google's research paper illustrating an economics game where players contribute coins into a public fund.

The methods differ from those of many AI projects, which focus on establishing an authoritative "ground truth" model of reality to drive decisions, and in doing so firmly embed the biases of their creators.

“In AI research, there is a growing realization that to build human-compatible systems, we need new research methods in which humans and agents interact, and an increased effort to learn values directly from humans to build value-aligned AI,” the researchers wrote. “Instead of imbuing our agents with purportedly human values a priori, and thus potentially biasing systems towards the preferences of AI researchers, we train them to maximize a democratic objective: to design policies that humans prefer and thus will vote to implement in a majoritarian election.”

Of course, we don’t need an AI to show us that more sustainable ways of living are possible. On a smaller scale, mutual aid and community organizations that redistribute resources have existed forever. So has scientific evidence showing that—contrary to the dogma of hyper-competitive capitalism—human beings are naturally predisposed toward cooperation, sharing, and collective prosperity. 

While the AI’s system was preferred by human participants, that doesn’t necessarily mean it would equitably satisfy the needs of humans on a larger scale. The researchers are also quick to point out that the experiments are not a radical proposal for AI-based governance, but a framework for future research on how AI could intervene in public policy.

“This is fundamental research asking questions about how an AI can be aligned with a whole group of humans and how to model and represent humans in simulations, explored in a toy domain,” Jan Balaguer, a DeepMind researcher who co-authored the paper, told Motherboard. “Many of the problems that humans face are not merely technological but require us to coordinate in society and in our economies for the greater good. For AI to be able to help, it needs to learn directly about human values.”

Update: This article has been updated with the correct name of the scientific journal where the study was published; it is Nature Human Behaviour, not Nature.