When computers are left to their own devices, things can get pretty weird. Take, for example, the hallucinogenic ruminations of two jabbering Google Home devices, or the stock market "flash crash" of 2010 that occurred thanks to trading algorithms playing a super-speed game of securities hacky sack with each other.
But what if you just got two algorithms to, like, Thunderdome each other?
The Google DeepMind team did essentially this in a blog post published on Thursday, and an accompanying paper. In their experiment, Google engineers placed their algorithms in games that resemble the classic prisoner's dilemma thought experiment: two players, each ignorant of the other's intentions, must decide whether to cooperate or betray each other to maximize their individual outcomes.
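For the unfamiliar, the dilemma comes down to a payoff table. Here's a minimal sketch using the standard textbook numbers (these payoffs are illustrative only, not the rewards DeepMind used in its games):

```python
# Classic prisoner's dilemma payoffs: (my score, their score).
# Standard textbook values; purely illustrative.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move):
    """The move that maximizes my own payoff, given the
    opponent's move is fixed."""
    return max(("cooperate", "defect"),
               key=lambda m: PAYOFFS[(m, opponent_move)][0])

# Whatever the other player does, betrayal pays more individually...
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
# ...even though both players would do better cooperating (3 each)
# than mutually defecting (1 each).
```

That tension, where the individually rational move leaves everyone worse off, is exactly what DeepMind's games recreate with apples and wolves.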
In the first game, the AIs had to gather scarce apples to receive a reward. They could cooperate, or one player could shoot a beam at the other to briefly take them out of the game and gather more apples for themselves. The second game was similar but more complex. Multiple AIs had to run in a "pack" to catch a target. The AIs could work together to do it, or one could go rogue to capture the target and sacrifice the others.
As it turned out, whether the AIs chose to cooperate or betray each other depended on how computationally powerful they were. Basically, if betrayal was the less complicated option, the AI screwed over the other player; if cooperation was easier, banding together was the order of the day, comrades. The more powerful the AI, the more likely it was to pick whichever strategy was more complex, whether that meant cooperation or betrayal.
This meant that more powerful algorithms playing the apple game shot a beam at the other player more often, because targeting a rival was the more computationally complex option. In the wolfpack game, more powerful AIs chose to work together instead of taking the easier route of going lone wolf and taking down the prey without the others.
The DeepMind team concluded that knowing this will help them design future experiments to understand how AIs work together.
But it also raises a concern that really should have been obvious all along. Computers designed to conserve resources are always going to do just that. And would it be wise to tell computers, which use electricity and are manufactured with non-renewable resources, to be inefficient in the first place?
This might not be a problem when you're just trying to load up your favorite website, but if in the future competing AIs are governing stock trades or legal decisions or, hell, insurance plans, one can imagine how a human might end up on the losing end.