Kalashnikov Group, the Russian company behind the iconic AK-47 assault rifle, claims it has developed an artificial intelligence capable of identifying targets on the battlefield and making decisions.
That's right: a military AI, presumably with the power to decide for itself who the enemy is and whether to attack. It's a startling claim.
But don't panic. There are good reasons to doubt that the AI, assuming it's real, would actually be very useful in combat.
"In the imminent future, the Group will unveil a range of products based on neural networks," Kalashnikov spokeswoman Sofiya Ivanova told Russia's TASS news agency on July 5.
Ivanova said the company would debut a "fully automated combat module featuring this technology" at Russia's Army-2017 military trade show in August.
Kalashnikov is also developing robotic combat vehicles that could, in theory, include the company's new AI.
But as alarming as it sounds that one of the world's leading arms-makers is producing intelligent war robots, in reality this technology has been around for a long time. And yet _Terminator_-style autonomous killing-machines haven't overrun the planet.
"Neural nets already exist and have been trained to identify categories of things, including faces," Patrick Lin, a roboticist at California Polytechnic State University, told Motherboard in an email. "And autonomous robotics also exist."
But Kalashnikov's claim that the weapon is based on neural network technologies is vague, according to Lin. "What exactly is it learning to do? It could be as simple as recognizing humans, or as complicated as recognizing an adversary who's carrying a weapon," Lin said.
"This matters," he added, "because it affects reliability and predictability. Without them, there's a major loss of meaningful human control. Without that control, the system has very little value, unless you want to simply shoot at everything within its range."
Kalashnikov did not respond to an email seeking comment.
Peter W. Singer, author of _Wired for War_, a nonfiction book on military robotics, said Kalashnikov could be playing fast and loose with its terminology. "AI has such a fluid definition, let alone across languages," Singer told Motherboard over email. "Are they working on more autonomous systems? Without a doubt. Is it fully autonomous? Without a doubt not."
In weaponry, full autonomy can actually be a liability. The US military, for one, has been building highly autonomous target-detection into a range of weapon systems for many years, but still requires a human being to monitor these autonomous systems, second-guess the machines' results, and give the final approval to open fire.
That's one reason why the Air Force's missile-armed Reaper drone, for instance, requires two human operators. "There will always be a need for the man-in-the-loop," an Air Force drone operator told Foreign Policy. "There's always going to be people involved, making decisions for all this to work."
In any event, it pays to be skeptical of these claims regarding military robotics. Of the world's leading military powers, Russia has been the slowest to field sophisticated autonomous systems.
The US Air Force began flying Predator drones in the mid-1990s and armed them in 2001. The Russian air force operates a few unmanned aerial vehicles, but still hasn't added weapons to them.
_Watch **INHUMAN KIND**, Motherboard's 2015 doc on artificially intelligent machines and the future of war and the human race:_