We've not reached the stage of autonomous killing machines quite yet, but the Pentagon will soon employ machine learning algorithms to help intelligence analysts identify Islamic State fighters in thousands of hours of drone surveillance footage, Defense One reports.
For those on the wrong end of the lens, it could mean that the difference between life and death is ultimately decided by artificial intelligence.
But opponents of the technology argue that automation bias and algorithms that lack certain human decision-making capabilities could lead to false conclusions.
"Humans have the tendency to switch to automatic forms of reasoning when presented with computer generated options and true human deliberation falls to the background," Daan Kayser, autonomous weapons expert at Dutch NGO PAX, told Motherboard.
Nevertheless, the Pentagon is going all in. The plan is part of Project Maven, the Department of Defense's venture to explore artificial intelligence, big data, and deep learning technologies.
As Defense One explains, thousands of US military and civilian analysts are deluged with the sheer volume of surveillance footage being recorded over Iraq, Syria, and other regions where the US has deployed drones. The drones, like the MQ-9 Reaper, are equipped with high-definition cameras that the analysts use to look for suspect activity. The analysts then note any such activity manually.
But there's just too much data, and according to Air Force Lt. Gen. John N.T. Shanahan, one of the directors behind the Maven initiative, the analysts need to be relieved by artificial intelligence, not an increase in staff.
"We're not going to solve it by throwing more people at the problem…That's the last thing that we actually want to do," he told Defense One. "How do we actually begin to automate that in a way that gives time back to analysts who otherwise spend 80 percent of their time doing…mundane, administrative tasks associated with staring at full-motion video."
But the program, spearheaded by Project Maven's 'Algorithmic Warfare Cross-Functional Team', is a prime example of the ongoing removal of humans from the decision-making process, a shift being seen across many sectors.
Jessica Dorsey, program officer at PAX, told Motherboard that this shift leaves a lot of questions about transparency and accountability with respect to any potential mistakes made in algorithmically analyzing data for targeting purposes.
Dorsey drew Motherboard's attention to a passage in the Defense One article that read:
"The Algorithmic Warfare team is working hand in hand with the Pentagon's Strategic Capabilities Office, a group modify existing weapons and technology to make them more versatile and lethal."
Dorsey told Motherboard, "There is no information about how humans will retain meaningful control over the decision-making process as these technologies become more modified and more lethal."