
Even the Pentagon Is Using ChatGPT Now

The Pentagon used ChatGPT to write a press release about a new counter-drone task force.
Image: U.S. Army photo.

ChatGPT, OpenAI’s convincing machine-learning-powered chatbot, is absolutely everywhere lately. Even the U.S. Army is using it, Motherboard has found. 

The Pentagon used OpenAI’s ChatGPT to write an article, published February 8, about the launch of a new counter-drone task force. A note at the top of the article flagged that it had been written by AI.

“The article that follows was generated by OpenAI's ChatGPT. No endorsement is intended,” it said. “The use of AI to generate this story emphasizes U.S. Army Central's commitment to using emerging technologies and innovation in a challenging and ever-changing operational environment.”

The article appeared on the DoD’s Defense Visual Information Distribution Service (DVIDS), an internet repository of news and images generated by the American military. It was about the launch of Task Force 39, which, according to ChatGPT, is “an empowered, collaborative group of soldiers dedicated to fostering an innovative culture and pursuing regional and industry partnerships in order to generate future combat efficiency. The team is focused on countering the threat of small Unmanned Aerial Systems and developing innovative solutions to other security challenges.”

Like everyone else, the Pentagon has been making a show of betting big on AI. Writing articles on DVIDS is the lowest-stakes version of what it’s working on. Elsewhere on DVIDS, you can find articles and pictures of the Pentagon’s experiments with autonomous drone swarms. A 2018 Army slide on DVIDS paints a picture of infantry trading information with AI-controlled drones cruising through a bombed-out city. At Joint Base Lewis-McChord, members of the Air National Guard are already using a training system built with artificial intelligence.

The Pentagon has joined the ranks of BuzzFeed and CNET in turning blogging tasks over to a machine. Those decisions caused outrage because they appeared to be cost-cutting measures, and they have had other problems as well. CNET’s article-writing AI introduced factual errors into articles, leading to a legendary 163-word correction. It was also found to have plagiarized writing by other authors.