Bizarre AI-generated videos of people expressing support for Burkina Faso’s new military junta have appeared online, in what could be a clumsy attempt to spread pro-military propaganda.
It’s unclear who created the videos, but it appears they are being shared via WhatsApp.
One of the videos spread widely on Twitter this week after it was posted by Lauren Blanchard, a specialist in African affairs at the Congressional Research Service in the US. Blanchard said that the video had been circulating on WhatsApp but that she did not know who had made it.
“Hello to the African people and particularly to the Burkinabe people. My name is Alisha and I’m a Pan-Africanist,” says one figure in the video, which VICE World News has confirmed was created by the AI video generator Synthesia.
Synthesia said that it had banned the user who created the videos but would not reveal their identity.
The AI-generated avatar in the video continues: “I appeal to the solidarity of the Burkinabe people, and the people of Burkina Faso to effectively support the authorities of the transition.”
“Let us all remain mobilised by the Burkinabe people in this struggle. Homeland or death. We shall overcome.”
The video has circulated alongside a similar clip, also created using Synthesia, in which several different avatars read the same script.
The second video was posted on Facebook on the 23rd of January by Ibrahima Maiga, who describes himself as a Sahelian political scientist from Burkina Faso. Maiga, who has posted in support of the country’s new regime, told VICE World News via Facebook Messenger that he had found the video on WhatsApp before posting it to Facebook, and that he had not been aware it was created with AI.
He appears to have deleted the video shortly after VICE World News contacted him.
Burkina Faso is currently ruled by Interim President Ibrahim Traore, who seized power in a coup in September. The army now runs the West African country, having claimed that previous governments had failed to halt the spread of jihadist violence.
Some users on Twitter attributed the videos to the Wagner Group, the Russian private military company, without evidence.
Wagner is one of the world’s most secretive and brutal mercenary groups, and has worked with governments in Mali and the Central African Republic. Burkina Faso’s military authorities have previously denied allegations that they’ve hired the mercenaries to help them fight an Islamist insurgency, but this week France agreed to a request from its former colony to withdraw all troops from the country.
While Russia has been known to deploy deepfakes during the war in Ukraine, it’s not known if Wagner has ever used AI to generate videos such as these.
Tracy Harwood, professor of digital culture at De Montfort University in the UK, told VICE World News that she’d be surprised if anyone really fell for the Burkina Faso videos.
“With the fallout from the evidence of Russian influence in manipulating political outcomes, who would really trust any online content in support of a political standpoint – the people that political issues impact need ground truth, not virtual truth,” she said. “These videos totally smack of keyboard warriors doing their thing as far from the action as they can possibly be!”
A video on how AI avatars are created on Synthesia’s YouTube account features one of the same avatars seen in the Burkina Faso videos, which says: “A team of content moderators makes sure that nobody breaks the rules, so it is impossible to spread misinformation, disinformation or obscenity through videos created by Synthesia Studio.”
The CEO of Synthesia, Victor Riparbelli, declined to tell VICE World News who created the videos, but said: “We have strict guidelines for which type of content we allow to be created on our platform and we deeply condemn any misuse. The recent videos that emerged are in breach of our ToS and we have identified and banned the user in question.
“As a company, we invested very early into content moderation to ensure our tens of thousands of customers benefit from a safe platform.”
He added: “Cases like this highlight how difficult moderation is. The scripts in question do not use explicit language and require deep contextual understanding of the subject matter. No system will ever be perfect, but to avoid similar situations arising in future we will continue our work towards improving systems.”