Google won’t allow people to create deepfakes using its collaborative machine learning platform any longer, according to the company.
Google’s Colaboratory service, known as Colab, allows users to run Python code from their browsers instead of on their own hardware, essentially giving them free access to a lot of computing power. People use Colab for everything from running Minecraft servers to training neural networks that can automatically recognize handwriting, but some people also use that same computing power to create non-consensual deepfake porn.
Google made no official announcement about the ban on deepfake creation, but as noted by Bleeping Computer, archived versions of the service’s FAQ show that “creating deepfakes” was added to the list of disallowed activities within the last month.
Last month, apparently shortly before this change was made, Motherboard published an investigation into DeepFaceLab, an open-source project used to create the viral Tom Cruise deepfake. DeepFaceLab is currently the most popular method for creating deepfakes, including deepfake porn. In fact, Motherboard’s investigation found that DeepFaceLab repeatedly links and sends users to Mr. Deepfakes, the biggest deepfake porn site online, in order to learn how to use the software.
A popular fork of DeepFaceLab is “DFL-Colab,” which allows users to create deepfakes on Google Colab rather than on their own hardware, which would require expensive and hard-to-obtain graphics cards. Under the new rule, Google is now banning the use of DeepFaceLab on Colab.
“We regularly monitor avenues for abuse in Colab that run counter to Google's AI principles, while balancing supporting our mission to give our users access to valuable resources such as TPUs and GPUs,” a spokesperson from Google told Motherboard. “Deepfakes were added to our list of activities disallowed from Colab runtimes last month in response to our regular reviews of abusive patterns.”
Deepfakes have a “large potential” to go against Google's AI principles, the spokesperson said: “We aspire to be able to detect and deter abusive deepfake patterns vs. benign ones, and will alter our policies as our methods progress.” Google’s AI principles state that applications of AI should be “socially beneficial” and should not cause harm.
Motherboard saw the developer of DFL-Colab, a man who goes by Nikolay Chervoniy, discuss the ban on the DeepFaceLab Discord, but Chervoniy declined to comment for this story.
Emanuel Maiberg contributed reporting to this story.