Image: Kim Kardashian. Screenshot via Instagram

The Kim Kardashian Deepfake Shows Copyright Claims Are Not the Answer

The deepfake of Kim Kardashian was taken down by Condé Nast, but it's an example of how copyright laws are a poor replacement for real moderation.

A week after a deepfake video of Mark Zuckerberg went viral and forced Facebook to fact-check art, YouTube has removed a satirical deepfake of Kim Kardashian by the same creator.

But unlike the Zuckerberg video, which sparked a conversation around how platforms define misinformation versus satire, the Kardashian video was taken down using one of the internet’s bluntest weapons: a copyright claim, in this case from Condé Nast, which created the original video the deepfake used.


The takedown shows that copyright claims can be an effective way to quickly remove deepfakes from internet platforms, but it raises an important question: Should copyright holders have the right to remove a deepfake that was created to make a political statement?

The Kardashian deepfake was created by the same group of artists who brought the world the viral deepfake of Zuckerberg bragging about controlling Facebook users' data. Like the Zuckerberg video, the Kardashian deepfake comments on the power social media companies hold over their users.

"When there's so many haters, I really don't care because their data has made me rich beyond my wildest dreams," Kardashian says in the manipulated video.

The Zuckerberg deepfake was flagged by two of Facebook's fact-checking partners and its distribution was limited on Facebook and Instagram, but the social media giant decided not to remove it, opting to flag it as false and let users decide for themselves.

The Kardashian deepfake is also still up on Instagram, but not on YouTube, because Condé Nast, the media company that published the source video, filed a Content ID claim with YouTube. Content ID is YouTube's system that lets copyright holders block, monetize, or track videos that match their copyrighted content. In the case of the Kardashian video, Condé Nast chose to block it.

The Kardashian deepfake was created by manipulating a video interview titled "73 Questions With Kim Kardashian West," published by Vogue in April. The original video is 11 minutes long, but the portion used to train the deepfake was taken from a one-minute scene within that interview.


Bill Posters, one of the artists who created the Zuckerberg and Kardashian deepfakes, told Motherboard that on June 12, the day after Motherboard published its article on the Zuckerberg deepfake, he received an email notification from YouTube alerting him that Condé Nast had requested the video be blocked.

"We would have thought that our artworks are covered under artistic and satire use protection under UK copyright law, however the video has been blocked in all territories," Posters said.

Condé Nast declined to comment on this removal before publication.

The problem with using a copyright claim against the Kardashian deepfake is that its creators didn't just reupload Vogue's original video. They deliberately manipulated it to make a statement. In copyright law, a work that is transformative in nature, such as parody, is more likely to be considered fair use rather than copyright infringement. Fair use covers purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Courts also weigh how much of the copyrighted work is used; in this case, only around one-tenth of the original video went into the deepfake.

Electronic Frontier Foundation policy analyst Joe Mullin told Motherboard that the video's creators probably have a good case to argue that their work is fair use.

"The deepfake video uses a small part of the original, and transforms it substantially. It's not a replacement for the original video and it's hard to imagine that it would hurt the market value of the original," Mullin said. "Unfortunately, copyright owners don't always consider possible fair use cases before sending DMCA takedowns that censor speech—even though the 9th Circuit's Lenz decision makes it clear that they must do so."


"For transformative work, there are a variety of factors to consider," Suzie Dunn, faculty of law at the University of Ottawa, told Motherboard. "You can't just call it art or call it parody to make it transformative. It has to have some sort of new and meaningful expression to it. You can't just mock to mock, you need to have a message or meaning behind it."

Artists sometimes need to issue takedowns to stop bootlegged versions of their work from spreading, as do porn performers whose work is ripped and reposted to tube sites like Pornhub, and revenge porn victims who want to scrub harmful content from the internet. Individuals can't always spend the hours or days it takes to track down videos and issue notices, but a company like Condé Nast has the resources to do so.

"I do think, whether we call it privilege or just the loud and the visible, platforms are highly unresponsive to most people," Sam Gregory, program director at international human rights organization WITNESS told Motherboard. "And they're incredibly un-transparent…[users] feel like they're talking into a black hole when they talk to a platform, because what's happened to their content is not considered important."

Dunn said that in the case of the Kardashian deepfake, she also thinks there could be a strong argument for fair use—with a caveat.

"I think the issue with using copyright to address deepfakes, is that it doesn't get to the core of the issue," she said. "Deepakes aren't really about copyright to a particular image [i.e. the original Vogue video] but what right people should have to control their representations online. We should be looking to how this impacts a person's autonomy and integrity, rather than looking to copyright for solutions to deepfakes. Copyright is more likely to protect celebrity content like this, but will be more challenging for the everyday target of deepfake to make a claim."


When deepfakes first emerged, this challenge was very clear: The law wasn't, and still isn't, prepared to handle content like revenge porn and the misuse of non-famous people's images. Celebrities and public figures like Kim Kardashian can force a takedown through rights of publicity, which protect their likenesses from unauthorized use, but often the most the average person can do is sue under whatever civil claims apply, such as defamation or intentional infliction of emotional distress.

"When people want to get material taken down from social media, because they disagree with its message, copyright claims are almost never the way to do it," Mullin said.

Copyright claims are just a Band-Aid for issues of consent and ownership over our own likenesses and personal data online, and they're too prone to misuse by powerful entities. Those are the very issues the art group behind the Zuckerberg and Kardashian deepfakes attempted to critique with its work in the first place.

The creators can appeal, but Posters said the group is still deciding whether the takedown is even worth contesting.

"The important point is not whether YouTube or Facebook think these AI generated videos are art or not, the question is: what happens to contemporary art that is critical of their practices when it is inserted into the corporate spaces of social media giants like Facebook, Instagram or Youtube?" he said. "[These are] spaces that pretend to be public spaces, that profess to safeguard free expression, when the reality is evidently something quite different."