Drake or Fake? A Lawyer Explains the Legality of AI-Generated Music

Ghostwriter's viral song offers a crash course in the ways labels, platforms, and artists might sue each other in circles around AI-generated music.
Drew Schwartz
Brooklyn, US
The day’s biggest questions answered by the people who actually know WTF they’re talking about.

On the off chance you still haven’t heard about the (supposedly) AI-generated Drake song yet, allow me to fill you in: Earlier this week, a pseudonymous poster who goes by “Ghostwriter” claimed in a since-deleted TikTok they had “used AI to make a Drake song feat. The Weeknd.” The song, “Heart on My Sleeve,” racked up millions of listens on TikTok, Spotify, Apple Music, YouTube, and other streaming platforms overnight—and then it disappeared, apparently due to takedown requests from Drake and The Weeknd’s label, Universal Music Group. 


That’s the short version of the story, anyway. Ghostwriter didn’t explain exactly how the song came to be, and no one has been able to corroborate the claim that it was created with AI—or that Ghostwriter created it at all. As far as we know, it could be a bona fide Drake song masquerading as an AI-generated one as a publicity stunt. It could be a remarkably good ape of a Drake song created by a guy who sounds a lot like Drake and has a knack for autotune. As Mia Sato and Richard Lawler write in The Verge, whether it’s “a fluky viral hit, a sloppy stunt by a crypto-adjacent startup, a revenge prank by Drake himself, or the beginning of the legal battle over AI-generated work that is flooding the internet,” all we really know is this: “Something weird is going on.”

No matter the underlying facts, the stratospheric success of “Heart on My Sleeve” raises several thorny legal questions, most of which have to do with copyrights. For argument’s sake, let’s assume that Ghostwriter used AI to make “Heart on My Sleeve.” Did they violate Drake’s or The Weeknd’s rights of publicity, which govern the unauthorized use of their names, likenesses, images, and—crucially—their voices? Ostensibly, that isn’t Drake’s voice we’re hearing; it just sounds exactly like it. What would a court have to say about that?


To take things a step further: If this song was made with AI and sounded that much like the real deal, we’ve crossed some major technological threshold. There are bound to be thousands of other songs like this one coming down the pipeline. Who—or what—controls the rights to them? Who gets to choose whether they stay on streaming services or get taken down? Who gets paid for them? And how are old-fashioned, human-generated songs supposed to compete with AI-generated ones, which require significantly less time and money to produce?

To help me wade through the soupy, brain-busting swamp of uncertainty into which the Ghostwriter saga has collectively plunged us, I called up Chris Mammen, an expert on the intersection of intellectual property law, music, and artificial intelligence and a partner at Womble Bond Dickinson. Fix yourself a drink or something—things are about to get really heady, really fast.

VICE: I want to start off by talking about the Ghostwriter song. What legal questions did it raise for you, particularly regarding copyright issues?
Chris Mammen:
There’s a whole lot to break apart there. I guess a starting place would be: How would we think about this in the absence of AI creating it? We’ve got cover bands and impersonators and things like that, which have been around for years. So what if somebody recorded an original composition and, through a selection of artists and/or some autotune, made it sound just like Drake featuring The Weeknd? Then we’d think about the outputs and sort of layer on the fact that AI was used to create it. 


So what would happen if somebody did this without any AI tools?
It depends on what it’s purporting to be. If they published it and they said, “Hey, this is a new tune by Drake featuring The Weeknd,” they’re trading on both the sound of the voices as well as the name, and they’re fostering confusion and misrepresenting who it’s from. That’s one extreme where there are some potentially significant issues. 

If you say, “This is a tribute band that sounds like Drake featuring The Weeknd,” you’re invoking the artists’ names, and so you may be trading on that, but you’re also being clear that it’s not them. There’s sort of an evolving question in the law about how those are to be treated. 

What if, instead, they said, “This is a work by Shmake,” and they didn’t actually use the Drake name and just left it to the audience to make the connection? Or a further extreme: They just said, “Hey, this is a great new track by an anonymous artist.” How much are you trading on the name, image, likeness, and similarity with the established artist in a way that causes confusion that maybe dilutes the value of the artist’s brand?

The title on the TikTok post sort of straddled that ambiguously. It said, “Hey, look what I created using AI: It’s Drake featuring The Weeknd,” or something like that. It’s somewhere between, maybe leaning towards, “Hey, this sounds just like Drake,” or, “This is a tribute”—the equivalent of a tribute band.


When AI enters the equation, how does that change things?
The big question is, was the AI trained with a bunch of Drake’s music? Was that fed in to get an output that sounds like him? Or was it purely mechanical: taking some other recording and tinkering until it sounded similar enough? Since Ghostwriter purports to have used AI to do it, I’m going to infer that the AI was trained on Drake’s music. There’s a big question that’s pending right now about how much it constitutes fair use, under copyright law, to use copyrighted material or protected material as training data. 

“There's a big question that's pending right now about how much it constitutes fair use, under the copyright law, to use copyrighted material or protected material as training data.”

Could you briefly explain the concept of fair use and how it relates to AI and music?
A copyright protects any original work of authorship, which could be writing, photographs, movies, music—those kinds of things. Copying is not permitted. However, some categories of use of copyrighted material are permissible and considered “fair use.” They include things like education, commentary, parody, and so forth. If you’re using copyrighted material as training data, you’re generating a new output—you’re not generating a photograph or a song or something that’s an exact copy of what was put in. So the question is: Is that impermissible use or fair use of the material?


The way AI works with things like this is that you have the algorithm, and you put in a whole bunch of data, and then the algorithm is able to use that data, and the way that it has analyzed or parsed the data, to generate outputs. For example, if it’s a music-generating AI, and some of the inputs are tagged as “Drake,” then when you enter a prompt into the AI that says, “Give me an output that sounds like Drake,” it’s going to focus on extracting what the algorithm recognizes as characteristic aspects of the music that’s tagged as “Drake.”
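The mechanism Mammen describes—tagged training examples steering generation toward one artist's style—can be illustrated with a deliberately toy sketch. This is not how any real music model works; it is a hypothetical first-order Markov chain over made-up note sequences, where a tag in the prompt restricts generation to transitions learned from examples carrying that tag:

```python
import random

def train(tagged_sequences):
    """Build a transition table from (tag, [notes]) pairs.

    Each artist tag gets its own set of note-to-note transitions,
    standing in for 'characteristic aspects' of that artist's music.
    """
    transitions = {}  # (tag, current_note) -> list of possible next notes
    for tag, notes in tagged_sequences:
        for cur, nxt in zip(notes, notes[1:]):
            transitions.setdefault((tag, cur), []).append(nxt)
    return transitions

def generate(transitions, tag, start, length, seed=0):
    """Generate a sequence using only transitions learned under `tag`."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        choices = transitions.get((tag, out[-1]))
        if not choices:
            break  # no learned continuation for this tag/note pair
        out.append(rng.choice(choices))
    return out

# Entirely made-up training data: two sequences tagged as one artist,
# one tagged as another.
data = [
    ("drake-like", ["C", "E", "G", "E", "C"]),
    ("drake-like", ["C", "E", "G", "G", "E"]),
    ("other",      ["A", "B", "A", "B", "A"]),
]
model = train(data)
print(generate(model, "drake-like", "C", 5))
```

Prompting with the "drake-like" tag can only ever reproduce patterns drawn from the sequences tagged that way, which is the crux of the training-data question: the output is new, but its character comes from the ingested works.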

There are several lawsuits going on right now about this in contexts other than music. One of them was filed by Getty Images. The complaint is full of examples of Getty Images’ registered copyright photos and then outputs from the defendant AI that look like those photos. Sometimes there’s even a distorted Getty Images watermark on the output photos. There’s also a class action by some independent photographers who claim that their works have been used in generating AI images. And then there’s another lawsuit involving computer code, where the allegation is that the AI was improperly trained on source code repositories without the permission of the users. 


Those three cases really tie into how one may go about thinking about the use of famous recording artists’ works to train the AI. As a general proposition, a ruling in those cases that the use of copyrighted works as training data is fair use would tend to give a significant measure of protection to folks like Ghostwriter. 

If it goes the other way, people are going to be spooked about the prospect of doing something like Ghostwriter did.
Yeah, exactly. I expect to get some kind of indication of where things are going with those cases over the next six to 12 months. But given the rate of change and how much has happened in the generative AI space in the last six months, the questions in those cases might well be obsolete by the time we get to rulings. 

Do you think that Universal Music Group could take legal action against Ghostwriter? What would their claims look like? 
The two places where I would look for potential claims would be around this copyright and fair use question that I was talking about: Was Drake’s body of copyrighted work used to train the AI? And then the other place I would look would be name, image, likeness, and right of publicity around the publication of the resulting work and whether it inappropriately trades on Drake and The Weeknd’s existing rights and their brands. I would expect that second one to be analyzed, as I suggested, along the body of law that has developed around tribute bands and cover bands and so forth.


Correct me if I’m wrong, but generally speaking, you can’t just take a recording of someone’s voice without their authorization, put it in a song, release it, and make money off of it. That’s a violation of their rights of publicity.

How does that apply to when you generate someone’s voice through an AI and then put it in your song? It’s not a literal recording of their voice, but it is their voice… sort of?
Right now, everybody’s struggling with how to think about that question. Because it’s not literally a recording. We have a precedent for impressionists, who can go out there and make a living sounding just like celebrities. That’s not a violation of anything. This is something in-between. Is there anything wrong with it? And if so, what exactly? For right now, the question is focusing on the use of training data. There may be further questions. What if you used a training data set that specifically excluded all of Drake’s music, and somehow you were still able to enter a prompt into that AI that could generate something that sounded like Drake? Is there still a problem with that? Maybe. 

“In the currently pending lawsuits, the defendants that have been named are the platforms—the AI platforms that ingested the training data and used the training data to train the algorithm.”


If Universal decided to sue, is Ghostwriter liable when this song was created by a machine? Is the machine liable? What do you even do with that? 
In the currently pending lawsuits, the defendants that have been named are the platforms—the AI platforms that ingested the training data and used the training data to train the algorithm. Not necessarily the individual users who entered a prompt or otherwise used the tool to generate an output. You’d need to know more about how Ghostwriter generated this: Did they build the algorithm and do something fairly custom or run it on an otherwise available platform?

There’s always the opportunity in lawsuits for either side to say, “Hey, this other party should be brought into the lawsuit.” In this hypothetical lawsuit, maybe Ghostwriter files what we call a cross-claim to add the platform, where whatever Ghostwriter is liable for, the platform should be liable for. Or he could suggest that the plaintiff should add them as a defendant. 

Let’s say that Universal doesn’t decide to take legal action here. Put yourself in Ghostwriter’s shoes: You made this song, people loved it, it was doing really well, and it was maybe going to give you a career. Now it’s been taken off of streaming platforms, and no one can access it. Could Ghostwriter sue Universal—and maybe Spotify, Apple Music, and YouTube—and argue that he didn’t violate anyone’s copyright and that this should be re-uploaded?
It’s an interesting question. There are a variety of processes that can be followed if someone has stuff that’s taken down and they say, “No, this should be allowed to go back up.” There’s a related question of whether Ghostwriter’s use was commercial use or commentary. That kind of goes to the fair use question—it goes to a lot of the questions. And in a way, that’s also kind of ambiguous here. On the one hand, posting it and having it go viral is arguably not commercial in the same way that trying to sell copies is. On the other hand, if it’s on Spotify and you’re getting a fraction of a penny per play, or if you’re getting advertising revenue from the volume of hits on YouTube, is that sufficiently commercial? There are a whole lot of very complex questions around this. 


To what degree is the law surrounding AI and music just a crazy morass of a gray area?
It’s totally a crazy morass of a gray area. The law is an institution that moves slowly. And the law evolves by analogy. Something new comes up, and we figure out what it’s analogous to, and then that gradually becomes settled law. What’s happening right now is this is changing so fast that it’s hard even to come up with the analogies to figure out how we want to think about it before it changes again. And if there aren’t any, what new legal doctrines do we need to develop in order to address this issue?

I’ve been debating copyright issues for a while, and before that, I was engaged in debates about related issues concerning whether AI algorithms can be named inventors on patents. I think, at the root, the question for both of these boils down to: What do our intellectual property laws protect? Are they there to protect human creativity and human innovation? Or are they there as a matter of economic and industrial policy to make sure that those who create things that have some commercial value are best able to protect and exploit that commercial value? 

Is there a pathway toward having a big, good-faith discussion with all of the major stakeholders to talk about what we want the world to look like as it relates to AI and IP rights?
I don’t think there’s a path to having that happen in some sort of a Constitutional Convention kind of way, where everybody comes together and hammers it out. But that conversation is going on as we speak more diffusely. Part of it is conversations like we’re having today. Part of it is the lawsuits. There’s academic work going on. All of these pieces contribute to insights about how the questions should be resolved and how we should think about them. And over time—again, much more slowly than the technology evolves—some things will bubble out from that. Somebody will have a really good insight, and that’ll catch on and help frame the paradigm for everybody else. 


What, if anything, has the US Copyright Office said so far about its stance on AI and music?
I think this training-data question remains pretty open. The Copyright Office earlier this year issued a new circular clarifying that works that are created purely by AI—including a human typing a prompt into a generative AI algorithm and the outputs from that—are not copyrightable. The Copyright Office has long had a position that to be copyrightable, there has to be sufficient human involvement in the creation of the work.

Right now, there are a couple of cases that have been talked about a lot in the press. There’s a graphic novel, and the author of the graphic novel wrote all the text and then plugged prompts into a generative AI tool to create the illustrations. The Copyright Office found that the graphic novel as a whole was copyrightable but that the individual images were not copyrightable because they were generated by an algorithm. There’s also a case pending where this AI algorithm called DABUS purportedly generated some pictures and the owners of the computer on which DABUS resides filed copyright registrations. The Copyright Office rejected those, saying there wasn’t enough human involvement. The owners of the AI algorithm have filed suit, and that case is making its way through the courts. Interestingly, in the UK, there’s copyright protection for computer-generated works. It’s a shorter period than for human-generated works, but since the 1980s, they’ve allowed copyright protection on these things.

It gets really, really finely parsed. The question then becomes: Are we granting rights to the AI, or are we granting rights to someone involved with works that were created by the AI? In other words, are we saying that the AI should own the copyright or the patent, or are we saying that some human or some institution, like a corporation, should be able to own the rights to things that were created by the AI? How we answer that question and how we frame that informs what we think about the whole process. 

When we talk about the question of whether you can copyright a song made by AI, really, the main question is: How much of it was you and how much of it was the AI? 
Exactly. And if you want to copyright it, you’re going to want to include in your supporting materials as much detail as you can about how much involvement you had. If you just log into ChatGPT and say, “Write me lyrics for a new song,” it’s almost certainly not going to be enough. 

How big of an issue do you think AI and the legal questions it raises will become in the music industry over the next five to ten years? 
I think it’s a really big issue. And it stems from the algorithms’ ability to generate high-quality stuff at a scale and a pace that humans just can’t keep up with. Part of the question is, is it generating stuff that is deemed popular because it sounds like our favorite human artist? Or are we going to move very quickly to a place where it starts generating stuff that doesn’t sound like anybody, doesn’t purport to be sounding like anybody—but it’s catchy and popular and gets cranked out at a huge pace? What does that do for the whole music industry? What does that do to the traditional pricing models? Does it dilute the value of human-generated works? It’s going to upend the whole marketplace. Instead of releasing an album a year, an AI could release 100 albums in a day. A thousand albums in a day. 

Are attorneys who represent musicians and other creators freaking out about AI? What are those conversations like?
They’re very much like the conversation we’re having today. It’s like, how do we even think about it? On the patent side of things, there are cases pending in countries around the world, including England. When the AI-as-patent-inventor issue came up, the judges looked to Blackstone, an 18th-century legal commentator. He wrote about whether the owner of a piece of property on which a tree produces an apple also owns that apple. That’s the analogy they’re using to understand whether the owner of a computer that has an algorithm that generates a useful invention would likewise own that invention. They’re literally trying to draw an analogy between the output of the AI algorithm and an 18th-century apple.

So what’s the alternative to the property owner owning the apple? Is the tree owning the apple? 
Or: It’s not owned. One of the alternatives is if you have to have an inventor under US law and there are no humans involved in creating it, then maybe it’s not a patentable invention. Questions like that expose people’s instincts about whether we are here to encourage human flourishing or to encourage industrial policy and extract value. If your reaction to that is, “Well, how can we have something that’s useful and valuable, but nobody owns it? We can’t extract value from it”—that shows some sympathies with the industrial policy argument. Whereas if you say, “There are no humans involved, so, of course, there’s no human inventor,” it shows some sympathies with the other side.

Let’s say you land there: It’s not patentable, it’s not copyrightable, it’s just this apple. If someone like Universal sues or takes a song off of streaming platforms or otherwise deprives the public of that work, who’s fighting for the song? No one’s making money off of it. The arguments about illegally exploiting the copyright and rights of publicity and all of that stuff fall apart if no one’s making money, right? It’s not commercial exploitation.
That’s a very fact-specific question, and—bringing it back around— it may or may not match the facts of Ghostwriter. If it’s on Spotify, or if it’s on an ad-supported streaming platform, is he making money off of it? Is that enough to make it commercial use? I don’t know.

Correction (4/21): An earlier version of this article misstated the name of Chris Mammen’s firm. He is a partner at Womble Bond Dickinson.