
Inside the Discord Where Thousands of Rogue Producers Are Making AI Music

On Saturday, they released an entire album using an AI-generated copy of Travis Scott's voice, and labels are trying to kill it.

Last week, a track that used AI to create an original song with Drake and The Weeknd’s voices went viral, gaining millions of listens across the internet before being taken down after a major label complained. The success of "Heart on My Sleeve" has a lot of people wondering whether it represents the looming future of music, but it looks a lot more like the present: hundreds of other AI songs are proliferating across social media and streaming platforms, and an entire online community is dedicated to making AI music.

These songs include both original tracks and covers, such as Rihanna singing “Cuff It” by Beyoncé, or Drake and Kanye West singing “WAP” by Cardi B and Megan Thee Stallion, and rights holders are moving as fast as they can to take them down. On Saturday, a group of music producers and songwriters even released an entire album, called UTOP-AI, using AI-generated versions of the voices of rapper Travis Scott and other artists. The album was taken down three hours after being released on YouTube due to a copyright claim from Warner Music Group. It was then uploaded to SoundCloud, but was quickly taken offline there.

As AI music becomes more accessible and popular, it has become the center of a cultural debate. AI creators defend the technology as a way to make music more accessible, while many music industry professionals and other critics accuse creators of copyright infringement and cultural appropriation. 

A Discord server called AI Hub hosts a large community of AI music creators behind some of the most viral AI songs. The server was created on March 25 and now has over 21,000 users. AI Hub is dedicated to making and sharing AI music and teaches people how to create songs, offering new creators guides and even ready-made AI models tailored to mimic specific artists' voices. People can post songs they make and ask each other troubleshooting questions.

“I never really expected the server to grow how it did. In only a month, the group has grown to twenty thousand members. It's pretty surreal, since our server has accidentally become the hub for a huge new technology,” the creator of AI Hub, who goes by the pseudonym Snoop Dogg, told Motherboard. “I've had people I know IRL bring up AI stuff that I've made here just for fun. At the start of the server it was mostly me making models, but our community has made more than 70 of them by now.” 

While using AI to transform a nobody's voice into a superstar's might seem arcane, it's shockingly easy. Using the instructions posted in the Discord, Motherboard tried two different ways to create AI covers and found it can be done in just a few minutes. This is possible because members of the Discord server have created code templates that can be run on Google’s Colab platform, along with AI voice models of over 30 popular singers that can be inserted into the template.

To create a cover of a song using a different singer’s voice, you start by downloading a song from YouTube, then separate the backing track from the acapella vocals using one of several free websites, transform the acapella audio file into a new voice using AI, and finally put the two tracks together using music editing software.
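
In code, that workflow looks roughly like the sketch below. This is a hypothetical illustration of the steps described above, not the Discord's actual Colab template: it assumes the yt-dlp downloader, the open-source Demucs stem separator, and pydub for mixing, and the convert_voice() function is a placeholder standing in for whatever community-trained voice-conversion model a creator plugs in.

```python
# Hypothetical sketch of the AI-cover workflow described above.
# Assumes: yt-dlp (download), demucs (stem separation), pydub + ffmpeg (mixing).
# convert_voice() is a placeholder, not a real library call.
import subprocess
from pydub import AudioSegment

def download_song(url: str, out: str = "song.mp3") -> str:
    # Grab the audio track from a YouTube video as an mp3.
    subprocess.run(["yt-dlp", "-x", "--audio-format", "mp3", "-o", out, url], check=True)
    return out

def separate_stems(song: str) -> tuple[str, str]:
    # Demucs typically writes vocals.wav and no_vocals.wav under
    # separated/htdemucs/<track name>/ when run with --two-stems=vocals.
    subprocess.run(["demucs", "--two-stems=vocals", song], check=True)
    folder = f"separated/htdemucs/{song.rsplit('.', 1)[0]}"
    return f"{folder}/vocals.wav", f"{folder}/no_vocals.wav"

def convert_voice(vocals: str, model: str) -> str:
    # Placeholder: this is where a trained voice-conversion model
    # (e.g. one of the community-made artist models) would transform the vocal.
    raise NotImplementedError("run your voice-conversion model here")

def make_cover(url: str, model: str, out: str = "cover.mp3") -> str:
    song = download_song(url)
    vocals, instrumental = separate_stems(song)
    new_vocals = convert_voice(vocals, model)
    # Layer the converted vocal back over the original instrumental.
    mix = AudioSegment.from_file(instrumental).overlay(AudioSegment.from_file(new_vocals))
    mix.export(out, format="mp3")
    return out
```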

“What I like about AI music is the freedom it gives," a music producer in Ukraine who goes by Wonderson told Motherboard. "Every producer dreams of hearing how his beat will sound with Drake or Kendrick or Westside Gunn. But artists are few, and producers and songwriters are millions. Even the most talented of them will never get to work with every artist he or she is interested in. But artificial intelligence can fix that,” Wonderson said, adding that as a producer in Ukraine, it has been particularly difficult for him to get into the Western music industry.

“I can see the same freedom for listeners. Look at how much new content has been created based on AI covers. A lot of tracks have gotten a second chance and even a new interpretation, and some of them sound even better than the original,” he added. 

Many members of this community have dedicated a lot of their time to constantly improving AI voice models, with new versions released regularly. To them, making AI music is a hobby through which they create tracks they envision without needing the resources that were once required to do so. At the same time, videos of AI tracks are often taken down as soon as they're posted, and labels and publishers are gearing up to tackle this new issue in the music industry.

The copyright issue in AI music is being heavily debated following the success of “Heart on My Sleeve,” which was created by an anonymous producer called Ghostwriter, who wrote and recorded the song and used AI to replace his vocals with Drake's and The Weeknd's. After seeing this, Universal Music Group (UMG), the label both Drake and The Weeknd are signed to, flagged the song and AI content to music streaming services, which promptly removed it.

“These instances demonstrate why platforms have a fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists,” UMG told Motherboard in a statement about “Heart on My Sleeve.”

Ghostwriter, for their part, claimed that they were (fittingly) a ghostwriter in the music industry but were not fairly compensated while labels profited.

In March, UMG told streaming platforms including Spotify and Apple to block AI apps from taking melodies and lyrics from its copyrighted music, and told the platforms that AI systems have been trained on copyrighted content without obtaining the required consent from the people who own and produce that content. UMG executive Michael Nash also published an op-ed in February in which he wrote that AI is “diluting the market, making original creations harder to find and violating artists’ legal rights to compensation from their work.”

“People are deeply concerned by AI but many also acknowledge that AI as a tool is a good thing to increase workflow, navigate creative block and become more efficient,” Karl Fowlkes, an entertainment and business attorney, told Motherboard. “There are a lot of positives to AI in the music industry but generative AI is something that all stakeholders in the industry need to attack. UMG's notice to [streaming platforms] was a major domino publicly.”

In an attempt to get around this thorny issue, the rules of AI Hub on Discord include “no illegal distribution of copyrighted materials such as leaks, audio files, and illegal streaming,” and “no violating anyone’s intellectual property or rights.”

There are now many ways to transform a song’s vocals into a new voice. The original way was to run code on a Colab page that the mods of the server created. Then, someone created a Discord bot on another server, called Sable AI Hub, in which you can run the model using text commands. Now, there is also Musicfy, the first dedicated AI music creation app, which allows users to directly import an audio file, choose an artist, and export the new vocals.

This app was made by a student hacker who goes by the online pseudonym ak24 and is also a member of the AI Hub community. The app saw over a hundred thousand uses a day after launching, he said. “This is going to be completely free. The way I'm thinking right now is creating a platform for people to create AI music of whatever they want. But using these models—the Drake, Kanye, and famous people models—I'm not gonna profit off of these,” he told Motherboard.

“I love how AI music lets us transform existing songs and create new songs. It's great to have been a fan of many artists, and now being actually able to make new material if they don't drop often,” an admin of AI Hub and manager of the Discord’s corresponding YouTube channel Plugging AI, who goes by Qo, told Motherboard. “Traditional music will always be superior but AI music to me is just a cool way for fans to appreciate and conceptualize new ideas, the possibilities are almost limitless.” 

UTOP-AI, the album created by the Discord community, features original songs using AI-generated vocals from famous artists including Travis Scott, Drake, Baby Keem, and Playboi Carti. Qo, Snoop Dogg, and twenty other people involved in the AI Hub community worked on it. 

This album puts into practice what drew Qo and Dogg to AI music in the first place—the ability to create material for artists they wish to hear more of. “If you're not aware, Utopia is an upcoming album that Travis Scott has been teasing for quite some time, but has never been released. A couple of members decided ‘You know what? We should just make Utopia ourselves at this point. We have the technology now.’ It's entirely written and produced by community members, and is being released sometime soon,” AI Hub's Snoop Dogg said. 

“We have a lot of very talented vocalists and producers that have worked on it,” Qo said before the album's release. “The only issue now is that our first single to it was just striked, as it was blowing up on tiktok, so we are unsure of where we will be putting it for streaming. Most likely it will be exclusively YouTube and Soundcloud.”

After the album was released on YouTube on Saturday, it was taken down about three hours later when Warner Music Group flagged it for copyright infringement. It was also taken down for copyright infringement on SoundCloud, but has since been reuploaded to YouTube by a fan account.

“It got ~150k plays on SoundCloud and ~17k total on YouTube with 500 people watching the premiere,” Qo said. 

The album had a disclaimer in its description stating that the video is exempt from copyright claims under the fair use doctrine, which allows people to use copyrighted materials without permission for select purposes, including nonprofit and educational ones. Whether or not something is fair use is determined based on four factors: the purpose of the use, the nature of the original copyrighted work, the amount of the work used in proportion to its whole, and the effect of the new work on the market for the original.

The fair use argument is what many AI music creators are using to defend their work, stating that they are not profiting off of the music and are instead either parodying the song or making songs for educational purposes.

“The fan and consumer experience as it relates to music is bigger than the music itself. Fandom is created through experience, concept and the personal relationships that fans have with their favorite artists,” Fowlkes said. “Still, it's important that artists have control over their art.” 

Because AI is so new, Fowlkes said, there is still no concrete definition or criteria for what exactly makes an AI song infringe copyright.

“There really isn't any precedent that states that someone's vocal tone is copyrightable so the two most obvious legal issues relate to the right of publicity and the ingestion of copyrighted material to create new works,” he added. “The right of publicity extends the legal right to control how your name, image and likeness is commercially exploited by others which can extend to someone's voice. Additionally, although the Drake and The Weeknd replica song didn't explicitly sample any lyrics from their songs, the way the new song was created was by directly ingesting Drake, The Weeknd and Metro Boomin songs to create something that sounds similar to their work. The way AI is trained feels like a major hurdle for any argument against copyright infringement.”

AI is not exactly new to the music industry. In fact, many industry professionals have already been using AI as part of the production process. Pop artist Taryn Southern partnered with the AI music service Amper Music to develop the instrumental parts of her song “Break Free.” There’s also a growing group of music startups that are currently focused on how to automate parts of the music-making process, including mastering a track, writing lyrics, and generating videos for songs. 

Grimes tweeted her support of the technology on Sunday, writing: “I'll split 50% royalties on any successful AI generated song that uses my voice. Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings.” 

ak24 said that many labels and industry professionals reached out to him after they saw the beta version of Musicfy in hopes of creating a partnership and gaining access to his app. “The media has this one perception where it's like all these music groups want this to be pretty much shut down. But it's interesting why they want it to get shut down because they want the tech for themselves,” he said. 

The creators of AI music see their work less as a way to make money or steal artists’ fame than as a way to take their fan appreciation to the next level.

“I know a lot of people say that this is going to massively change the music industry, but I honestly don't think it'll affect it that much. People are saying ‘Oh, they can make AI Drake and that will affect Drake’ but the truth is that people only care about AI Drake because of what the real Drake has done. There's no appeal in making an entirely new artist with AI,” the AI music artist going by Snoop Dogg said. “On the other hand, labels could possibly try to get in on the demand and try to get artists to sign off the exclusive rights to their own voices. As of right now, the voices of the artists aren't signed to the label, so artists can technically do whatever with their voice in AI. I hope labels don't do this.” 

"There is no magic button to ‘create a beautiful song’”

“Personally, I think songs created with AI should be tagged, but not deleted. They are not harmful, but rather expand the boundaries of creativity,” Wonderson said. “Thousands of people around the world are creating entirely new songs and albums using the voices of their favorite artists, and millions of people are enjoying listening to them. The release of a Travis Scott-inspired AI album will not make his songs any less popular, but rather the opposite.” 

AI music has been accused of accelerating cultural appropriation and racism, largely because some of the most viral AI songs use the voices of Black rappers, including Kanye West and Drake. In fact, twenty-seven of the thirty-two AI artist models are of Black artists. These artists speak from their own cultural and racial perspectives, and AI can use their voices to say things that portray them in stereotypical ways. They are also already marginalized within a white-dominated industry, and now face the possibility of being further stripped of credit, compensation, and other recognition for their art.

 “This opens an even bigger issue because more times than not, these examples of AI-generated songs on the internet are creating Black music without using the Black people that created it,” Noah A. McGee wrote on The Root. “Non-Black people who are sitting at home behind a computer can do the same thing by creating a song that sounds like it was created by their favorite rapper, but not deal with the consequences of stealing their likeness.” 

“It’s another way for people who are not Black to put on the costume of a Black person—to put their hands up Kanye or Drake and make him a puppet—and that is alarming to me,” Lauren Chanel, a writer on tech and culture, told The New York Times. “This is just another example in a long line of people underestimating what it takes to create the type of art that, historically, Black people make.”

“I'm not really concerned unless it’s something along the lines of saying racial slurs that you aren't necessarily allowed to say through the AI, or trying to [do] something to get an artist in trouble. As the server grew, I feel it has become a way for anyone to express creativity if they don't like their [own] voice, or if they are a big fan of an artist,” Qo told Motherboard. “99% of AI covers/original songs are just to experiment with music and pay homage to artists people enjoy. Nothing has been done with any ill-intent to paint an artist in a bad light or appropriate music and we are hopeful that it will remain this way.”

In the end, Wonderson said, AI is just a tool. Right now, an AI model cannot spit out a #1 hit single, fully formed. 

"There is no magic button to ‘create a beautiful song’ or ‘create a groovy beat.’ It is possible that such a feature will appear in the future, but at the moment it is not available,” Wonderson told Motherboard. “Even if you use the AI to create a record in the style of some artist using a replica of their voice, you still have to write the beat or use a beat written by a human. You also have to write the lyrics, record, and perform the vocal.”