What Happens When a Microchip Inside Your Brain Dictates What You Listen To

In a new piece for WSJ, Stephen Witt imagines the AI-driven future of personalized listening.

Dec 9 2015, 8:18pm

In a new piece for the Wall Street Journal, Stephen Witt—author of this year's How Music Got Free—dishes out some speculative fiction on what music consumption might look like in 2040. In just 25 years, he imagines that it will be commonplace for an "algorithmic DJ [to open] a blended set of songs, incorporating information about your location, your recent activities and your historical preferences," updating in real time with biofeedback.


By then, "Even the concept of a 'song' is starting to blur," he writes. "Instead there are hooks, choruses, catchphrases and beats—a palette of musical elements that are mixed and matched on the fly by the computer, with occasional human assistance." If this sounds interesting, we encourage you to check out the work of innovators like The League of Automatic Music Composers and contemporary boundary-pushers like TCF.

Witt also imagines the possibility of "digitally resurrect[ing]" "long-dead voices from the past," giving artists like Etta James and Frank Sinatra newly composed hits. This kind of vocal synthesis technology eerily reminds us of computer-generated pop star Hatsune Miku.

To read more, check out the whole piece here.

Follow Alexander on Twitter.
