How Neon Indian Uses a Kinect to Sync Their Live Show Visuals with the Band

We’re just all piles of data waiting to be used in the end.

Nov 30 2015, 2:30pm

Neon Indian performing at Webster Hall in New York on October 14, 2015. Photo: Clinton Nguyen

Though a lot of tall white 20-somethings were bumping up against me at Neon Indian's show at Webster Hall in October (the first leg of his album release tour), there was something entrancing about seeing Alan Palomo, the lead singer, and his digitally warped double on screen. The performance was self-generative: his body movements were fed into heavily digitized simulacra.

Palomo had rigged Microsoft's motion-capture camera, the Kinect, to the stage to capture his and his bandmates' bodies as they careened across it. They appeared on screen as body blobs—all polygons, jittery static lines, noisy torsos—and Palomo performed before a live feed of himself, punched into all sorts of shapes by the device in front of him. It was a digital performance, one in which the data fed in from his body movements made the performance what it was.

While Palomo's newest record, "VEGA INTL. Night Club," is about the human honesty you can find in the depths of nightclubs, bars, and bedrooms, the visuals in his performance painted an altogether different picture. Instead of seeing the album's human themes reinterpreted on stage, I saw electronic bodies.


The Kinect hasn't seen widespread use as a live performance tool. Nine Inch Nails used one at some shows in 2013, and I can recall two music videos that featured the device, one by the Maccabees and the other by the Midnight Juggernauts. Usually, the Kinect conjures hokey images of people playing fake tennis with their TVs, so musicians might be reluctant to turn their performances into a tech demo. Thankfully, the experience was far better than that. But I still had to ask him why.


Motherboard: Why did you choose to use a Kinect for your visuals?
Alan Palomo: We've always tried to incorporate visuals that tie in specifically to what we're performing. There has to be some symbiotic relationship, otherwise it's like plopping in a DVD and then just having the audience watch TV in the middle of your set, which isn't really interesting to me.

We worked with our friend Lars in Texas, who has a modular video synth company called LZX Industries, and the cool thing about that was it was all compatible with audio synthesizers. So we could always see the information, or just feed audio lines into it and have them manipulate all of those images.

And those were sort of the first permutations of Neon Indian. I was traveling around with a modded Atari Video Music, which was kind of this failed toy they made in the 70s. It was intended to be this sort of WinAMP-style visualizer that just spits out 8-bit vomit to whatever you're playing.

Ultimately, the one thing we hadn't been able to incorporate was the actual physicality of the performance—us as performers. [With the Kinect] this raw information gets fed into an algorithm and spit out on screen. And there's that symbiotic relationship happening.

It's cool when you can see the data, when it adds some meaningful connection to what you're experiencing. It's one thing to be like, "Oh, check out this cool thing I made using some topographical data from someplace." But what happens when that topographical data is you? You develop a little more of an emotional connection to it.

We're just all piles of data waiting to be used in the end. This sounds familiar.
Yeah. My friend Johnny and I were commissioned by the MoCCA Museum to make a short, and I had this idea for a fictional program called Outer Osmo Ghost Mode. The idea was basically set within the confines of some Facebook-Second Life hybrid, where you're exploring the internet in three-dimensional map spaces. You create an avatar for yourself, and the more that you use it, the more it's algorithmically aggregating all these things: how you use it, what sites you frequent, what you like to talk about, what you like to purchase—kind of like the way Google creepily spits banners back at you in Gmail, being like "hey, we know you like Criterion movies, they're having a flash sale" and you're just like "get the fuck away from me."


And the setup for this film was, after some indeterminate amount of time, these two avatars haven't been logged into, and they're put into ghost mode. So they've just been stuck in the confines of the home map, having all these conversations with each other that are really just regurgitated chat speech from previous things they've said and suggested.

So I kinda wanted to make this weird little data-type combo where they're spouting statistics and, like, chat speech, and trying to get a real conversation out of it: well, what happened to the users? What year is it? How far into the future is it now? But I wanted it to be this big reveal towards the end that the humans are long dead, and this is just a satellite that's still floating somewhere, powering this once colossal netscape that people utilized for commerce.

Like, if you could take everything I've ever said and emailed, and every photo of me, cut enough angles and enough time and growth and context and place and connections, and have all of those things play into some personality, and you synthesize it into something that talks to you... it'd probably be a lot like me, you know?

You have to fight that thing to the death to prove you're still yourself.

So this is as close as you're getting to the human-machine divide?
Technically, people who wear contact lenses are, like, cyborgs. That's a piece of wearable technology, and I feel we're not too far off from wearable computing either. A couple years back I went to the Global Future 2045 conference, the one put on by that dude, Dmitry Itskov. He's this eccentric Russian billionaire who's totally afraid to die and wants to funnel his entire fortune into finding some practical means of achieving cybernetic immortality by 2045.

Would you be into that sort of idea of a machine-human symbiosis? Would you buy into a cybernetic future?
I'm so down with human and computer integration that I'll probably reach a point in my life, when I get to be in my 60s, where I'll be like, "I had a pretty good run as a human." I would be so ready to forgo my human flesh for an android body. If I weren't bound by the environmental limitations of my human body, I could colonize Mars tomorrow. That'd be fucking awesome.

This interview has been lightly edited and condensed.