Entertainment

Hacking The Kinect: A Q&A With Robert Hodgin (aka Flight404)

We’ve been following the work of artist and creative coder Robert Hodgin (aka Flight404) for some time now, and in particular, have been impressed by the Kinect projects he’s been churning out over the past few months. Apparently, we weren’t the only ones. Hodgin’s Body Dysmorphia hacks for the Kinect provided the inspiration and technological fodder for WeirdCore’s latest live visuals at Aphex Twin’s New Year’s Eve performance in Rome.

Though Hodgin is no stranger to depth mapping and code-based art experiments, we were surprised to learn that despite his prolific Kinect projects, he views the device and all the hype around it with a certain degree of skepticism. Perhaps it’s this sort of critical distance from the pitfalls of technological trends that makes Hodgin’s work stand out from the rest, allowing him to push boundaries without getting caught up in the “wow” factor of shiny new tools.


We caught up with the developer over iChat to learn more about his work and where he sees this whole crazy 3D thing going.

The Creators Project: Were you waiting for the Kinect to come out? Were you hungry for this technology?
Robert Hodgin:
Oh, not at all… I had no interest in the Kinect before I started messing with it. I don’t really have much faith in it as a video game controller, so I was just going to avoid buying it. But then I found out about the Adafruit bounty for whoever released open-source drivers, and I started thinking it might be an amusing thing to mess around with. Once I started to see Kinect projects posted, I thought I should at least see if it was going to be useful for the stuff that I normally do. Within the first half-day, I was hooked. I hadn’t really considered how useful the depth information would be.

So what was that first half-day like? How did you approach the tool? Lots of tinkering and seeing what happens? Or did you have a specific use or experiment in mind?
That is an interesting question. Thinking back, I didn’t have any plan at all. I had messed with depth data in the past, but nothing as accurate as what the Kinect provides. A couple years ago, I was messing around with using depth data to augment a webcam. I used to live in an apartment that had an amazing view of San Francisco. I took that view and created my own rudimentary depth map so that I could add content to the view in real time.

A little while later, I tried the same thing using a much more confined view. The window of my old company’s office looked out over the roof next door. I used Google Maps to get an overhead view, created my own depth map by hand, and started dropping virtual objects onto it.

But again, the depth map was drawn by hand, so it was very inaccurate and ended up being more annoying than anything else, so I shelved the project.

Once I had the Kinect, some of these ideas came flooding back, and I fully intend to explore them…But as I started messing around with the data just to see what I could make, I got sidetracked by these body distortion projects I have been posting.

Why is depth mapping so interesting/exciting to you?
It is an added dimension! It should be exciting to everyone.

Now that you’ve played around with the Kinect, what do you think is the device’s greatest creative potential?
I think it is important to note the technology in the Kinect isn’t new. Depth cameras have been around for a while. So I don’t imagine many amazing new things being made, because people have been making amazing things with depth cameras for years. It is nice that it is finally affordable and the average programmer can buy one for $150, download a free open-source driver, and start programming. So the skeptic in me thinks that nothing new and novel will come of it, because the depth camera isn’t new.
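To give a sense of the workflow Hodgin describes, here is a minimal, hypothetical sketch of the kind of first step a Kinect depth experiment might take. The Kinect’s depth camera produces a 640×480 frame of 11-bit values (0–2047, roughly nearer-is-smaller); a real frame would come from an open-source driver such as libfreenect, but we fabricate one here so the example is self-contained. The normalization and cutoff values are illustrative assumptions, not Hodgin’s actual code.

```python
import numpy as np

# Fabricate a Kinect-style depth frame: 640x480, 11-bit values (0-2047).
# A real frame would come from an open-source driver such as libfreenect.
rng = np.random.default_rng(0)
depth = rng.integers(0, 2048, size=(480, 640), dtype=np.uint16)

# Normalize to 0.0-1.0 and mask out everything beyond a cutoff --
# the kind of foreground isolation a body-distortion effect might start with.
normalized = depth / 2047.0
near_mask = normalized < 0.5               # keep only the nearer half of the scene
foreground = np.where(near_mask, normalized, 0.0)

print(foreground.shape)        # (480, 640)
print(float(foreground.max()))  # always below the 0.5 cutoff
```

This separation of near bodies from the background is exactly what a hand-drawn depth map could never do accurately in real time, which is why the Kinect’s live depth stream was such a draw.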

However… I believe there will be many amazing creative uses. The creative potential of the Kinect is that now artists will play with it instead of just engineers and developers. We are bound to see tons of Minority Report-style interfaces and UI controls. We are going to see tons of [motion capture] projects where you can control virtual avatars. But again, these are not new things. Of the projects I have seen, I think the artistic uses are the ones that stand out.

I think the Kinect has some impressive capabilities, but it remains to be seen if it will stand the test of time, or disappear like the augmented webcam projects have.

But certainly depth mapping will live on beyond the Kinect?
Oh, certainly. Depth mapping will continue to get more accurate and cheaper. And people will continue to make impressive projects with it. I just don’t know if it is a distraction, or something that will forever change how we interact with the world. Maybe in 5 years, something even more useful than depth cameras will come out.

Like what?
Echolocation? Heh, I don’t know. Look back at the Wii remote. Remember when everyone was super excited by the Wii remote? People were making hacks and using it to control virtual cameras and making their own games. And now? Not so much. So as much as I appreciate the technology behind the Kinect, I can’t help but think it will be replaced with the next big thing that few people saw coming.
