"Debugging" Explores Artificial Ecosystems With 35,000 GIFs

An in-depth and glitchy look at one of the video art pieces from this week's Moving Image fair.

Earlier this week we previewed some artists you shouldn't miss at the Moving Image art fair in New York. Now, we're getting into the nitty-gritty pixels with multimedia artists Lisa Gwilliam and Ray Sweeten, who are showing their piece Debugging at the fair. Though part of the project originally debuted at Microscope Gallery, the installation this week is another in the pair's line of internet and “data-derived projects,” which includes Stuxnet, Idols, and other series that subvert internet technology for artistic ends.

With Debugging, Gwilliam and Sweeten explore the central metaphor of system maintenance. The footage they shot for the project is of scuba divers maintaining an artificial ecosystem. But, the architecture (the code) behind their own system—which runs on JavaScript and WebSockets—needs its own type of pruning. The latter tech debuted at Microscope, but now the whole project is premiering at Moving Image.

The Creators Project caught up with Sweeten and Gwilliam at their DataSpaceTime studio in BedStuy, Brooklyn, where they treated me to a screening of Debugging. There, they explained the software at the heart of the system, which they describe as a musical instrument that could be used in any number of experimental ways.

Debugging - excerpt from DataSpaceTime on Vimeo.

So, what's at the heart of Debugging, hardware- and software-wise?

Ray Sweeten: We've got two browser windows paired left and right on separate screens, then a third browser window on a third screen below. The big screens are 60 inches, while the third is 23 inches. They're all unified and linked together over WebSockets, which is a new protocol.

With HTTP, you send a request to the server, and it sends you something back and closes the connection. The WebSockets server is like a juiced-up HTTP request, but it stays up indefinitely, like a chat room. There's no polling, no continual request to see if something's happened; it's just immediate. It's like an open line or conference call, and it's initiated with JavaScript. In fact, it's all JavaScript on the back end. And, each individual GIF has a coordinate number.
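
A minimal sketch of the kind of persistent link Sweeten describes: one long-lived server pushing GIF cues to each browser window, with no polling. The details here are assumptions, since the pair's actual code isn't published; the "ws" package, port, and message shape are illustrative only.

```javascript
// server.js -- hypothetical sketch of a persistent WebSocket "conference call."
// Assumes Node.js with the `ws` package (npm install ws); not the artists' actual code.
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  // The connection stays open indefinitely; the server simply pushes
  // a cue whenever it wants a screen to swap in a new GIF.
  socket.send(JSON.stringify({ gif: 'gifs/pos_42.gif', coord: 42 }));
});
```

On the browser side, each of the three windows would open one connection with new WebSocket('ws://localhost:8080') and, in its onmessage handler, set the src of the image element sitting at the cued coordinate.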

We end up building really large systems to do things. They're really involved.

Why is it set up in this fashion with three screens?

Lisa Gwilliam: Basically, the point of it was to give people a chance to see what's going on aside from the surface image. Just to get an idea of how it's working, and not have that hidden from the viewer.

Do various GIFs rotate through the coordinates?

Sweeten: It cycles through sets of GIFs.

Gwilliam: We're up to a total of 35,000 GIFs.

Sweeten: We shot video, and then we wrote a shell script to break short video clips into animated GIFs.

How does it do that?

Gwilliam: First, it breaks them down into stills, so each frame of the video is separated. Then each of those frames is cut into grids, and the squares at each grid position are put together, frame by frame, as an animated GIF. It's a three-step automated process. It sets this stuff aside for us in a folder so that we have all of the GIFs that we need to run each section of the piece. It's mostly about organizing the raw material once you've produced it; then it's about editing it together.
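
As a rough sketch, that three-step pipeline could look like the shell script below. The article names a shell script but not its tools, so ffmpeg, ImageMagick, and the 8x8 grid are assumptions for illustration, not the artists' actual script.

```sh
#!/bin/sh
# Hypothetical sketch of the three-step pipeline described above.
clip="$1"   # a short video clip, e.g. divers.mov
mkdir -p frames tiles gifs

# 1. Break the clip down into stills, one image per frame.
ffmpeg -i "$clip" frames/frame_%04d.png

# 2. Cut every frame into an 8x8 grid of squares.
for f in frames/*.png; do
  magick "$f" -crop 8x8@ +repage "tiles/$(basename "$f" .png)_%02d.png"
done

# 3. Reassemble the squares at each grid position into an animated GIF.
for pos in $(seq -w 0 63); do
  magick -delay 8 -loop 0 tiles/*_"$pos".png "gifs/pos_$pos.gif"
done
```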

Sweeten: Yeah, we sequence it together with JavaScript, using a long list of commands and parameters for each screen. When one screen is done, it cues the partner screen to start receiving GIFs, and then it also cues the next one to start up.
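
That cueing logic might reduce to something like the following sketch, where each screen works through its command list and then signals its partner; the playlist structure and message names are invented for illustration.

```javascript
// sequencer.js -- hypothetical sketch of per-screen command lists with cueing.
const playlists = {
  left:  [{ set: 'divers-a', ms: 20000 }, { set: 'divers-b', ms: 15000 }],
  right: [{ set: 'divers-c', ms: 20000 }],
};

function runScreen(name, sockets) {
  const commands = playlists[name];
  let i = 0;
  const step = () => {
    if (i >= commands.length) {
      // This screen is done: cue the partner to start receiving GIFs.
      const partner = name === 'left' ? 'right' : 'left';
      sockets[partner].send(JSON.stringify({ cmd: 'start' }));
      return;
    }
    const { set, ms } = commands[i++];
    sockets[name].send(JSON.stringify({ cmd: 'play', set }));
    setTimeout(step, ms); // move to the next command once this one has run
  };
  step();
}
```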

Did you shoot the footage with the knowledge that you'd do this exactly?

Gwilliam: This was shot when we just wanted to shoot a lot of stuff to see what to do with it. The movement in this video, whether it's a pan or continuous movement, works really well in the process. So, it was a good bet that the footage would translate.

Where did you shoot the footage?

Sweeten: We shot through a window at an aquarium.

Gwilliam: It's an unusually state-of-the-art aquarium in Monterey, California. It's situated on the bay so that the aquarium is half on land and half working with the actual ocean. Water comes in and out of the aquarium from the ocean. We were shooting in an area where there is a very detailed approximation of an ecosystem. It's really amazing.

But, even if it appears to be a very self-contained, self-sufficient ecosystem, in come the guys to tweak things and make sure things are going right. So, it hits home that it is controlled. It's a weird duality that we're using as a metaphor for the protocol in Debugging.

Right, the maintenance of systems and spaces, whether virtual, ecological, or otherwise.

Gwilliam: In a general sense, everything that we've done with this particular series of browser-based, animated GIF work requires a lot of choices, and we've had to visit some imagery that we've long had interest in. But, we try to find imagery that speaks to us also as a reflection of what we do to build the piece, and how it maintains its functions. I think that's a big theme for us.

Sweeten: There are a lot of formal considerations in how we build the system to do what it does, and that affects the entire process. We don't have a visual idea, but more of an idea of building an architecture that allows us to systematize a way of messing with images.

Gwilliam: But, then we use it to kind of generate the work. We know that it's our medium in a way, and we try it out on different things if we want. It's kind of like an instrument in that way.

On the partnered screens, GIFs appear, flicker, move around, etc., and then disappear. Is this process random or is there an algorithm to it?

Sweeten: It's both. It's scripted in the sense that it's cycling through a finite set of GIFs. It's random in the way that they appear on screen.
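
That scripted-but-random mix might look something like this sketch: a fixed cycle through a finite set of GIFs, each landing at a randomly chosen grid cell. The selectors and timing are assumptions.

```javascript
// Hypothetical sketch: deterministic cycle, random placement.
const gifs = ['gifs/pos_00.gif', 'gifs/pos_01.gif', 'gifs/pos_02.gif']; // finite, scripted set
const cells = document.querySelectorAll('img.cell');                    // the on-screen grid

let i = 0;
setInterval(() => {
  const gif = gifs[i % gifs.length]; // scripted: the cycle order never varies
  i += 1;
  const cell = cells[Math.floor(Math.random() * cells.length)]; // random: where it lands
  cell.src = gif;
}, 250);
```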

Gwilliam: You're not seeing the same piece each time you watch it. It's numerically impossible to see it the same way twice. At the same time, that's almost irrelevant, because that line of what you can perceive is beyond our ability. And that's the line we're usually exploring: how much can we change, distort, break up, and pixelate images while still allowing you to actually tell what it is?

On a similar note, you write of Debugging that it comments on the act of looking with its various degrees of awareness. Can you elaborate on that idea?

Sweeten: Well, just the act of looking in terms of how much information you can actually take in. I always think of the Transformers movie when it came out. One of my friends said: “I really can't take in all of that information. There's too much visual information, and I can't take any more in. Technology lets you do all of this stuff to pack more information into an image, but my brain doesn't care—it just starts throwing stuff away.”

With this, I think part of what we're doing in terms of this idea of the act of looking is that even though everything is broken up and gridded out, you do see remnants of an image, because the pieces are reassembled in the same arrangement they had when they were broken apart. You can reconstruct the image in your mind. But, we're interested in what your brain starts to throw away in the process of trying to reassemble something. We also want to make some of the stuff that you don't see, that's hidden in the code, visible in some way, or at least conceptually in your mind.