Become A Digital Archaeologist With Clement Valla's "Surface Survey" Exhibition

The artist behind "Postcards From Google Earth" returns as an online archaeologist with a digital pickaxe.
March 31, 2014, 9:30pm

If you've been following algorithmic, glitched-out art within the last several years, chances are you've come across the work of Clement Valla. His Postcards From Google Earth, with its images of fluid, warped concrete and organic landscapes colliding, was a viral sensation. Valla's latest project, Surface Survey, finds the artist excavating hidden images and digital junk, and sculpting them into 3D prints.


The seed of the idea goes back to Valla's 2013 exhibit, Iconoclashes, a collaboration with Erik Berglin. The two took images from the Metropolitan Museum of Art's public web archive and, using Adobe's Photomerge to “digitally stitch” and blend images tagged with words like “God” and “religion,” created, as Valla writes, “chimeric deities, hybrid talismans, and surreal stellae, gods and statues.” Last year, Valla also launched 3D-maps-minus-3D, a site where users can navigate a Google Maps-like Earth completely flattened and exploded over two dimensions.

Building on the foundations of these projects, Valla debuts Surface Survey on April 19th at Transfer Gallery. As with Iconoclashes, Valla mined images from the Met, but also from the Smithsonian and the Autodesk 123D Catch public website. These images are texture maps: photos used to build 3D images, but not meant for human eyes. They are the skeleton of the system. Valla hopes that the 3D-printed sculptures, based on these texture maps, will refresh the increasingly commonplace world of 3D-printing. The artist talked to The Creators Project about his physical-meets-digital exhibition.

The Creators Project: When did you start working on Surface Survey?

Clement Valla: My last show, Iconoclashes, which was a collaboration with Erik Berglin, used a lot of imagery from the Metropolitan Museum of Art's website. I invited some people I knew at the Met down to the show. I was worried they'd be upset, but they really loved it. After that, the Met's Media Lab invited me up and asked if I wanted to do another project, and they showed me different things they'd done.


One of the things they'd done was make 3D models of some of the objects in their collection. So, I poked around in the files, and found all of these images, and that is sort of when Surface Survey began. That was maybe last June. Some time after that, I sent five or six of the Met images to Cloaque, a Tumblr page.

How did you collect the images for Surface Survey?

For the upcoming show, there's a combination of things: I'm showing eight objects from the Met that I modeled, then there is something that I call the “surface midden.” A midden is a Scottish term for “shell heap” [domestic waste] that has now come to mean a junk pile. I love junk piles because you learn a lot about a culture from them. It's the day-to-day junk, where you can find out what people ate and things like that.

The software that I use to make the 3D models runs on iPhones, and it's a web-based software. By default, any time you make a model, if you don't uncheck an option, your model gets published online. So, what I've been doing is going online and searching this gallery of public models, and searching for “first try” or “test.” I found all this junk of something like a person's 3D model of a banana on their desk. This is called “Objects of the Midden.”

The third part of the show is 3D models from the Smithsonian. You can go right online, open your browser, and navigate them. That's another version of texture maps from objects of high cultural value, but with a different algorithm arranging the texture map. The Met objects and “Objects of the Midden” run on one algorithm, and the Smithsonian models run on another.


What software are you using?

It's free software put out by Autodesk called 123D Catch. It can make a 3D model from a bunch of photos. A texture map is almost like a Cubist collage—it extracts fragments from all the photos. Because it's free software from Autodesk, there are so many users, and thousands and thousands of 3D models are automatically published on the website, which I've been mining, or excavating.

I basically call them intermediary images, or images not meant for human consumption. They are texture maps produced by a particular software, and are then used by the software to make 3D models. Typically, humans aren't meant to see them. They're behind the scenes and hidden from the human eye. I'm making them slightly useless, stripping them of their function and turning them into an aesthetic consideration.

They're like buried objects, hidden from view until excavated. 

Yes. In a weird way it's a parallel I make to archaeology, where we have no idea of the context or function of a lot of these objects at the Met. We consider them aesthetically, but then try to piece together what the context might have been for their production and distribution, or how they functioned in a particular social context.

You also talk about “meaning making.” What do you mean by that exactly, within the context of Surface Survey?

It's very easy for me to read meaning into these images, and think about why they were assembled in a particular way, and read stories into them. But, really, they weren't made for that purpose at all. They were just made by a computer to produce a 3D model. Typically, in an art context, the meaning is half supplied by the artist or author, and the other half is written by the reader. This interests me because computers can't read or express meaning the way humans do.


So for this project, the viewer is paramount. 

In this project, all the meaning comes from the viewer. I don't directly produce the images, I just find them and show them. It's kind of similar to how people found meaning in horse_ebooks, especially when they thought it was a bot. When it was revealed that it wasn't a bot, people were really disappointed. I just feel that very soon computer algorithms and bots will potentially be producing more images and texts than humans will, for spam, ads, or whatever it is.

It will be an interesting semiotic conundrum as far as where meaning comes into that whole operation. It's hard to find the language with which to address these things. What is an image becoming when it's just sort of spit out by this mostly automated apparatus? Can we still think of it as we have traditionally thought about images?

How will the exhibit be presented in Transfer Gallery?

There will be 3D prints of actual texture maps of varying sizes, but all scaled so that the fragments in the texture maps are more or less the same scale as the original objects. The smallest is 16x16 inches, and the biggest is 62x62 inches. It will be a huge range. And then there will be 3D sculptures at the show, because I'm extracting 3D fragments.

There will also be archaeologists' tables with what look like plaster objects. A lot of the show is going to be about these 3D shapes without color, and then all of this color that mimics the 3D shapes but doesn't have them. The objects don't come through unscathed.

What's interesting about the sculptures is that not only do you have a degraded simulation of reality with the texture maps, but then you have a second layer of degraded simulation with the 3D-printed sculpture. Copies of copies, in other words—none of them as “real” as the source. 

Exactly. I rip off the texture maps, so they're these white 3D-printed things that sort of have that strange sense of unreality because they're so pristine, and their edges are all digital looking. So, yeah, there are these layers of simulation, or of removal.

Surface Survey debuts at Transfer Gallery in Brooklyn on April 19. For more info, check out the exhibition webpage.