
Duane Pitre Teaches Instruments To Be More Like People

The electronic musician and former professional skateboarder programs software to act like humans, and humans to act like machines.

Photo via Ron Harris at Journorati.

The recent work of New Orleans-based electronic musician (and former professional skateboarder) Duane Pitre effectively blurs the lines drawn between man and machine. Take, for instance, Feel Free, perhaps his best-known piece, which is built around a custom software patch that generates an infinitely variable and infinitely long piece of music by imitating the kinds of choices made by live performers. Blurring the distinction further, Feel Free was conceived to be presented in three forms, each of which combines man-made and machine-generated sounds in a slightly different way—an installation where the software patch is set in motion and left alone, a solo performance in which Pitre takes control of the patch and plays along, and an ensemble piece which takes the solo version and adds a group of live performers who have been asked to react to the patch’s randomized output in real time. Of course, none of this wizardry would be remarkable if the music weren’t good, but Pitre’s releases are reliably luminous—each one a spot of thoughtful calm and warmth in an otherwise frenzied world.


Software patch used to perform synthesizer and electronics portions in solo and ensemble performances of Feel Free.

Solo performance, Bologna, IT at Raum - 11.8.13, photo via Luca Ghedini.

We caught up with Pitre as he finished up a European tour and prepared for his Important Records release of a Feel Free live-ensemble performance recorded at London’s storied Café Oto. We talked programming software to act like humans, asking humans to act like software, technological accidents and surprises, and the freedom of making music with computers.

The Creators Project: How would you describe the way that you use technology in your music?
Duane Pitre: The technology I’ve used in my work—over the last four years or so—is largely computers. I use them for a few major things, one being to create chance-based content generation systems.

Can you explain a little bit about how one of these systems works?
My systems utilize a type of “chance” and “randomness” that is designed in the image of the human performers that I’ve used in my past ensemble works. This automated aspect of the system mimics the choices that human performers can make when given certain freedoms within the performance of my compositions.

Essentially, the content that the systems are generating is pitch information. Most of my ensemble compositions give the performers a “pool” of pitches they can choose from to use at certain points of time within the piece/performance—for structured improvisation. This pool of pitches changes throughout the piece, maybe from movement to movement, for example. Different instruments get different pitches, and this is usually based on the range of the instrument.
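To make the idea concrete, here is a minimal Python sketch of such a chance-based pitch-pool system. The pools, instruments, and ranges below are invented for illustration; Pitre’s actual systems are built in tools like Max and Ableton Live, not in code like this.

```python
import random

# Hypothetical pitch pools per movement (as MIDI note numbers); the actual
# pools in Pitre's scores are not published here, so these are invented.
MOVEMENT_POOLS = {
    "movement_1": [48, 55, 60, 62, 67],
    "movement_2": [50, 57, 62, 64, 69],
}

# Hypothetical playable ranges (low, high) per instrument, in MIDI notes.
INSTRUMENT_RANGES = {
    "cello": (36, 76),
    "violin": (55, 103),
}

def pitches_for(instrument, movement):
    """Restrict the movement's pool to the instrument's playable range."""
    low, high = INSTRUMENT_RANGES[instrument]
    return [p for p in MOVEMENT_POOLS[movement] if low <= p <= high]

def choose_pitch(instrument, movement):
    """Mimic a performer freely choosing one pitch from the allowed pool."""
    return random.choice(pitches_for(instrument, movement))

if __name__ == "__main__":
    # Four chance-based choices for a violin in the first movement.
    print([choose_pitch("violin", "movement_1") for _ in range(4)])
```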


The Core: Software patch that randomly arranges guitar notes for performances of Feel Free.

And how do you use computers to generate this pitch information?
There are a few ways that I deal with the generation of pitch information in my systems. For my solo live works, I’m currently accomplishing this by randomizing the pitch information using a few tools in Ableton Live. There are many tools that can accomplish this kind of process, and each one seems to have its own varying results. One of my recent choices has been to use the combination of an arpeggiator and a MIDI taxation device, which stops every note sent by the arpeggiator from being the same length. It only allows a certain percent of MIDI information through, so if you have the taxation set to 50, it will randomly stop—or mute if you will—50 percent of the notes passing through. I can change this percentage in real time during a performance because I map these parameters to hardware computer controllers full of knobs and such. I look at it as a form of conducting. My systems are somewhat pliant and can be adjusted to my liking in real time via this unconventional conducting, whether that is signals to an ensemble via my hands, or the turning of knobs and such with the computer controllers.
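Pitre does this with devices inside Ableton Live, but the “taxation” idea translates readily to a short sketch. Below is a hypothetical Python gate that randomly mutes a settable percentage of notes; the class name, note values, and percentages are stand-ins for illustration, not his actual setup.

```python
import random

class TaxGate:
    """Randomly mutes a settable percentage of incoming MIDI notes,
    loosely modeling the 'taxation' behavior described above."""

    def __init__(self, tax_percent=50.0):
        self.tax_percent = tax_percent  # the knob-mappable parameter

    def set_tax(self, percent):
        """Adjust the mute percentage in real time, like turning a knob."""
        self.tax_percent = max(0.0, min(100.0, percent))

    def process(self, note):
        """Return the note if it passes, or None if it is muted."""
        if random.uniform(0.0, 100.0) < self.tax_percent:
            return None  # muted
        return note

# A toy arpeggiator line feeding the gate: at 50, roughly half pass through.
gate = TaxGate(tax_percent=50.0)
arpeggio = [60, 64, 67, 72] * 4
print([n for n in map(gate.process, arpeggio) if n is not None])
```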

But in the end, MIDI is just the language, and not something I’m stuck on using. If a better way shows itself to me, I’m open to using it and learning the language.


These technological systems seem like they do a lot of work on their own. Do you see them more as a natural extension of yourself or a way to let yourself go?
Both, I believe, though I suppose I’ve not thought about it before. This way of organizing content seems very natural to me, intuitive and very satisfying. It’s the way I worked before I even knew what I was doing. For instance, when I improvised with guitar and effect pedals over a decade ago, I had my own rules in my head—ones I didn’t really think about—and I had freedom within those boundaries. I also use the systems as a way to let go, to let things happen as they will.

That said, these events are highly controlled by pre-determined parameters, like in nature. With some of my systems I manipulate them in real time, changing the parameters that affect the results, like a puppet master; it’s sort of like conducting a composition that utilizes chance but relies on cues to shape it in that moment.

I look at it sort of like I’ve created a new species of tree. And for each performance, installation, or recording, I plant that seed and let it grow. But instead of letting it grow wild, I prune it, water it a bit, etcetera. Sometimes I think it’s a bit like the art of Bonsai.

W/ Agathe Max - Lyon, FR - 6.23.12, photo via Lauren Cecil.

Does the amount of freedom you give to the technology change depending on the form the piece takes? Feel Free, one of your recent works, takes three different forms: a solo piece, an ensemble piece, and an installation. I imagine the solo piece as giving the technology the least freedom and the installation giving it the most.
I see where you are coming from when you say that, but it isn’t so much that the technology has more freedom in the installation version; it’s just that it plays a bigger role in it. It is one of the aspects of the composition that I love, that the freedom of the technology stays pretty much the same in each form. The technology is the core of the piece—so much so that I started calling the guitar harmonics that are randomized by a Max patch, “The Core.”


“The Core” came first—this is what Feel Free really is, actually. I was working on a very simple Max patch that would randomize the playing of audio files. Once I completed it, I needed some content, or audio files, to insert into it. I’d recently recorded a bunch of individual guitar harmonics by playing the 12th-fret “harmonic” on about a dozen cheapo guitars, all of which I had collected and tuned to a just tuning for an older composition called Origin. This Max patch is the essence of the piece, so it needs to be consistent, and it stays the same in each form of the composition, more or less. There is some functionality that needs adjusting for each form of the piece, but these are more “behind the scenes” things and they don’t really affect what is actually heard.
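The patch itself lives in Max rather than in text code, but its chance-based playback of audio files can be sketched in a few lines of Python. The file names and timing range below are invented for illustration only.

```python
import random
import time

# Stand-ins for the recorded 12th-fret harmonic samples; the real patch
# loads actual audio files, so these names are purely illustrative.
HARMONIC_SAMPLES = [f"harmonic_{i:02d}.wav" for i in range(12)]

def core_loop(events=8):
    """Randomly choose a harmonic sample, then wait a random interval,
    sketching the patch's chance-based arrangement of audio files."""
    for _ in range(events):
        sample = random.choice(HARMONIC_SAMPLES)
        gap = random.uniform(0.5, 3.0)  # seconds until the next event
        print(f"trigger {sample}, next event in {gap:.2f}s")
        time.sleep(gap)

if __name__ == "__main__":
    core_loop()
```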

Do you find that these systems leave enough room for accidents and surprises when you’re performing? Or do you find that once you have figured out more or less how one of your systems will sound, you enjoy the repetition instead of the change?
My systems always allow for accidents and surprises, no doubt, as this is part of the joy of working with them. The repetition is there too, in the form of the rules and pitch information used. Often people can’t tell that many of the events in my pieces are randomly arranged, and that’s kind of the point. I don’t want that aspect—the “random” one—to jump out at you; I just want it to be a feeling that is intrinsically understood, rather than consciously perceived. Again, there is the analogy of trees. If you see a field of live oak trees, it isn’t repetitively boring. Each one is unique and has its own characteristics—but of course one can see that all the trees are of the same species.


Your work relies a lot on different musical tuning systems, which are usually pretty laborious to change. How has working with computers enabled you to expand your work in this regard?
Many ways over the years. Initially it allowed me to carry out sine-tone experiments and simply learn to hear what Just Intonation sounded like. Later down the line, it allowed me to co-create the Max patch that I use in Feel Free, but which is not just for Feel Free. The patch is totally tunable, in any tuning system you’d like. It takes text files that you create, each file being a different tuning, and pulls those into a drop-down menu within the patch. You can then choose from all the tunings that you’ve created, and you can do this on the fly. Changing tunings is very easy this way, and the result is that I have way more time to actually make music. I also re-tune my analog synths—ones that do not have built-in tuning capabilities—in Just Intonation by sending them data from my computer.
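The exact format of those text files isn’t specified in the interview, but assuming each file lists one just-intonation ratio per line, a hypothetical loader might look like the Python sketch below; the file name and base frequency are only examples.

```python
from pathlib import Path

def load_tuning(path):
    """Read a tuning file with one just-intonation ratio per line,
    e.g. '1/1', '9/8', '5/4', and return the ratios as floats."""
    ratios = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line:
            num, den = line.split("/")
            ratios.append(int(num) / int(den))
    return ratios

def scale_frequencies(ratios, base_hz=261.63):
    """Multiply a base frequency by each ratio to get the scale in Hz."""
    return [round(base_hz * r, 2) for r in ratios]

if __name__ == "__main__":
    # Write a demo file for a just-intonation major scale, then load it.
    Path("just_major.txt").write_text("1/1\n9/8\n5/4\n4/3\n3/2\n5/3\n15/8\n2/1\n")
    print(scale_frequencies(load_tuning("just_major.txt")))
```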

How did you begin using computers in your music? You’ve mentioned in interviews that it was due to necessity—having no access to ensembles that could play your music—but it also seems to be a natural conceptual progression from your earlier work. Did it feel that way?
The progression of the path I’ve taken in music has felt quite natural to me, though I’d imagine this is usually the case with others. I started using computers for two main reasons: to better facilitate my work with Just Intonation and to free myself, in large part, from being reliant on ensembles to perform my work—usually with me being involved. The latter is something I very much love doing, and there’s really no replacement for having a group perform a work, but it is very difficult to pull off on a regular basis, in my situation at least. When I lived in NYC it was possible—that’s where I started working with ensembles and chance-based systems—but it was still a lot of work. But when I moved to New Orleans I knew that it would be next to impossible to pull off such performances with no budget to do so. So I looked for a different way to work with multiple “voices” and my systems. Computers seemed to be a logical way to do this, and I was already using them in NYC with some early electronic experiments. But as my solo work has evolved over the last few years, I’m no longer trying to always fill the “ensemble void.” That isn’t always my train of thought when using computers at this point.


Do you prefer this way of working? Or do you prefer using ensembles?
This changes; I go back and forth, and I never know when I’ll favor one over the other. I’ve been focusing on solo works for a couple of years now and I’m very much still into going down that path. A mixture would be ideal, if the funding to deal with ensembles was there. So I suppose I just try to average out my feelings toward them both.

Can you hear a difference between the music that an ensemble produces and the music produced by synths/samples? I’ve tried listening to similar snippets of different recordings of Feel Free, one right after the other, and there are times I’d be hard-pressed to identify which one was made by which method.
Absolutely, and there is no replacing a real ensemble. With Feel Free, I recorded the guitar harmonic samples and that is the only element that is computer-arranged in the solo and ensemble versions. The rest of the instruments are performed by humans—even my synth, via a controller. The performers are instructed to pay attention to the computer-arranged guitar harmonics, which informs how we react, though only at times. I found this to be a great way of becoming one with “The Core,” simulating its choices and style.

When I’m using synths and samples, it is a large priority to make the resultant work sound as organic as possible. Once I finished my most recent album, Bridges, I sent it to a friend who went to Mills College for the electronic music and recording MFA program. He thought it was an actual ensemble playing together in the same room, recording live—it’s not. Hearing this from my friend made me smile quite big. I used technology to facilitate this result, one I was quite pleased with, and I did so out of necessity, because doing it the live ensemble way wasn’t logistically or financially possible.


Rehearsal at St. Giles-in-the-Fields Church - London, UK - 10.25.08, photo via Lauren Cecil.

It seems like necessity has helped you find a fertile middle ground between “live” performances and “laptop” performances, one where the distinction has become much less important than it once was. Is this fading distinction between man and machine in your work something you’ve thought about?
Yes, it’s made me quite uneasy, many times over. I started playing music in various experimental-minded bands back in the day, playing electric bass and electric guitar. So, having that tangible interaction with an instrument is important to me and how I learned. But my path has led me to using computers and synths, and that tactile experience is somewhat lost. I’ve had to find ways of making my live performances more interesting—though I hesitate to use that word—seeing that I’m just pushing buttons and turning knobs, essentially. One way I’ve done this is via immersive, live visuals. But I’m still searching for a middle ground, a marriage of computers/electronics with the human, physical experience of performing.

Check out Duane Pitre live below:

4'33'' Café presents Duane Pitre at Base Elements Art Gallery, Barcelona, 11/06/2013. Video via 4'33'' Café.