Tech by VICE

Know Your Language: Csound May Be Ancient, But It's the Audio Hacking Future

Behind the janky-looking code, Csound offers sonic spatialization, live coding, and machine-level music production.

by Michael Byrne
Jul 22 2015, 1:53pm

Image: Pixabay

Csound is a creaky-feeling, highly domain-specific programming language adored by an enthusiastic nucleus of musicians and audio programmers. At first glance, it's esoteric as hell, having little to do syntactically with other programming languages and appearing instead as an ungainly hybrid of markup, assembly language, and C. Understanding why Csound is what it is takes commitment and perhaps even a leap of faith. The payoff, however, is machine-level audio rendering and, arguably, a level of control-slash-extensibility untouched by any digital audio workstation (read: Logic, Pro Tools) and/or audio programming environment (read: Max/MSP, Pure Data, SuperCollider).

"Csound is a sound renderer," explains Richard Boulanger, a Berklee College of Music professor and one of Csound's leading acolytes, in the textbook Introduction to Sound Design in Csound. "It works by first translating a set of text-based instruments, found in the orchestra file, into a computer data-structure that is machine-resident. Then, it performs these user-defined instruments by interpreting a list of note events and parameter data that the program reads from: a text-based score file, a sequencer-generated MIDI file, a real-time MIDI controller, real-time audio, or a non-MIDI devices such as the ASCII keyboard and mouse."

Image: CSounds.com

One of the most basic appeals of Csound is simply that it's text-based. It is a "pure" programming experience. Unlike, say, Max/MSP, a similarly powerful albeit graphical audio programming tool based on data flow, Csound opens itself to the general world of algorithms by supporting fundamental programming-language constructs, including basic control structures like loops and conditional (if ... then) statements. And so sound design and composition become natural extensions of bare code.
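As a rough sketch of what those control structures look like (this example is mine, not from the article; the pitch series and instrument numbers are arbitrary), a Csound instrument can loop and branch at init time to schedule a little arpeggio:

```csound
instr 1
  ; i-rate loop: schedule four notes, a quarter second apart
  icnt = 0
  while icnt < 4 do
    ifreq = 220 * (2 ^ (icnt / 4))   ; arbitrary rising pitch series
    if ifreq > 400 then
      ifreq = ifreq / 2              ; fold high notes down an octave
    endif
    schedule 2, icnt * 0.25, 0.2, ifreq
    icnt += 1
  od
endin

instr 2
  a1 oscili 0.3, p4   ; p4 = frequency passed in by schedule above
  outs a1, a1
endin
```

The `while ... do ... od` and `if ... then` forms here are standard Csound 6 syntax; older code often uses goto-style branching instead.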

Csound on its own has no graphical interface—it's composed in a generic text editor and then compiled (or rendered) via the command line. In practice, however, Csound code is more likely to have been created in one of several Csound-specific development environments, including CsoundQT, which now comes prepackaged with Csound installations. The big advantages of such a tool are the ease with which GUI elements (widgets) can be created, step-through debugging, and a great big "play" button ready to render your project at a click, accompanied by a "record" button to save the results to a desired audio format.

This is the "hello, world" of Csound, which, in addition to the usual printed output, features a basic sin-wave test tone. Note the two halves describing first the instrument to be utilized and the information to be performed. It's a bit like object-oriented programming in that the generic form of some structure or set of instructions is created and then instantiated with specific parameters elsewhere. The generic object declares a string of text, while the instantiated object declares that string of text to be "hello, world."

Image: by the author

Csound has a deep history, beginning before most of the general-purpose programming languages you know, which partially accounts for its peculiarities. It's the descendant of a long lineage of languages known as MUSIC-N, all of which are themselves descendants of the original MUSIC. MUSIC itself was created in 1957 at Bell Labs by a 30-year-old engineer named Max Mathews and was the earliest program capable of generating digital audio waveforms, for music or any other sound-based application. The very first MUSIC program/composition was a proof-of-concept called "The Silver Scale." A few months later came "The Pitch Variations."

From MUSIC's 1957 birth to the latest Csound release, the MUSIC-N languages have all shared the same core design: signal-processing or synthesis routines called opcodes (or unit generators) are combined in different ways to create instruments. The basic idea is that these opcodes pass audio or data signals from one to another, with each successive opcode adding some new processing. It's text-based audio patching.
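That patching idea can be sketched as a chain where each opcode's output variable feeds the next one's input (again, this is an illustrative instrument of my own, not one from the article):

```csound
instr 1
  asrc  vco2       0.4, p4            ; sawtooth oscillator at pitch p4
  acut  line       2000, p3, 400      ; cutoff sweeps down over the note
  alp   moogladder asrc, acut, 0.6    ; filter the oscillator's signal
  aout  linen      alp, 0.01, p3, 0.1 ; envelope the filtered signal
  outs  aout, aout
endin
```

Each line reads as one patch cord: `asrc` out of the oscillator, into the filter; `alp` out of the filter, into the envelope; and so on down to the speakers.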

Instruments, which are collections of opcodes working together, are then called or, properly, instantiated in the "score" section of the program. In this way, text gives birth to sound, and that sound is digital music. Beyond Csound, many of the ideas Mathews realized via MUSIC and its children—such as array-based storage of waveform data—remain hardware and software audio-processing standards.

Beyond Stereo

A cool thing about the openness of Csound is its capability for programming not just music, but music environments: spatialization. It's possible to compose music to be performed across many different channels in a really natural way—as natural as crafting two-channel/stereo music in a more conventional digital audio workstation. Imagine an orchestra of individual speakers, but go well beyond: a room full of speakers, each offering some distinct but related sound or sounds. Music becomes a place.
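In Csound terms, going beyond stereo is mostly a matter of declaring more output channels and routing signals to them; a minimal quad sketch (my own, with speaker assignments passed in from the score) might look like:

```csound
sr     = 44100
ksmps  = 32
nchnls = 4       ; four output channels instead of two
0dbfs  = 1

instr 1
  asig oscili 0.2, p4
  ; p5 picks the speaker (1-4); a real piece might pan
  ; continuously between channels instead
  outch p5, asig
endin
```

With score lines like `i 1 0 2 220 1` and `i 1 0 2 330 3`, each note is placed at a specific speaker in the room—the "where" becomes just another note parameter.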



Live Coding Music

Expect to see a lot more of this in the near future: music "hackers" performing shows based entirely on a command-line interface. It's already much easier than you might imagine to do this on a basic level—as you're reading this, I could write a few lines of code in C to play a simple sine-wave melody. But its potential, the low-level control offered by live coding music, could change the entire notion of computer music, or at least the still-pervasive notion of computer music as "pushing a button."

Shoving electrons around a computer in a (more or less) direct fashion is no less literal than moving a bow across a string or hitting a drum or singing a note. I would hope, anyway.


Technically, Csound enables live coding in probably the most natural way in the whole of digital music. Csound is, after all, just code.

GETTING STARTED

Csound has a somewhat steep learning curve. It doesn't look like any other programming language or behave like any other audio production environment. There are a few graphical tools created by outside developers, like blue and Cabbage, but I've tended to avoid these because, dunno, it seems to defeat the purpose of using a language like Csound in the first place. If you're more comfortable with conventional timeline-based digital audio workstations than you are with code, though, a GUI-based tool might be the way to go.

That said, Csound comes with loads of documentation, from the official and FLOSS manuals to Boulanger's book and several others; it's not hard to find a suitable guide. (There is even a Csound journal and a yearly Csound conference.) CsoundQT, which comes with a huge suite of examples, is a lightweight, free editor that's text-based with some visual elements. It's installed automatically with Csound. The whole package can be found here.

