Paul Lansky

Paul Lansky is the most celebrated and musically influential of the computer musicians at Princeton, and it isn’t only because he was famously sampled by Radiohead on their classic album “Kid A.”

His work expanded the boundaries of computer music and speech synthesis into territory far from the art’s musically difficult twelve-tone beginnings. In the words of current Princeton Music Professor Dan Trueman, “He invites you to listen however you want… It’s this place you go and you find your own way.” Or as his former student Frances White said, Lansky was able to bring “computer music into a much more open and beautiful place.”

This episode is a celebration of the life’s work of Paul Lansky, as well as his collaboration with a Princeton engineer, Ken Steiglitz, that made much of that work possible. We’ll hear a wide sweep of his computer music from throughout his multifaceted career. And we’ll look at Lansky’s work building software, as well as the similar efforts of fellow composer Barry Vercoe, whose CSound technology left a lasting imprint on software that musicians still use today.

Lansky is the William Shubael Conant Professor of Music, emeritus. Steiglitz is the Eugene Higgins Professor of Computer Science, emeritus.

 

List of music by Paul Lansky played in this episode:

“mild und leise”

“Notjustmoreidlechatter”

“Artifice (on Ferdinand’s Reflection)”

“Six Fantasies on a Poem by Thomas Campion”

“Pine Ridge”

“Idle Chatter”

“Just More Idle Chatter”

“Table’s Clear”

“The Sound of Two Hands”

“Shadows”

“The Things She Carried”

 

List of other historical music played in this episode:

 

“Idioteque,” Radiohead

“Speech Songs,” Charles Dodge

“Synthesism,” Barry Vercoe

“Walk Through Resonant Landscape #2,” Frances White

“Contraption,” Alicyn Warren

 

Sources (other than interviews):

“Reflections on Spent Time,” Paul Lansky, keynote address to the International Computer Music Conference, August 18, 2009.

“Interview – Barry Vercoe,” RNZ, July 5, 2013.

Liner notes, “Computer Generations.” (Barry Vercoe)

 

 

Transcript:

<<MUSIC>>

Aaron Nathans:

You’re listening to the first song Paul Lansky created and completed on a computer. It’s called “mild und leise,” which takes its name from a German opera. Lansky will be the first to tell you this early piece is, in his own words, “an immature work.” But it’s a really interesting piece. And now it’s also incredibly famous. You might have heard it, albeit in a very different context.

Aaron Nathans:

Lansky realized the piece in 1973, at the Princeton University Computer Center at 87 Prospect Street, using the IBM 360/91 mainframe computer. Lansky would say later that he sweated bullets over every note. He entered the piece into a contest sponsored by the International Society for Contemporary Music and Columbia Records. And he was one of the winners. It ended up on an album, which was found in a record store bin more than 25 years later by Jonny Greenwood of Radiohead, who was a fan of old electronic music. Greenwood used a short clip from the song to create a pastiche tape for his bandmate, Thom Yorke, to write a song against. Here’s Radiohead’s “Idioteque,” which samples Paul Lansky’s “mild und leise.” <<MUSIC>>

Aaron Nathans:

But there’s a lot more to the story, and the artistry, of Paul Lansky. As with previous computer music advances at Princeton, his advances came with some important help from an engineer. <<THEME MUSIC>> From the School of Engineering and Applied Science at Princeton University. This is “Composers and Computers,” a podcast about the amazing things that can happen when artists and engineers collaborate. I’m Aaron Nathans. Part four, Idle Chatter.

Aaron Nathans:

Paul Lansky was born in 1944. He was named by his parents for the legendary singer and political activist, Paul Robeson, whose birthplace and childhood home was in Princeton. So you might say that from the beginning, Lansky was destined for a musical career in Princeton. He was born to a politically progressive mother and a recording engineer father. He grew up in the South Bronx and attended the High School for Music and Art in Manhattan. He got his undergraduate degree at Queens College, where he studied composition and French horn.

Aaron Nathans:

He came to Princeton in 1966 as a graduate student and was immediately taken with the new IBM computer in the EQuad. His teachers included Milton Babbitt, one of the leaders of the 12-tone serialist music movement, which was all the rage among classical musicians at the time. Lansky would say in a speech more than 40 years later that these were exciting days. Quote, “We felt we were on the forefront of a real revolution.” He said in that speech, “Perhaps I’m just remembering the excitement of being 22 and coming into a new high-powered environment. But as I look back, I’m certain that something unusual was going on. Princeton was a happening place.” Here’s Paul Lansky himself from our recent conversation at his home.

Paul Lansky:

It was, I was enamored of 12 tone theory and I was studying with Milton Babbitt at the time. And he would listen to my tapes and I don’t think he had anything to say. I worked on a piece in 1969, I think, and it was sort of a 12-tone piece. And it used things called combinatorial tetrachords. That’s four-note chords that you can transpose three times. So you make all 12 notes. And I worked on it very, very diligently for about a year and a half. And finally I listened to it one day and I said, this just sounds awful. And so I gave up and I didn’t come back to the computer until 1973.

Aaron Nathans:

During that four-year period, he focused more on music that used traditional acoustic instruments. But in doing so, he found himself frustrated by their limitations. And he made the same observation about the IBM at the Princeton Computer Center that Babbitt did about his machine of choice, the RCA synthesizer. The machine could be more responsive to a composer’s will than a human could ever be.

Paul Lansky:

I had written a string quartet for my master’s, basically… Maybe it wasn’t. I don’t remember, it was late seventies… Late sixties. I wrote a string quartet, and it was played by a group, I think it was called the American Quartet. And they played it in Carnegie Recital Hall. And I thought it was terrible. It was too long. It was about 45 minutes long. And it was kind of boring. And I discovered the composer’s perp walk, where after the concert, you notice everyone tries to walk away from you to avoid having to say anything. So it was not a great experience. And the thing I loved about working on the computer is that I actually worked with the sound, with the physical sound itself. And I couldn’t do that with the string quartet. So I didn’t have a string quartet. I really liked to work with the physical object of the sound. And so working with the computer was a way to do that.

Aaron Nathans:

And so he created the aforementioned piece, “mild und leise,” the one later sampled by Radiohead. He realized it in the Computer Center using what was then the most recent version of Max Mathews’ music sound synthesis program. It was called Music 360, and it was designed by a postdoc from New Zealand, Barry Vercoe, whom we’ll discuss later in this episode.

Aaron Nathans:

In our previous episode, we talked about Godfrey Winham, the man who did as much as anyone to promote computer music at Princeton. He built composing and sound synthesis software. He taught classes, and he created a digital-to-analog converter so the computer musicians could hear their music without having to leave campus. And toward the end of his short life, Winham was working with engineering professor Ken Steiglitz on the next frontier of computer music: synthesizing the human voice, taking spoken recordings and manipulating them to sound octaves higher or lower.

Aaron Nathans:

It was amazing work, but he never finished it. Winham died during treatments for Hodgkin’s disease in 1975, at the age of 40. Winham’s passing was a blow to the community of computer musicians at Princeton, those who had gathered around tables at the Computer Center as they waited for their musical works to be processed. They were a close-knit bunch, and now their unofficial leader was gone. Winham was more than a composer who happened to be a wizard with the computer. He was also a great friend to many of them, and a teacher as well. During Winham’s last days, Lansky visited him at his house, bringing over records that Winham could listen to as he lay in bed. Lansky had seen Winham and Steiglitz work up close, and he had taken a class on computer music with Winham. Now Lansky was ready to pick up where Winham left off and work with Steiglitz on research into synthesizing the human voice.

Aaron Nathans:

Lansky said he can still, in his mind’s ear, hear Winham’s synthesized voice saying “This song was sung by an IBM 360 model 91.” Lansky was excited by the work of another composer who was roaming the halls at the Engineering Quadrangle during this period. When Godfrey Winham was ill, he sent composer Charles Dodge a box of cards with the software he had created for something called linear predictive coding. Dodge, a former instructor at Princeton, was by now on the faculty at Columbia, but he still visited Princeton frequently to use the computer equipment. He created a piece called “Speech Songs” that Lansky found inspiring. <<MUSIC>>

Aaron Nathans:

Linear predictive coding was a technology nurtured at Bell Labs, among other places, in the late 1960s, in order to send the human voice over telephone wires in digital form, using much less data. Today, linear predictive coding is used in cell phone technology, but in the early days it was an art and science practiced by precious few. There had been some early attempts at computer speech synthesis, namely Max Mathews’ “Bicycle Built for Two.” But when it came to generating sound or music with computers, speech synthesis was the great electronic sound frontier of the late 1960s. Lansky would first use linear predictive coding, or LPC, on his 1976 computer piece “Artifice (On Ferdinand’s Reflection).” <<MUSIC>>

Aaron Nathans:

You might recognize linear predictive coding technology from the early 1980s toy, Speak and Spell.

Speak and Spell machine:

You are correct. Let’s spell “lest.” F. Wrong, try again.

Aaron Nathans:

Linear predictive coding breaks each sound into speck-sized samples, 40,000 per second. Here’s Ken Steiglitz, a Princeton emeritus professor of computer science. He’ll tell us about how linear predictive coding reproduces the human voice.

Ken Steiglitz:

And the way the human voice is produced is usually modeled as having two main parts, the source and a filter. The source can be the… Often the vocal cords. If I’m saying something that’s a voiced… I’d say a voiced vowel like ah, E, O. I’ve got a… An oscillator. I’ve got a… An input generator, much like the input generators you would use in Music 4B. But the source can also be noise, a turbulent noise in the vocal tract, if I want to utter a sibilant, like -sss or -sh. So there’s two kinds of sources, roughly speaking, and mixtures of the two. And then that’s all filtered by passing the generated wave through the vocal tract, which includes the mouth, shaped by the cheeks and the tongue. Some of the sound goes through the nose. And so it’s a very complicated, constantly moving piece of equipment that we carry with us. And it’s kind of a miracle that we learn how to manipulate it, to make the sounds that we do.
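
To make the source-and-filter idea concrete, here is a minimal sketch in Python (with NumPy and SciPy) of the textbook autocorrelation approach to LPC: estimate an all-pole filter from a frame of recorded speech, then drive that filter with a synthetic source. The frame size, filter order, and pulse-train source below are illustrative assumptions, not the specifics of the software Lansky and Steiglitz built.

```python
# A rough sketch of LPC-style source-filter resynthesis, not Lansky and
# Steiglitz's actual code. The filter order and pitch values are illustrative.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

SR = 40000  # samples per second, as described above

def lpc_coefficients(frame, order=12):
    """Estimate an all-pole 'vocal tract' filter from one frame of speech."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.concatenate(([1.0], -a))  # denominator polynomial [1, -a1, ..., -ap]

def resynthesize(frame, pitch_hz=150.0, order=12):
    """Drive the estimated filter with a new source."""
    a = lpc_coefficients(frame, order)
    source = np.zeros(len(frame))
    source[::int(SR / pitch_hz)] = 1.0      # pulse train: a voiced source
    # source = np.random.randn(len(frame))  # noise: an unvoiced (s, sh) source
    return lfilter([1.0], a, source)
```

Because the source and the filter are kept separate, the pitch and the speed of the speech can be changed independently, which is exactly the freedom Lansky describes a little later in this episode.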

Aaron Nathans:

Lansky took that approach to another level on his next piece.

Paul Lansky:

And then in ’78, I wrote what I considered to be my first real piece, “Six Fantasies on a Poem by Thomas Campion,” which used my wife’s voice. And I just left her in a room to record this poem. It’s a beautiful poem. And it celebrates what’s called quantitative verse, where the rhythm of the poem is made out of the vowel sounds rather than the consonants. And after I finished that, I was convinced that this was what I wanted to do. So the piece is still played. You can hear it on Spotify. <<MUSIC>>

Paul Lansky:

There was… It was an exciting time. I remember picking up my son at nursery school and I said, Jonah, we have to go to the D/A converter. And so I take him down there.

Aaron Nathans:

In the EQuad?

Paul Lansky:

In the EQuad. This was in ’78, ’79. And Jonah would… I was working on the Campion Fantasies. And Jonah would hide under the desk as his mother’s voice came out, transmogrified, over the speakers.

Aaron Nathans:

Here’s Dan Trueman, a present-day professor of music at Princeton.

Dan Trueman:

So I mean, the piece that people talk about from the seventies with Paul is this “Six Fantasies on a Poem by Thomas Campion.” And I… That wasn’t my first experience with Paul’s piece. It was… Paul’s music, it came, it was probably my second or third. But it’s an extraordinarily… I mean, I just think it’s a gorgeous piece, just the humanity of that piece, in terms of the sense of getting into somebody’s voice and the quality of language.

Dan Trueman:

And just how that comes, comes to life in this totally new way, using this technique that he and Ken really worked on together, the linear predictive coding. That was just not… I don’t think there was any, anything close to that at the time. I mean, it was just… From my reading of the history, again, I wasn’t listening to this music at the time. I was a kid at the time, but when I sort of retroactively go through that history, this just stands out. It’s just like a shooting star of this incredible vision and sound and musicality that’s coming through in that.

Paul Lansky:

And that’s how I did the Campion. So I recorded Hannah’s voice, and then analyzed it using linear predictive coding. So I ended up with a set of filters that was her voice. And as a result, I could slow the speech down without changing the pitch. And I could change the pitch without changing the speed. A marvelous experience with Ken was… One day we were both at the converter and I asked him if he could change a woman’s… He changed the vowel formants of a woman’s voice to a man’s voice. So you could change… Or you could change a man’s voice to a woman’s voice by raising the vowel formants, because female and male speech have different vowel formants, different resonance… Points of resonance in the spectrum that are significant for speech. And Ken figured out a way to do that. We wrote an article together; it’s published in the Computer Music Journal.

Ken Steiglitz:

It’s a way of essentially then taking the speech apart, building a model for it, and then playing it back… Putting it back together. When you put it back together, when you re-synthesize the speech, you have the option of playing with all the parts. If you play with the source, you can change the pitch. So if I re-synthesize “Mary Had a Little Lamb,” there’s a certain trajectory of the pitch during that sentence.

Ken Steiglitz:

If you think about it, we’re constantly adjusting the pitch of how… Of the speech that we make. Questions end in pitch going up and so on. So we really are constantly, unconsciously, without giving much thought at all, constantly adjusting the pitch of the way we speak. So one could impose one’s will on that pitch, on what people would call the pitch contour. One can impose one’s will on that pitch and put it in notes, and that should produce singing.

Ken Steiglitz:

And Paul asked, well, suppose I wanted to change the instrument, say from a smaller instrument to a larger instrument; a violin would be an example. Well, why not make a viola and a cello and a bass and make a string quartet? Why not? So the question is, how do you change the filter to reflect the fact that the size of the instrument is changing? That maybe the resonances are all analogous, but it’s just bigger or smaller. And that turns out to be an interesting problem. And it’s a problem in how to fool around with digital filters. And that’s something I… That I dream about.
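
One simple way to attack the “bigger instrument” question, sketched below in Python, is to move the poles of the all-pole filter: scaling every pole’s angle toward zero slides all the resonances down in frequency, roughly as if the resonating body had grown. This is a generic illustration, not necessarily the method Steiglitz actually worked out for “Pine Ridge.”

```python
# A generic sketch of shifting a filter's resonances, not Steiglitz's method.
import numpy as np

def scale_resonances(a, factor=0.8):
    """Lower every resonance of an all-pole filter by 'factor' in frequency.

    'a' is the denominator polynomial [1, -a1, ..., -ap] of the filter;
    factor < 1 mimics a larger instrument, factor > 1 a smaller one.
    """
    poles = np.roots(a)
    scaled = np.abs(poles) * np.exp(1j * np.angle(poles) * factor)
    return np.poly(scaled).real
```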

Aaron Nathans:

So Lansky, with the help of Steiglitz, synthesized a small string ensemble and wrote a piece for it called “Pine Ridge,” in which he took a violin and transformed its sound into that of a viola, a double bass, a violin, and a cello. <<MUSIC>>

Ken Steiglitz:

I think it’s a very nice example of science and art, sort of colliding and producing something that would be impossible otherwise.

Aaron Nathans:

Here’s Jeff Snyder, a senior lecturer in music at Princeton.

Jeff Snyder:

So he made an interesting piece called “Idle Chatter,” that he did a few different versions of. Where it’s sort of generating these kind of nonsense babbling from this LPC speech synthesis. That was one of the… It’s a classic piece of electronic music that people know really well.

Paul Lansky:

The thing that I used a lot, starting in the eighties, was random numbers. The thing I noticed about “mild und leise” is that it became old very quickly, that if you listen to it too many times, it becomes so predictable, because you’re hearing exactly the same thing. The waveforms are very simple and predictable.

Paul Lansky:

And one of the thrills of hearing live performance is the suspicion that the pianist may die at any moment. And with computer music, electronic music, you have no such fear. The worst thing that can happen is there’s a power failure. So I got interested in using random number techniques in “Idle Chatter,” which was ’84. I used a technique called random selection without replacement. So it’s… You have a bunch of objects in a hat and you draw one out and put it aside. And you keep doing that until the hat is empty. Then you take all the objects, you put them back in the hat and mix them up again. So there’s no way to predict what you’re going to get. It’s not a… You don’t get familiar patterns.
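
Here is a minimal sketch of that draw-from-a-hat idea in Python. The “syllables” list is just a hypothetical stand-in for whatever sound fragments a piece might shuffle.

```python
# Random selection without replacement, as Lansky describes it: empty the hat,
# then refill and reshuffle. The syllable names are placeholders.
import random

def hat_draws(items):
    """Yield items forever, exhausting a shuffled copy before refilling the hat."""
    while True:
        hat = list(items)
        random.shuffle(hat)
        while hat:
            yield hat.pop()

syllables = ["ba", "di", "ko", "tu"]
draws = hat_draws(syllables)
print([next(draws) for _ in range(12)])  # every block of four uses each syllable once
```

Each pass through the hat is unpredictable, but nothing repeats until everything has been heard, which avoids both exact repetition and long runs of the same sound.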

Aaron Nathans:

Like all his pieces up to that point, Lansky created that piece at the Computer Center, which had been at 87 Prospect Street since 1969. By then the mainframe was an IBM 3084, which, according to IBM records, maxed out at 64 megabytes of memory. By way of comparison, the phone in my pocket right now has 2,000 times more memory. Moore’s Law, indeed. But that was the last piece he created at a centralized campus computer center.

Aaron Nathans:

Something was about to change. That would be a positive development for the music department, but would also separate the composers from their fellow students in other disciplines. By the mid-1980s, the Music Department had received a state grant to purchase its own computers. The school received two DEC MicroVAX II computers. It used a grant from the National Endowment for the Arts to repurpose one of them into a digital-to-analog converter, just like Godfrey Winham had done 15 years earlier with the Hewlett-Packard. Suddenly there was no need for the composers to take their works into the EQuad so they could hear them. The Winham Lab, which had shown it was possible for musicians to convert their music on a small computer, had effectively made itself defunct. It would close by 1988. Now working from the Woolworth Building on Princeton’s campus, Lansky used the MicroVAX II to create his follow-up piece, “Just More Idle Chatter.”

Aaron Nathans:

By the time the Idle Chatter pieces started appearing, something else had changed. Paul Lansky was not quite satisfied with the existing music composition software. So in true Princeton fashion, he built that software himself, and he used it to make some of his most innovative music yet. That’s what we’ll talk about after the break. <<THEME MUSIC>>

Aaron Nathans:

If you’re enjoying this podcast, you might want to check out our other podcast, which also deals with technology. “Cookies: Tech Security & Privacy” deals with the many ways technology finds its way into our lives, in ways we notice and in ways we might not. If you’re looking to shore up the security of your personal data and communication, you’ll find some great tips from some of the best-informed people in the business. You can find “Cookies” in your favorite podcast app or at our website: engineering.princeton.edu. That’s engineering.princeton.edu. We’re halfway through the fourth of five episodes of this podcast, “Composers & Computers.” On our next episode, we’ll step into the present day. We’ll see how the work of the people we’ve profiled in this series has changed how digital music is made today. We’ll take a look at the innovative Princeton Laptop Orchestra, and we’ll look at other ways artists and engineers are collaborating at Princeton today. But let’s not get ahead of ourselves. Here’s the second half of part four of “Composers and Computers.”

Aaron Nathans:

In 1973, Paul Lansky created his most famous piece, “mild und leise,” the one sampled by Radiohead. He realized it in the Princeton Computer Center on the machine they had at the time, the IBM 360/91 mainframe computer. The software he used to create it was called Music 360.

Aaron Nathans:

Remember from our previous episodes that the composers in the Computer Center used various generations of composition software initially created by Max Mathews of Bell Labs. All of this software had “Music” in its name, or, as it’s called in the computer world, it was the “Music N” series.

Aaron Nathans:

The first of these used at Princeton was Music 3, then Music 4, and then Godfrey Winham and Hubert Howe created a more user-friendly version for composers called Music 4B. When the program was rewritten in the Fortran language, they created Music 4BF. People at other institutions created their own versions of Max Mathews’ software. But the computers kept getting switched out for newer, more powerful models. And every time that happened, the software needed to be updated as well.

Aaron Nathans:

When the Computer Center moved from the Engineering Quadrangle to 87 Prospect Street in 1969, a new machine was installed, an IBM 360/91. One of the composers took it upon himself to update Max Mathews’ music composition software for that machine. His name was Barry Vercoe. He was a Princeton postdoc in music from New Zealand. And he was on campus from 1968 to 1970, studying with Godfrey Winham. Here’s Barry Vercoe from his home in New Zealand, talking about Godfrey Winham.

Barry Vercoe:

Well, he was someone that I really looked up to. He was a musician and a good mathematician. And I had a lot of respect for Godfrey. And that’s what drew me to Princeton. So I ended up spending a couple of summers at Princeton, and then later on put in more time there to be with Godfrey actually.

Aaron Nathans:

Here’s a piece he created at the time in the Computer Center, “Synthesism.” <<MUSIC>>

Aaron Nathans:

He created Music 360 at Princeton; it was a prime example of a composer personally creating the tools he needed to realize his musical vision. Vercoe realized he couldn’t shape the phrases the way he wanted to. Musicians using traditional acoustic instruments are able to crescendo, or peak, in the middle of a note, and Music 4B wouldn’t allow that, nor would it allow control over how long a note took to decay. Music 360 addressed that problem. For a while, Music 360 was the most popular music synthesis program in the world. In a 2013 interview, Vercoe described himself, quote, “Basically as an artist, just solving problems as the need arose.” After he left Princeton, he continued to try and perfect his version of Max Mathews’ software. At MIT, he built a music technology program at the school’s Media Lab, and they worked on digital-to-analog converters there.

Aaron Nathans:

When the Digital Equipment Corporation created its more powerful computer, the PDP-11/34, complete with a graphical display, Vercoe rewrote the composition software, calling it Music 11. But that computer came and went too. Fast forward to 1986, when the C programming language was well established. Vercoe decided he wanted to create a version of the software that could work on a variety of machines. So he rewrote it, calling it CSound, a popular open-source composition program that remains in widespread use even today. The program still uses some of the same technology, the unit generators and score that Godfrey Winham and Hubert Howe built for Music 4B at Princeton in the mid-1960s. Here’s Hubert Howe speaking about CSound and Barry Vercoe.

Hubert Howe:

Well, I think it’s laid the groundwork for what, a lot of what came later and which is still being used and will be used for forever, I think. I mean, I think the work at Princeton, which ultimately led to this CSound program by Barry Vercoe, that’s probably the most important legacy of that early work at Princeton. Not just the programming that he did, but also the fact that he put that out there as open source and that’s… This has impacted thousands of people. I mean, the number of… I think there’s been contributions made to CSound from people on every continent, except Antarctica. That’s saying something. It’s really made an impact.

Barry Vercoe:

Of course other people have developed some of their own versions, or Paul Lansky. I’m quite attracted to stuff that Paul has done. Yeah. I’m… I admire some of the musical work he’s done. But he wasn’t afraid of jumping into the technology. So that, I mean, we had an overlap there.

Aaron Nathans:

Like Vercoe, Paul Lansky found himself at a crossroads where the existing technology didn’t allow him to produce the sounds he was hearing in his mind. Like Vercoe, he wrote his own program, and also like Vercoe, he would update it several times, including translating it into the popular computer language C. Unlike Vercoe, who made his career largely on the technology side, Lansky continued to be a prolific composer and was a regular customer of the technology he would create. Here’s Ge Wang, an associate professor at Stanford University’s Center for Computer Research in Music and Acoustics, who got his Ph.D. from Princeton in computer science in 2008.

Ge Wang:

I think there… I get the sense Paul made Cmix to really fit a kind of way of thinking that he wanted, in writing music.

Paul Lansky:

I got dissatisfied with these canned programs for creating sounds. So I wrote my own program.

Aaron Nathans:

For instance, Lansky didn’t like how, once you synthesized a piece of music, if there was a wrong note, the whole thing needed to be synthesized again, which was expensive.

Paul Lansky:

I came from a tradition of performance. I was a French horn player. And my analogy was, it’s like, if you miss a note, you don’t play the whole movement again just to get the note right. You rehearse that section. And so I discovered that with a computer, it was very easy to undo a note. You just synthesize the same thing with the minus amplitude, and it would subtract that note out of the mix. So you’d have a job that spent $4,000 in computer time, and you could correct one note just by un-synthesizing it and then re-synthesizing it… I wrote Mix, which was basically a mixing program on the computer. And you could generate sound in the middle. But you could basically also just deal with the existing sounds that you had, and undo things and redo things. So it was much more like a good rehearsal than a jam session.
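
The “minus amplitude” trick works because digital mixing is just adding samples. The sketch below, in Python with NumPy, illustrates that arithmetic with a simple sine-tone “note”; it is not Lansky’s Mix or Cmix code.

```python
# Undoing a note by mixing it in again with its amplitude negated.
import numpy as np

SR = 40000  # samples per second

def note(freq, dur, amp):
    """A stand-in 'note': a plain sine tone."""
    t = np.arange(int(SR * dur)) / SR
    return amp * np.sin(2 * np.pi * freq * t)

def mix_into(mix, sound, start):
    """Mixing is sample-by-sample addition at the note's start time."""
    i = int(SR * start)
    mix[i:i + len(sound)] += sound
    return mix

mix = np.zeros(2 * SR)
mix_into(mix, note(440, 0.5, 0.3), start=0.25)   # the wrong note
mix_into(mix, note(440, 0.5, -0.3), start=0.25)  # same note, minus amplitude: cancels it
mix_into(mix, note(392, 0.5, 0.3), start=0.25)   # the corrected note
```

Only the offending note is recomputed; the rest of the already-rendered mix is untouched, which is what made the correction so much cheaper than resynthesizing the whole piece.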

Aaron Nathans:

Lansky wrote the original 20-track Mix program to run on the IBM mainframe in the Computer Center on Prospect Street in 1978. It was originally written in the computer language Fortran, but he translated it into the language C, and added sound synthesis capabilities, to run on a PDP-11/34 in roughly 1982. He moved his program, now dubbed Cmix, onto the Music Department’s MicroVAX computers in 1985. Here’s Frances White, a Ph.D. music student who worked with Lansky in the late 1980s and early 1990s. Even today, she still uses both programs, Lansky’s Cmix and Vercoe’s CSound, in her own composition work. She described them as different, but complementary.

Frances White:

Cmix is… Just on its surface, it’s very easy to work with. And it’s very easy to put things together kind of very quickly. I think it’s actually… I mean, Paul designed it, of course, for his own work. But I think it’s actually a pretty intuitive way of working. For CSound, there’s… You have the concept of an orchestra and a score, which is not… In Cmix, you don’t really think of it that way. In CSound, so… And I guess that’s another reason why, to me, the idea of instrument building seems very CSoundy, because you do. You create this orchestra and it has a bunch of filters or oscillators, or you bring in some speech sounds or whatever. So you design this instrument and then you create a score that instrument’s going to play.

Frances White:

And the score will typically have notes and rhythms, but it will also have whatever waveform you want your oscillator to reference, or whatever. So… Yeah, there’s… And that’s kind of a very traditional musical way of thinking, because as composers that’s kind of how we’re brought up: you have instruments and they play a score. Really having the two of them is very nice, because you can sort of put your brain in different places. How you want to think of it, I mean, in a way, I guess if I had to generalize, I would say CSound is more traditional to me, in a way. And Cmix is a little more… Allows you or encourages you to be a little more experimental. I mean, that’s not entirely fair, but that’s a little the way I think of it.
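
The orchestra-and-score split White describes can be illustrated in a few lines of Python: an “instrument” is a function that turns note parameters into samples, and a “score” is a list of notes for it to play. This is a conceptual sketch only, not actual CSound or Cmix syntax.

```python
# A conceptual sketch of the orchestra/score model, not CSound syntax.
import numpy as np

SR = 40000

def instrument(freq, dur, amp, table):
    """One 'orchestra' instrument: an oscillator reading a stored waveform table."""
    phase = (np.arange(int(SR * dur)) * freq / SR) % 1.0
    return amp * np.interp(phase, np.linspace(0.0, 1.0, len(table)), table)

table = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 512))  # the waveform the oscillator references

score = [            # (start time, duration, frequency, amplitude)
    (0.0, 0.5, 261.6, 0.3),
    (0.5, 0.5, 329.6, 0.3),
    (1.0, 1.0, 392.0, 0.3),
]

out = np.zeros(2 * SR)
for start, dur, freq, amp in score:
    i = int(SR * start)
    voice = instrument(freq, dur, amp, table)
    out[i:i + len(voice)] += voice
```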

Aaron Nathans:

Experimental, perhaps, because the music Lansky was making at the time was moving closer to a style of music called musique concrète, a French term for utilizing the sounds of the outside world as part of music. You could use field recordings, or, as Lansky did in his 1992 work “Table’s Clear,” you could work with your kids to make sounds in the kitchen.

Aaron Nathans:

Dan Trueman.

Dan Trueman:

He invites you to listen however you want. He doesn’t come in… And this is actually using some of his kind of language. He used to talk about Beethoven. Beethoven would come grab you by the collar and say, “Listen to me, and listen to me this way.” And Paul’s music is… It’s this place and you go and find your own way in it. Starting, I think, less… Not so much the Six Fantasies, but starting in the eighties with these pieces like “Idle Chatter” and “Homebrew.” Where he just created these gorgeous places where they had a certain kind of gentleness to them. Where you didn’t feel like somebody was beating you over the head. Instead, somebody was creating this, this environment and to… And just to be clear, it’s a kind of environment that was impossible before computers. Impossible before Paul kind of invented the techniques for making those sound worlds. <<MUSIC>>

Dan Trueman:

But I mean, some of these aspects of his music are pervasive. Like the rhythmic qualities of his music, we hear them in a lot of music around these days. And some of that has to do with musical minimalism being more pervasive than it, than it used to be. But also computers being able to make rhythms very easily. That was something that Paul did… Made. He made that. And I remember some of the tools that he made with Cmix and RTCMix, which were tools that I used back in the early nineties. Where you could like write little lines of code to generate these rhythmic spaces that you hear in “Table’s Clear” and “Idle Chatter” and stuff like that. That just didn’t exist before.

Aaron Nathans:

Lansky has said that most of his work from the mid-1970s onward aims to create a virtual space within the loudspeakers, with sounds that have the illusion of having a physical source containing motion and energy. So, he says, it’s not exactly musique concrète, because, quote, “I want to create the illusion that someone is back there, banging, blowing, or beating something recognizable.” Ge Wang.

Ge Wang:

The music that we love, we could love, for example, conceptually. So a lot of computer music has strong conceptual components, strong technical components, a lot of meaning that we can kind of ascribe to a piece of music that we were like, we really love this for the idea of it. And then there’s music that we just maybe just like. Which is… Does… It… You just simply like it. It’s music you would listen to because there’s something that’s hard to put into words about it, or you just simply like it and you would just listen to it for the sheer joy of listening to it.

Ge Wang:

And I think for… When I first heard “Table’s Clear,” up to that point, I had heard a lot of other computer music that I think I conceptually really appreciated. But I think “Table’s Clear” was the first piece of computer music that I liked. And then eventually there are reasons I found why I love this piece, and why I love every piece on that album. And… But I think this piece will always be special to me because it was the first piece of computer music that I just simply liked.

Frances White:

And Paul was able to bring… I think computer music into a much more open and beautiful place. And kind of… He was… And I say shamelessly beautiful because back in the day, writing something that was tonal and beautiful and emotional, there was a time when that was really not looked well upon. And Paul really… I think he really sort of just broke down that door for computer music and brought it into this space where it could be this kind of beautiful often very tonal space.

Aaron Nathans:

Here’s Rebecca Fiebrink, a professor at the University of the Arts London, who received her Ph.D. in computer science from Princeton in 2011 and spent time on the faculty here.

Rebecca Fiebrink:

Personally, my experience of Paul and his music was… Number one, just… I don’t remember when it was that I got his first CD, but listening to it and going, wow, that’s really cool. And again, I think computer music entails these connotations for many people, including for me a few decades ago, of bleeps and blips. And it’s very mechanical and there’s not a human element to it. And it’s maybe not expressive in the same way. And Paul’s music is none of that. It’s so rich and human and expressive. And it grabs onto you.

Aaron Nathans:

Frances White.

Frances White:

He was a fantastic teacher and he was also a real supporter and a real advocate. And again, I have to stress that in those days for a young woman, it was not easy. And Princeton was not an easy place. And having his support that was just so meaningful to me. And musically, he had a really wonderful way of… He could always critique your music without making you feel criticized. And that’s a very special kind of ability as a teacher that he had. And he also… He was just like a great role model for doing computer music. He was sort of somebody to aspire to.

Aaron Nathans:

Here’s Frances White’s piece, “Walk Through Resonant Landscape, Number Two,” which she created in the early 1990s at Princeton using both CSound and Cmix, and which uses real-world sounds from the woods, recorded on cassette. <<MUSIC>>

Aaron Nathans:

White said that despite the composers no longer needing to go to the centralized Computer Center, the two MicroVAX computers in the Woolworth music building, affectionately called “Winnie” and “Maggie,” still fostered a community of composers. That’s because everyone was using the same digital-to-analog converter. And so everyone heard your work, as well as your mistakes. But around 1989, things changed again. That’s when the school brought in a batch of NeXT brand computers, which was a big change. They were individual workstations. And for the first time, the computer that you wrote your music on had an onboard digital-to-analog converter. You could listen to the music you were creating on headphones. That’s how we know computers to be today. And it’s no surprise that the founder of NeXT computers was Steve Jobs, the late CEO of Apple. Alicyn Warren, who spells Alicyn with a Y, teaches computer music at Indiana University.

Aaron Nathans:

She got connected to the field while a student at Columbia in the early 1980s. She studied there with renowned computer music composer Alice Shields, as well as Mark Zuckerman, who by this point had moved on to Columbia. She remembers sitting in the music library at Dodge Hall, putting a copy of “mild und leise” on the turntable, and listening to its entrancing first minute and a half over and over again.

Aaron Nathans:

After seeing Lansky speak at Columbia, she enrolled at Princeton and was one of Milton Babbitt’s last students before his retirement in 1984. After some time away from electronic music, she entered the MIDI studio in 1987. There she found a new digital world, with lots of the latest computer music technology, including the Yamaha DX7, which used synthesis technology developed at Stanford. Here’s Warren’s “Contraption,” created at Princeton and released in 1990. Listen for the 12-tone influences. <<MUSIC>>

Aaron Nathans:

In her dissertation essay at Princeton, Warren argued that Lansky’s work achieved, through sound alone, something that had previously been noticed only on stage and screen. Quote, “The impression of eavesdropping on another world, a different time and place.” Paul Lansky remained a prolific creative composer for years to come. Check out this brief clip from his hourlong computer opera, 1997’s “The Things She Carried,” again featuring his wife, Hannah MacKay. <<MUSIC>>

Aaron Nathans:

Meanwhile, Lansky’s software continued to evolve. RTCMix is another generation of the composition program, whose creators include Brad Garton, who received his doctorate in music from Princeton in 1989, as well as Dave Topper. RT stands for real time. This version, which built on Lansky’s work, was born in 1995. After 45 years on the Princeton faculty, including nine as music department chair, Lansky retired in 2014. He lives with Parkinson’s disease, which has slowed his ability to make music. But his analytical mind is sharp and his memories are clear. At a 2019 tribute concert for Lansky, which coincided with his 75th birthday, the performers were all players of traditional acoustic instruments. That reflected the fact that Lansky himself had returned to his musical roots by that point. His most recent musical works have included percussion, chamber music, and orchestral pieces.

Aaron Nathans:

In his 2009 keynote speech to the International Computer Music Conference, he said, quote, “It’s in my nature to take control and metaphorically design the cars I drive, which led me to write Cmix, RT, and a few other software tools that I used heavily for many years. This added a lot of time to the compositional process, but the fact remains that for about 40 years, I spent 90 percent of my composing energy working with computers, produced a large body of work of which I’m proud. And then, well into my sixties, found myself leaving this exciting arena for other pastures.” “The real genius of the computer,” he said in that speech, lies in its ability, quote, “to intervene and operate on many different levels and in many different ways, rather than using the computer to demonstrate technological music.” He said, “It should be used like any other instrument, in whatever way is musically appropriate, or perhaps not at all.”

Paul Lansky:

I think it’s good that things don’t require technology to justify themselves. I think music should be… The music should be good enough to make the difference. I stopped when I found myself writing the same piece, around 2005. And I found myself re-inventing the wheel that I had already invented. And I… Another reason I stopped was because notation programs got good enough to make it feel like you were working at a music desk, since around the early 2000s. I’ve had 50 pieces published by… Carl Fischer, and I’ve got a… I just issued my… I’ll give you a copy. I just issued my 17th CD on Bridge Records.

Aaron Nathans:

That’s great.

Paul Lansky:

So the past year or so it’s been slower, partially because of the pandemic, but also because of my Parkinson’s.

Aaron Nathans:

Right. Are you able to do any music now?

Paul Lansky:

I try, but it’s very frustrating.

Aaron Nathans:

Must be. Does the computer help at all?

Paul Lansky:

It makes things worse. (chuckles) Let me get you a copy of my new CD.

Aaron Nathans:

Okay. Thank you. I appreciate it.

Paul Lansky:

It was a pleasure.

Aaron Nathans:

In our next and final episode, we’ll look at the Princetonians moving computer music into the future, as well as those seeking new ways to marry engineering and the arts. This has been “Composers & Computers,” a production of the Princeton University School of Engineering and Applied Science. <<THEME MUSIC>>

Aaron Nathans:

I’m Aaron Nathans, your host and producer of this podcast. I conducted all the interviews. Our podcast assistant is Mirabelle Weinbach. Thanks to Dan Kearns for helping us out with audio engineering. Thanks to Dan Gallagher and the folks at the Mendel Music Library for collecting music for this podcast. Graphics are by Ashley Butera. Steve Schultz is the director of communications at Princeton Engineering. Thanks to Scott Lyon, and a big thanks to Paul Lansky for inviting me over to talk about his life and career.

Aaron Nathans:

This podcast is available on iTunes, Spotify, Google Podcasts, Stitcher, and other platforms. Show notes, including a listing of music heard on this episode, sources, and an audio recording of this podcast, are available at our website, engineering.princeton.edu. If you get a chance, please leave a review; it helps.

Aaron Nathans:

The views expressed on this podcast do not necessarily reflect those of Princeton University. Our next episode should be in your feed soon. Peace.

 

Related Departments

  • Computer Science

  • Electrical and Computer Engineering