Stanley Jordan ’81 in his Nassau Herald yearbook photo.

Stanley Jordan ’81 grew up in Silicon Valley, making circuits as a kid, watching his father become one of the world’s first professional computer programmers. But it wasn’t until Jordan arrived at Princeton that the young musician learned how to fuse his love of music with his fascination with technology.

In Episode 1 of the new season of “Composers & Computers,” we begin our deep dive into the technology-filled life story of Jordan, who went on to a career as an acclaimed jazz musician. We explore how he was initially drawn to Stanford to work with John Chowning, inventor of the FM synthesis technology behind the Yamaha digital keyboard, but through a twist of fate at the admissions office, found himself headed to Princeton instead.

Chowning himself told Jordan that it was a fortuitous outcome, and Jordan explains why this ended up being true, through meeting two mentors who would have a major effect on his musical path, Milton Babbitt and Paul Lansky.

We’ll look at how he developed his trademark two-hand percussive “touch technique” while he was a student at Princeton. And he’ll talk about his time at the Computer Center and the Engineering Quadrangle, including the time he dropped his punch cards on the floor.

Stanley Jordan shows his chops on multiple instruments in this 1978 photo by Seth Chandler, courtesy The Daily Princetonian.

List of music played in this episode:

“Asteroids” by Stanley Jordan

“Flying Home” by Stanley Jordan

“What’s Going On” (Marvin Gaye), recorded by Stanley Jordan

“Two-Part Invention in F Major,” by J.S. Bach, recorded by Wendy Carlos (from the album “Switched-On Bach”)

“Idle Chatter” by Paul Lansky

Transcript:

Stanley Jordan: I was always interested in art and science and blending art and science. I was totally interested in math, and I wanted to find a way to sort of mathematicalize some of my musical ideas because I thought these amazing sounds I was hearing in my head, if I learned more about math, I could find a way to generate those sounds, because conventional music theory doesn’t talk about these waves and flows and the kind of stuff that I was hearing.

Aaron Nathans: That’s Stanley Jordan, Princeton Class of 1981 and one of the most famous musicians ever to have graduated from this institution. You probably know Jordan as a jazz musician, famed for his two-hand “touch technique” of playing the electric guitar by tapping his fingers.

He’s best known for his album “Magic Touch,” which sat atop the Billboard jazz charts for 51 weeks, setting a record. What you probably don’t know is that during Jordan’s time at Princeton, he studied and made computer music. And in the years since, Jordan has stayed closely engaged with not just using technology, but creating it, in ways you might have heard, and in ways you’ll hear about first, right here. Stanley Jordan, one of the coolest cats alive, is also, as it turns out, a computer whiz.

From the School of Engineering and Applied Science at Princeton University, this is Composers & Computers, a podcast about the amazing things that can happen when artists and engineers collaborate. I’m Aaron Nathans.

Season Two, Episode One: Stanley Jordan Pulls Out All The Stops.

I had hoped to interview Stanley Jordan for the main podcast, but we weren’t able to connect in time. Fortunately, after Composers & Computers came out, we were able to get something on the calendar. So we’ll spend the bulk of this four-part season focusing on Jordan and his work.

In this first part of our interview, Jordan talks about how the touch technique emanated from the computer music work he was doing while at Princeton. Jordan discusses how a personal setback led to him attending Princeton in the first place. And we’ll discuss how Jordan learned to make computer programs serve his vision of the music. I wish I had some of Jordan’s original computer music from the time to play for you. Sadly, none of the computer music that Jordan made then survives today. But our conversation sparked something in Jordan. It was more than nostalgia. It was a recognition that he had some unfinished business.

While at Princeton, he had written a three-minute computer music piece called “Haydn Seek,” named after the composer Franz Joseph Haydn. But it wasn’t quite finished, and over time it was lost, except for some paper notes, and a creative vision that even 42 years later, he still retained in his mind. I was bowled over that a jazz master pretty much wrote a piece of music for this podcast.

You’ll hear it in episode three. But in this episode you’ll hear that his memories are strong and his experience with computers at Princeton still sits at the core of what he does today.

Here’s my conversation with Stanley Jordan.

Aaron Nathans: And I’m talking with Stanley Jordan, whose name I think I probably first saw at Frist on the wall, with the quote, “I’ll always have a gig.”

Stanley Jordan: It’s funny, I don’t even remember saying that, but it’s fine.

Aaron Nathans: What do you think you meant when you said that?

Stanley Jordan: I really don’t actually remember saying that and I probably shouldn’t be telling you this because then they might take it down.

But no, what I relate to is just that this is what I am, this is what I do. One way or another, I’m always going to be doing it. And Art Blakey told me I should be playing out somewhere every night. I don’t know if I’ve quite lived up to that yet, but you’re not going to tell a cat, don’t meow. You do what you do.

Aaron Nathans: When you think of yourself today, do you still think of yourself to a certain extent as a computer musician in any way, shape or form, or even just a musician who likes to wonk out on computers?

Stanley Jordan: Yeah, very much. Although I’d have to say since I started my professional career, I’ve had that sort of in the background. So it’s something that maybe some people know, but I haven’t really put it front and center that much. I’ve used some things.

I’ve generated some sounds on my records. For example, on my “Magic”… I mean, my “Cornucopia” album, we did “What’s Going On.”

Stylistic sounds, sort of granular synthesis kind of stuff, like clouds of sound grains, on the intro to that. And I didn’t make a big deal out of it, I just used my software to generate that. So I do a lot of stuff behind the scenes, but there’s definitely going to be more coming up in the next phase. When I went professional and went out into the world of music, I think I was maybe conscious of not wanting to confuse people and throw too many things at them. But I think I made my point. So I’m done with the conservative approach and I’m ready to pull out all the stops, which is a saying that a lot of people use; I don’t know how many people actually know what it means. It’s like you’re sitting at an organ, and you’re pulling out all the stops. I think people normally think of that as more kinesthetic, like, I’m going to get rid of the things that are stopping me. But what you’re actually really saying is that I’m going to make a big noise.

Aaron Nathans: I understand. So what shape will that take going forward? I know you’ve been talking about your new album. Does it have to do with studio stuff, or does it have to do with the live show?

Stanley Jordan: How so? Well, both, really. Okay. So one thing is I built this app.

It does 3D animation of the planets and allows you to fly around them in real time using a game controller. So it’s built on video game technology, and as the planetary motions occur in real time, it generates musical sound based on different aspects of the planetary movements. I’ve always been interested in animation since I was a little kid. My mom had a big library of books, and she let me do animated cartoons in the margins of the books, and she also had a big record collection. So I had a whole thing where I would make these cartoons that would take place over multiple books, and I put on different musical selections for each one. I was kind of into it. So it’s come full circle. This app combines my interest in animation and my interest in music, so it’s sort of a multimedia art form.

And there are other applications for a system like this, not just generating different kinds of music, electronic dance music and so forth, but also education and, I think, research. Before the pandemic, I started attending theoretical physics conferences, such as the Nordita conference in Stockholm. And I was especially interested in showing the physicists some of my work with sonification: taking data and rendering it as musical sound, not just as graphic images.

I think ever since the Gutenberg revolution and the invention of the movable type printing press, we’ve been sort of enamored with the possibilities of visual information, because suddenly we had this technology that could create and reproduce visual information like never before. And I think that’s had a cultural effect, and it’s caused our visual modality to become predominant. And the auditory has sort of taken a backseat. One of the things I’m trying to do is rebalance that, because now we’ve got all the equivalent technologies for auditory. I think the invention of digital synthesis and all that really is the icing on the cake. I mean, it really started with recording. But this is what attracted me to computer music to begin with: when I was a kid, I was hearing these cosmic, celestial orchestral sounds in my head, and I had no idea how to realize that. There was some music that was kind of going in that direction, like certain orchestral things by Stravinsky, Prokofiev, Varese, Bartok. There were certain things that were definitely in that zone.
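Sonification, as Jordan describes it, means mapping data onto sound parameters instead of onto a graph. As a rough illustration only, not Jordan’s actual app, whose internals he doesn’t detail here, a minimal Python sketch might map a data series onto pitch:

```python
import numpy as np

SAMPLE_RATE = 44100

def sonify(data, low_hz=220.0, high_hz=880.0, note_sec=0.25):
    """Map each data point linearly onto a frequency range and
    render it as a short sine tone."""
    lo, hi = min(data), max(data)
    t = np.linspace(0, note_sec, int(SAMPLE_RATE * note_sec), endpoint=False)
    chunks = []
    for x in data:
        frac = (x - lo) / (hi - lo) if hi != lo else 0.5
        freq = low_hz + frac * (high_hz - low_hz)
        chunks.append(0.3 * np.sin(2 * np.pi * freq * t))
    return np.concatenate(chunks)

# e.g., a made-up series of planetary distances becomes a rising melody
signal = sonify([1.0, 1.5, 5.2, 9.5, 19.2, 30.1])
```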

But the sounds I was hearing in my head were another thing. And when I first found out about computer music and digital synthesis, I realized, oh, my God, for the first time in human history, we have an instrument that can make any possible sound. I mean, for the composer, this is just absolutely mind blowing.

So one of the great wonderful things about attending Princeton and using the system is it really awakened that. It was something that I knew was possible. I had already been building circuits and kind of moving toward that as a kid. I grew up in Silicon Valley. I don’t think they were quite calling it that yet at the time, but there was a lot of technology going on, and I was part of that whole thing. My dad was actually a pioneer in the computer field because he was programming all the way back in the… He led the team that computerized the Social Security system.

And that was when we lived on the East Coast. And then we moved to the Bay Area in 1964. So we were really kind of in at the beginning of that whole tech revolution and stuff.

Aaron Nathans: How old were you at the time?

Stanley Jordan: I was four, so I turned five that summer when we moved to California. And my father became the first personnel manager at Hewlett Packard, because he moved from being a pure programmer to managing teams and then eventually management training. And so he was their first personnel manager, and he hired Roy Clay, who was the engineer who designed the first computer that Hewlett Packard sold. And that was an important computer because it ushered in the beginning of the minicomputer revolution. Suddenly you could have a computer on your desk. I mean, maybe it would be a big desk, or a relatively small to medium-sized company could have, I guess, not quite a desk, but one of those refrigerator-sized things.

Suddenly you didn’t have to just depend on some big company and timesharing on a mainframe. A company could have its own computer. So that was a revolution. So anyway, my father, being very much technically oriented, was a great source of knowledge. I was very interested in science. He taught me programming when I was really young. So I was actually from the first generation of kids who learned it early. When you speak a language from an early age, you can speak it without an accent. It’s the same idea. I grew up basically bilingual, speaking human and computer.

Those kinds of things always came really naturally for me. And also I was a real tinkerer. Like, I loved building circuits and especially oscillators and different circuits for processing audio.

I was on my way to building a keyboard guitar synthesizer, one that was logically laid out like a guitar but physically a keyboard, with a matrix of push buttons. And I knew enough about the electronics to build the circuitry, but I didn’t know enough about working with materials, wood and plastics and all that. And so on my way to developing that knowledge, I decided, well, let me see how closely I can approximate the musical possibilities on a normal guitar. My idea was, if I had this guitar with the matrix of push buttons, I could play guitar with one hand, I could play with independent hands, I could do all this keyboard stuff on guitar. Well, I decided, let me see if I can approximate that on a regular guitar. And that’s how I got into the touch technique. I didn’t really think it was going to amount to much, but it didn’t take long before I started to realize, hey, wait a minute, there’s a lot of possibilities here on a regular guitar. So I put down the electronic fingerboard idea for the time being. And then later on, other people invented the same thing, so I didn’t even have to build it. Now I have instruments like that created by other people. But that’s something I haven’t told very often: the touch technique actually originated in the idea for an electronic fingerboard. And I loved just building stuff, building circuits and making cool sounds. I was a big fan of Pink Floyd. I liked their early stuff, their really early stuff. I remember there was a TV special, an hour with Pink Floyd, and this was when they were still sort of more avant garde and experimental.

And the stuff they were doing with the synthesizers was just wonderful. And then a lot of people were doing more mainstream music, incorporating the synthesizer. “Switched-On Bach” was a game changer, because when I first heard that, I really felt like with these electronic instruments, you could actually hear Bach’s counterpoint more clearly than you could on the original instruments.

It was just brilliant, too, the way Carlos integrated that and orchestrated with the Moog synthesizer as an instrument. So that was a big milestone for me, hearing that. And then one day, I was taking a computer music class in high school, and we did a field trip to Stanford, to the Center for Computer Research in Music and Acoustics, and we met John Chowning, who was the head of that whole operation there. And they were showing us around. That’s when I learned about computer music. And I was saying, well, wait a minute, I don’t understand. I talked about flip-flops and shift registers and all this kind of stuff. He said, no, that’s not how we’re doing this. We are generating the sound as points of time along the waveform. And it just blew my mind. Like, oh, my God, this is what I’ve been searching for. Because, see, I was always interested in art and science and blending art and science. I was totally interested in math. And I wanted to find a way to sort of mathematicalize some of my musical ideas, because I thought these amazing sounds I was hearing in my head, if I learned more about math, I could find a way to generate those sounds, because conventional music theory doesn’t talk about these waves and flows and the kind of stuff that I was hearing.

So I said things like, well, what about, could you take, let’s say, any mathematical function and use that to control something like the volume of a sound or something? And he said, well, yeah, sure, you can do that. The problem with functional control is sometimes you get into issues with the sampling rate. You have to sample it enough times to have smooth waveforms and things like that. I was like, okay, I’m in. This is, like, really awesome.
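To make the idea concrete: the kind of functional control Jordan is asking about could look like the following minimal Python sketch, where an arbitrary mathematical function shapes the volume of a tone. The function and numbers here are illustrative, not from the CCRMA system.

```python
import numpy as np

SAMPLE_RATE = 44100
t = np.linspace(0, 2.0, SAMPLE_RATE * 2, endpoint=False)

# a 330 Hz tone whose volume is shaped directly by a mathematical
# function: a decaying exponential with a 5 Hz wobble on top
tone = np.sin(2 * np.pi * 330 * t)
envelope = np.exp(-3 * t) * (1 + 0.5 * np.sin(2 * np.pi * 5 * t))
sound = envelope * tone
```

Because the envelope here is computed at the full audio rate, it stays smooth; sampling a control function too sparsely is exactly the pitfall Chowning warned about.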

I went back there one time because there was some sort of a public demonstration of computer music. And again, it was Professor Chowning, and he was showing some of the things you could do. For example, they had the vocoder. If you think about the partials of a waveform as rotating vectors, you can take the phase information from one sound and the magnitude information from another, put them together, and do things like make a flute talk. So I heard that, and it was just mind blowing. And they showed the Shepard tone, the tone that just keeps on rising forever.

It’s an auditory illusion. Listen as long as you want. It just keeps going up. And I remember that some of the grown-ups were impressed because I figured out how that sound worked. I said, are you doing this and this and that? And he said, yeah, that’s how we’re doing it. And I remember there was someone who said, brilliant analysis. I mean, I was just a kid, but some things you’re just born to do. And for me, that was one of those things. It’s almost like I already knew that kind of stuff. And just, you know, the integration of the left and right brain, I think, is also a big element of it.
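The trick Jordan worked out is that a Shepard tone stacks components an octave apart and hides the seams with a fixed loudness curve. Here is a minimal Python sketch of the continuously rising version; all parameter values are illustrative:

```python
import numpy as np

SAMPLE_RATE = 44100

def shepard_glissando(seconds=10.0, octaves=7, base_hz=27.5):
    """Endlessly rising tone: octave-spaced sine components all glide
    upward while a fixed bell curve over pitch fades the top ones out
    and the bottom ones in, so the rise never arrives anywhere."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    out = np.zeros_like(t)
    for k in range(octaves):
        pos = (k + t / seconds) % octaves            # place in the octave stack
        freq = base_hz * 2.0 ** pos                  # glides up, then wraps around
        amp = np.exp(-0.5 * ((pos - octaves / 2) / (octaves / 6)) ** 2)
        phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE  # integrate frequency
        out += amp * np.sin(phase)
    return out / octaves
```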

Aaron Nathans: Stanley Jordan was all set to follow John Chowning to Stanford, which was one of the great hubs of computer music in the world. Jordan was local. It would have made a lot of sense. Chowning, as we’ve mentioned throughout the main podcast, is famous for having created the technology behind the Yamaha digital keyboard. But then fate intervened.

Stanley Jordan: So I knew for sure that I wanted to go to Stanford, and I wanted to do computer music at Stanford. Well, I applied to Stanford, and I didn’t get accepted. And that was really devastating news for me, because what am I going to do now? Well, it just so happens that I had also applied to … because my dad said, why don’t you apply someplace on the East Coast, like, I don’t know, like Princeton? And I didn’t know anything about it. I just knew that my father had gone there once for a seminar, because as part of his management training, Hewlett Packard sent him there to take the Kepner-Tregoe management seminar. So I knew it existed. That’s all I knew. And to my surprise, whoa, I got accepted. And there was a guy who came to talk with people who had been accepted, and basically try to close the deal and get them to go ahead and confirm.

I went to that, and I just said, okay, let me just get rid of this. Let me just ask him the one question: do they have computer music? He’s going to say no, and then I can leave. So I did. I said, well, do they have computer music there? And he said, you know, I don’t particularly believe in that stuff, but, yeah, they do have that. What? Oh, my God. So obviously, certain things are just in the stars. So I went to Dr. Chowning, and I said, well, I was disappointed that I didn’t get accepted to Stanford, but I have good news. I did get accepted to Princeton. And he said, kind of with a twinkle in his eye, you know, that might actually be even a better situation for you. And I had no idea. I had never heard of Milton Babbitt. I didn’t know about Paul Lansky. There was so much that I didn’t know yet.

But Chowning was right on so many levels.

Wow.

Aaron Nathans: You’re listening to the first episode of season two of Composers & Computers. We’re speaking with Stanley Jordan, the legendary jazz guitarist who received a Bachelor of Arts in music from Princeton in 1981. In the second half of this episode, Stanley Jordan will talk about coding music composition programs, and what happened when he dropped a stack of IBM punch cards on the floor.

Jeff Snyder: Hi, this is Jeff Snyder, the director of the Princeton Laptop Orchestra, also known as PLOrk. If you’d like to get a peek at what’s going on in electronic music at Princeton right now, come to our spring show March 28th at 8:00 p.m. in Taplin Auditorium. We’ve got guest artist Ezra Mash doing a live audio-controlled light installation. We’re doing a piece that features cross-modulated oscillators by Sam Pluta. And we’ve got a piece by Princeton grad student Liam Elliot that uses your own cell phones as its instrument.

We hope to see you there.

Aaron Nathans: One reason why Princeton was a better fit for Stanley Jordan had to do with the computer language in wide use here. At Stanford, they used LISP, which roughly stands for “list processing.” It’s a high-level programming language, still in use today for artificial intelligence applications. But at Princeton in the 1970s and early 1980s, they were using a different language: APL, which stands for “A Programming Language.” And that’s the language that Jordan prefers. They weren’t using it for music at Princeton, but it was all over campus for other purposes, which allowed Jordan to apply his creativity and bend that computer language to his musical will.

Stanley Jordan: When I started at Princeton, I started in ’77, and the second semester of my freshman year, they offered that course, Composition for Digital Computer. And by then, I had met Paul Lansky. He was one of the first people I sought out, because I went to the music department and said, I’m here to do computer music. How do I start? What do you know? And so I met Paul, and he realized, I’m sure, right away that this was someone who was born to do this.

And I had to get special permission to take the course because it was listed as a graduate level course, and I was a freshman, but I took the course, and the rest is history, really. And so, basically, okay, back then, this was before MIDI. This was a few years before MIDI.

Aaron Nathans: MIDI is short for musical instrument digital interface. It’s a protocol that lets computers, musical instruments, and other hardware communicate with each other, no matter whether they’re made by the same manufacturer or different ones. It entered the marketplace in 1982.
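The protocol itself is simple: a performance becomes a stream of small binary messages. For instance, the standard note-on message is three bytes, sketched here in Python:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a standard 3-byte MIDI note-on message:
    status byte (0x90 plus channel 0-15), note number (middle C = 60),
    and velocity (how hard the key was struck, 0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

print(note_on(0, 60, 100).hex())  # -> '903c64'
```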

Stanley Jordan: Even then, there were some advanced systems that had keyboards, but the Princeton system didn’t have any kind of a musical keyboard hooked up to the computer. Everything was done using programming.

Aaron Nathans: The computer languages Jordan was using at the time included some of the software we’ve covered in the main podcast: languages created by Max Mathews of Bell Labs in the early 1960s and improved upon by Princeton grad students.

Stanley Jordan: You had to go to the computer center and you had the punch cards and the physical punch card reader. And Paul Lansky, he was a virtuoso at the card reader. I mean, he played that thing like an instrument.

And yes, I did have the experience of dropping all my cards on the floor once. And each card represented a note.

And I guess I could have fed them in anyway, just to see what happened. It would jumble up my composition. But I’m just not all that aleatoric. It’s like, I’ve got the music in my head, and I want to realize the music that’s in my head. I don’t care about some random thing. So I started asking around, like, isn’t there an easier way to do this? And people said, well, there is an easier way. If you get on a terminal, you can create a text file where you type in the text of what’s going to be on the cards, and then you can just submit that as a batch job, and then you don’t have to deal with the cards anymore. So I started doing that.

Okay, it was a lot easier, but I wanted to do much more high-level things because basically the input to the system was this matrix of numbers.

Every row was a note, and all the columns were the different attributes of that note: the starting time, the duration, what’s the note, things like that. And so you’re down in the weeds. You’re looking at the individual parameters of each note. And I wanted to be able to do more high-level things. Like, I wanted to be able to say, take all the E’s and turn them into E-flats, and more high-level stuff like that. So I started asking around: is there a language that’s geared more for editing matrices of numbers? And someone said, well, APL is good for that. The only thing is that APL has a lot of subtleties. There’s a lot of different ways of doing things.

You know, there’s a bit of a learning curve, but that’s the one that you should check out. So I checked it out, and it was just like instant for me. There was no steep learning curve. I just started using it right away.
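Jordan’s actual tool was APL; as a rough modern analogue, NumPy shares APL’s whole-array style. A minimal sketch of the kind of score-matrix edit he describes, using a hypothetical column layout of (start, duration, MIDI pitch):

```python
import numpy as np

# one row per note: (start_beat, duration_beats, midi_pitch)
score = np.array([
    [0.0, 1.0, 64],   # E4
    [1.0, 1.0, 67],   # G4
    [2.0, 1.0, 76],   # E5
    [3.0, 1.0, 72],   # C5
])

# "take all the E's and turn them into E-flats": every pitch whose
# pitch class is E (4) drops a semitone, in one array expression
score[score[:, 2] % 12 == 4, 2] -= 1
```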

I remember I saw Howard Strauss, who was, I think, the director of the computer center. He was at one of the terminals, and he was making some cool graphics. And I said, how are you doing that? And he said, oh, I’m using this language called APL. And he was just able to go in and type in some code and make something happen; he’d change the code, and the image changed. And I said, okay, that’s it. That’s the language for me. What he’s doing with those images right now, I want to do that same thing for music.

So I learned APL, and basically what I wrote was a front end to the system. And this was a difference between us. Like, I really admired Paul Lansky, but his approach was really different. You know how in those sort of traditional computer music languages, the brilliant idea that Max Mathews came up with is that you have two main parts: you’ve got the orchestra and the score, and the orchestra is programs, and the score is data. So it’s a perfect mapping between music and computer science. And I was much more into the data, I was much more into the score, whereas Paul was much more into the orchestra. So he needed the Fortran, because he would write these elaborate programs where the orchestra, essentially what the system is, is a digital equivalent of a synthesizer. You have modules, like in an analog synthesizer, but they’re implemented digitally. So if you need another filter, you don’t have to go buy one. You just add more lines of code. And so that’s what Paul would do: he’d write these elaborate programs, and he’d create these really complex orchestras of sound. And then all he basically had to do is give it a little data that says, go, and then the whole thing just unfolds. Whereas for me, I wanted to have a simple orchestra, and then I wanted to whip up a whole bunch of data and send it that data. For things like granular synthesis, for example.
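The orchestra/score split Jordan describes is the heart of the Music-N family of languages. A minimal Python sketch of the idea, not the actual Princeton system: the “orchestra” is code (here, a single sine-oscillator instrument) and the “score” is pure data, one tuple per note:

```python
import numpy as np

SAMPLE_RATE = 44100

def sine_instrument(freq_hz, dur_sec, amp):
    """The entire 'orchestra' here: one module, a sine oscillator.
    A real Music-N orchestra chains many such digital modules."""
    t = np.linspace(0, dur_sec, int(SAMPLE_RATE * dur_sec), endpoint=False)
    return amp * np.sin(2 * np.pi * freq_hz * t)

# the 'score' is pure data: one (start_sec, dur_sec, freq_hz, amp) per note
score = [(0.0, 0.5, 261.63, 0.4),
         (0.5, 0.5, 329.63, 0.4),
         (1.0, 1.0, 392.00, 0.4)]

out = np.zeros(int(SAMPLE_RATE * 2.0))
for start, dur, freq, amp in score:
    i = int(SAMPLE_RATE * start)
    note = sine_instrument(freq, dur, amp)
    out[i:i + note.size] += note   # mix each note in at its start time
```

Lansky’s style put the complexity in the instrument code; Jordan’s put it in the score data, generating thousands of note rows programmatically.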

Aaron Nathans: Granular synthesis is an audio synthesis technique that involves splitting up sound into fine grains. Paul Lansky’s “Idle Chatter” pieces use this method.

Stanley Jordan: You know, it turns out, just like light can be thought of as either waves or particles, depending on what experiment you do, right? It’s the same thing with sound. Sound can be either waves or particles.

Everybody knows about the wave theory of sound. Fourier analysis, Fourier synthesis. Even musicians who have no background in science know about Fourier.

Aaron Nathans: You may not recognize the name Fourier analysis, but anyone who has spent any time in a music studio or playing around in GarageBand has some familiarity with this concept. It means breaking a sound down into the component waves that make it up.
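Here is that idea in a few lines of Python: build a sound from two sine waves, then recover their frequencies with a fast Fourier transform. The numbers are arbitrary:

```python
import numpy as np

SAMPLE_RATE = 8000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE      # one second of samples
# a sound made of two sine waves, 440 Hz and 660 Hz
sound = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

spectrum = np.abs(np.fft.rfft(sound))
peaks = np.argsort(spectrum)[-2:]             # the two strongest bins
print(np.sort(peaks))                         # -> [440 660] (bin index = Hz here)
```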

Stanley Jordan: That’s all the wave theory of sound. But the particle theory of sound, that’s where granular synthesis comes from. That’s a lot less well known. And an interesting thing is that the particle theory of sound is actually more suitable for musical composition. The problem with the wave theory is that, the way the mathematics of Fourier actually works, those sounds have to be infinite in time for the calculations to come out correct. Because anytime you start or stop a sound, you’re changing the frequency. Even just a pure sine wave, that’s a pure frequency; we know this from life. If I take a pure sine wave and all of a sudden I cut off that sound, there’s going to be a click.

And what that click is, is a splattering of all the frequencies. So these different parameters of sound are not completely distinct. If you start or stop a sound, you’re changing the frequency. Even just changing the loudness of a sound actually has a subtle effect on the frequency. The thing that makes granular synthesis so cool is that these grains of sound are complete in themselves. Every grain has a beginning and an end. And then you take those sounds and build them up to create more complex sounds. And that’s what was interesting to me. So I would have these simple orchestras generating simple waveforms, and then, using my APL code, I’d send them thousands of notes. And then I could use sort of a high-level stochastic control of those notes, so that I could do things like create waves and clouds and the kind of stuff that I had been hearing in my head.
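A minimal Python sketch of that approach, in the spirit of (but not reconstructing) Jordan’s APL front end: each grain is a short sine burst with its own smooth envelope, so it begins and ends without the click he describes, and a couple thousand stochastically scattered grains build up a cloud:

```python
import numpy as np

SAMPLE_RATE = 44100
rng = np.random.default_rng(0)

def grain(freq_hz, dur_sec):
    """One grain: a short sine burst under a Hann window,
    so it rises and falls smoothly with no click."""
    t = np.linspace(0, dur_sec, int(SAMPLE_RATE * dur_sec), endpoint=False)
    return np.hanning(t.size) * np.sin(2 * np.pi * freq_hz * t)

cloud = np.zeros(SAMPLE_RATE * 4)             # four seconds of output
for _ in range(2000):                         # stochastic control of the grains
    start = rng.integers(0, cloud.size - SAMPLE_RATE // 10)
    g = grain(rng.uniform(300, 1200), rng.uniform(0.02, 0.08))
    cloud[start:start + g.size] += 0.02 * g   # scatter each grain into the cloud
```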

I wrote this one piece called “Haydn Seek,” like Haydn, the composer. And it was from an assignment in a music class; I believe it was a music theory class with Jim Randall. One of the assignments was to take an existing piece, figure out what we liked about that piece, and then compose our own piece. It didn’t have to sound the same, just use what we liked from our model. So I took this piece by Haydn. I’d have to figure out what piece it was. I think it was the opening of a piano sonata.

And I took that, and I started with those notes, and I put them in the computer. And then it grew and grew and grew from there, until the end had this granular fanfare with hundreds of notes all coming into the final cadence. And all that stuff was generated with APL. Now, what’s interesting is the part of my system that I made as a front end to the music synthesis system.

It had a graphic input. So using the crosshairs on the cursor, I had something like a staff notation, like a grid notation, and I could enter notes graphically on that grid. And I’ve got some screenshots from that. So based on those screenshots, I might be able to reproduce parts of that composition. After college, I lived a sort of itinerant life of a struggling musician, and just a lot of stuff got lost through those years, including all my recordings from the Princeton days.

Aaron Nathans: This has been Composers & Computers, a production of the Princeton University School of Engineering and Applied Science. I’m Aaron Nathans, your host and producer of this podcast.

Thanks to Mirabelle Weinbach for the wave sounds. Thanks to Renata Kapilevich and the Princeton music department and the folks at the Mendel Music Library for their support of this podcast. Graphics are by Ashley Butera, Yoojin Cheong, and Neil Adelantar. Steve Schultz is the director of communications at Princeton Engineering. Thanks also to Scott Lyon. This podcast is available on Apple Podcasts, Spotify, Google, Amazon, and other platforms. Show notes, including a listing of music heard on this episode, sources, and an audio recording of this podcast, are available at our website, engineering.princeton.edu. If you get a chance, please leave a review. It helps. The views expressed on this podcast do not necessarily reflect those of Princeton University. Our next episode should be in your feed soon. Peace.
