Interview: Aaron Oppenheim On Laptop Based Improvisation: “Those sort of glitchy, clicking kind of error sounds”

Index

  1. Improvising with a laptop
  2. Advantages of sample based music
  3. Interacting with other musicians
  4. About Aaron’s Patch
  5. The San Francisco Bay Area’s experimental electronic music scene
  6. Electronic music and science
  7. Links

Improvising with a laptop computer

 

Diego: What do you think makes laptop-based improvisation different from improvisation with acoustic or electric instruments, or even synthesizers?

Aaron: I think the big difference with laptop improvisation is that with a computer system you can create any sound; it is the least limited sound palette. You can create any sound by pushing a button, as long as you have it available to you. Obviously with an acoustic instrument you are limited to what the instrument can do. There are extended techniques, which expand the sound palette, but it is still fairly limited: a violin sounds like a violin. If you tap it you get different sounds, but on the whole it sounds like a violin. Regular synthesizers are also much more limited in the sounds they can create. There is a very wide variety, especially among modular synthesizers, but unless you are using sampled sounds it is fairly limited. A synthesizer is more like a regular instrument, and you tend to have more automatically at your fingertips that you can do right away. A laptop, on the other hand, can do basically anything, but that is a problem I found: the lack of limitations makes it difficult to find a starting point. The fact that you can do anything can make it hard to decide what to do. If you have an unlimited number of options in front of you, you have to pare them down, otherwise you will never be able to make a decision. Most laptop improvisers make and program their own instrument. So laptop improvisation implies that, starting from an unlimited palette, you have to figure out what kind of thing you want to do. If you make your own program you sort of start from scratch and you build up from there.

 

So how I started doing it was, when I was at Mills [College] taking improvisation classes, I would come in with an idea of what I wanted to sound like and what I wanted to be able to do, and then I would try to play with people. At the beginning, I would want to do something and not be able to, for example if I wanted to make a really quick percussive sound. So I would go home that night and add that capability to my instrument. I kept modifying the instrument until I got to a point where I did not feel like I kept having to add things to it.

 

Diego: What kind of limits did you settle on?

Aaron: I have always been interested in having the computer sound like a computer. It can sound like anything: I could be playing violin samples, or synthesizer sounds, or anything I wanted to. But there are some sounds that you cannot get except from a digital system, those sort of glitchy, clicking kind of error sounds. That was a really important part of my aesthetic from the beginning; I was interested in exploring the sounds that only a computer could make. There are a lot of clicks and glitches and sounds that usually you are trying to avoid, because you do not want something to sound like a computer, but I do want it to sound like a computer, because otherwise what is the point of using a computer? I embrace that. The instrument I made mostly uses samples through granular synthesis, which is a technique that takes a sound, chops it up into tiny little pieces and reassembles them. Granulation is usually used to stretch out sound, layer things, or slow things down without changing the pitch, and a lot of things like that. The goal a lot of the time is to have this smooth sound where it is slowed down, the pitch is not changed and it just feels like it is stretched and extended. With traditional granulation you are not supposed to be able to hear the tiny little slices individually. But essentially what I made was a very bad granulator where you can hear the tiny little slices. It is not as smooth sounding, and that is where the artifacts come out, the clicking and error sounds.
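(For readers curious what granulation looks like in code, here is a minimal NumPy sketch of the conventional, smooth version of the technique Aaron is describing: chop a sound into short windowed grains and overlap-add them, reading through the source more slowly than the output advances so the sound stretches without changing pitch. It is only an illustration of the general idea; the function name and parameter values are invented and none of it comes from his Max patch.)

```python
import numpy as np

def granular_stretch(source, sr=44100, grain_ms=50, stretch=4.0, overlap=0.5):
    """Time-stretch `source` by overlap-adding short, windowed grains."""
    grain_len = int(sr * grain_ms / 1000)
    hop_out = int(grain_len * overlap)        # how far the output advances per grain
    hop_in = hop_out / stretch                # the source read position advances more slowly
    window = np.hanning(grain_len)            # smooth envelope on every grain
    out = np.zeros(int(len(source) * stretch) + grain_len)
    read, write = 0.0, 0
    while int(read) + grain_len < len(source) and write + grain_len < len(out):
        grain = source[int(read):int(read) + grain_len] * window
        out[write:write + grain_len] += grain # overlap-add the grains back together
        read += hop_in
        write += hop_out
    return out

# Example: stretch one second of noise to roughly four seconds.
sr = 44100
src = np.random.uniform(-1, 1, sr)
stretched = granular_stretch(src, sr=sr, stretch=4.0)
```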

 

 

Diego: How do you produce that effect, the clicking and the artifacts? Is it the length of the samples?

Aaron: A regular granular synthesis grain is 30–100 milliseconds long, so very short. With mine the slices are a little longer, so you can hear the individual ones. Also, with traditional granular synthesis you want to have an envelope on the sounds, so you have a 30-millisecond slice but each one fades in and fades out smoothly, whereas mine has harsh cuts. I think that is the main thing mine ends up doing, or how I get the artifacts. The glitchiness is the harsh cuts that come out of it: even if I have very short samples I can hear the beginning and the end of them very clearly, as opposed to the smooth fading in and out kind of effect.
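(The difference he is describing, a smooth envelope versus a hard cut on each grain, can be sketched in a few lines of NumPy. This is illustrative only; the grain lengths and names are assumptions, not values taken from his patch.)

```python
import numpy as np

sr = 44100
t = np.arange(int(sr * 0.3)) / sr
source = np.sin(2 * np.pi * 220 * t)          # any source material will do

def take_grain(src, start, length, hard_cut=False):
    grain = src[start:start + length].copy()
    if not hard_cut:
        grain *= np.hanning(length)           # fade each grain in and out: the smooth, traditional sound
    # with hard_cut=True the grain keeps its abrupt edges, which is where the
    # clicks and glitchy artifacts come from, especially with longer grains
    return grain

smooth_grain = take_grain(source, 1000, int(0.05 * sr))                   # ~50 ms, enveloped
glitchy_grain = take_grain(source, 1000, int(0.2 * sr), hard_cut=True)    # longer grain, hard edges
```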

 

Diego: How do you keep real-time control over the sound?

Aaron: That was one of the pretty difficult things that took a long time. I do not use any MIDI controllers or anything. Early on at Mills I made that decision, inspired by Ikue Mori's setup. She visited in my first or second week at Mills and talked about her stuff. She does not use controllers because it is just simpler and more portable. If you are carrying around a MIDI keyboard every time you play a gig, it is another thing to take with you, it is another thing to break; you can lose it, or you just forget to bring it or something like that. Then what do you do? You cannot play. So many people who do live computer music have their systems crash, their computers crash, something is not working right, things like that. The more peripherals you add, the more likely you are to have a technical issue. So I just decided not to have any peripherals and just use the laptop. My controls are all on the computer keyboard.

Real-time control was really just a matter of trial and error. Hitting a key will make a sound for the most part, or it will turn on some process or something like that. I had to make it so that at least some things did not require any setup: I just hit a key and something happens immediately.

The thing that ended up being a bigger problem was having to stop a sound really quickly. The ability to change really quickly is something that is really important to free improvisation. The most effective moments to me are when you have four people playing together, all free improvising, and you get that moment where everything changes in a second without planning it, without someone giving a downbeat, without anything, just by listening and intuition; that is such a beautiful thing when it happens. Early on I would have these processes running and sound coming out of my system, something would happen and I would want to change it really quickly but I could not, so I would hit stop and it would trail off and still have another five seconds of sound. I realized that was a big problem, so I had to get rid of that and make sure I can hit a key and change things, kill it or whatever, right away. It was really just making sure it was responsive to quickly making sounds and stopping making sounds, which sounds like a basic idea but it was not obvious to begin with.
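(As a rough sketch of that control principle, every key either starting or killing a sound immediately with no setup step, here is a toy Python version. The key bindings, names and the stand-in "voice" object are all invented for illustration; his real instrument is a Max patch controlled from the laptop keyboard.)

```python
class Voice:
    """Stand-in for a running sound process that can be stopped instantly."""
    def __init__(self, name):
        self.name = name
        print(f"{name}: sounding now")
    def kill(self):
        print(f"{self.name}: silent now")

running = {}

def start(name):
    running[name] = Voice(name)      # one keypress, sound starts immediately

def kill(name):
    voice = running.pop(name, None)
    if voice:
        voice.kill()                 # one keypress, sound stops immediately, no trailing off

KEY_BINDINGS = {
    "q": lambda: start("granulator 1"),
    "a": lambda: kill("granulator 1"),
    "w": lambda: start("granulator 2"),
    "s": lambda: kill("granulator 2"),
}

def on_key(key):
    action = KEY_BINDINGS.get(key)
    if action:
        action()

on_key("q")   # start a sound
on_key("a")   # kill it right away
```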

 

Diego: Yeah it took a while to figure out what was happening. I imagine that the learning curve for participating in improvisation was a little bit steeper for you because you had to go back to programming.

Aaron: Yeah, exactly. It took a while; the other thing about laptop improvisation is that you are building your instrument at the same time as you are learning to play it. When I was starting out I would add some functionality, but then I would have to learn to play it in a musical way. Not only was my system not very well thought out, well programmed or finished, I also was not good at playing it at the beginning. It took a while to develop some ability, to remember how to do things and to have everything available without having to think about it. It was important to make it easy to start making a sound so that I would not have to think about it; if you have to think about it before making sound, then you lose the intuition, the really quick responses and the kind of dialogue you can have when you have this immediate ability to do something. It was also a matter of just practicing, playing and learning the instrument that I made and its capabilities.

 

Diego: Did it ever happen that you made your program more complex than necessary?

Aaron: Yeah, there are definitely things that I added and then took away, but I do not remember what. On the whole, I think I tried not to do that too much and just added stuff when I felt like I needed it. A lot of the time, if there was a sound I wanted to make, I would try to do it without programming anything. If I wanted a specific sound, I did not have anything built into my patch that could play a C on demand, just because that is not what I was interested in, but sometimes you want to be able to do that. I could have made another element that would have allowed me to play pitched material really quickly, but I was more interested in trying to figure out how to do that with the tools I already had. When it started to feel like a real instrument was when I started to figure out how to use what I had already made to make totally different things that I had not planned on.

 

Diego: Are there aspects of the music that you leave to the computer?

Aaron: There is a lot of randomness. Some people are really into the artificial intelligence kind of thing where the computer makes a lot of decisions; there are decision trees, Markov chains and things like that. I just have certain elements randomized, but within a range that I will set.

With things like the granulators, the size of each slice of sound is randomized, but I can make it short, medium or long. If I say long it will be 1–2 seconds or so; if it is short it will be less than one hundred milliseconds; medium is somewhere in between. I can also make it dense, medium or sparse; those are basically the controls over the granulator, along with the pitch. There are random elements to everything, but I always instantiate the sound, I start the sound at least, so it is mostly under my control. The randomness is more to insert variety into the sound rather than just letting it do whatever it wants.
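(A small Python sketch of that kind of constrained randomness: the performer chooses a coarse range and the patch picks exact values inside it. The numbers below loosely follow the figures Aaron gives, long grains around 1–2 seconds and short ones under 100 milliseconds, but the ranges and names are otherwise invented.)

```python
import random

GRAIN_RANGES_MS = {"short": (10, 100), "medium": (100, 1000), "long": (1000, 2000)}
DENSITY_RANGES_MS = {"dense": (5, 50), "medium": (50, 300), "sparse": (300, 1500)}

def next_grain(size="medium", density="medium"):
    """Return a (grain_length_ms, gap_until_next_grain_ms) pair, randomized
    within the coarse ranges the performer selected."""
    lo, hi = GRAIN_RANGES_MS[size]
    length = random.uniform(lo, hi)
    lo, hi = DENSITY_RANGES_MS[density]
    gap = random.uniform(lo, hi)
    return length, gap

# The performer still starts the sound; the randomness only adds variety.
print(next_grain("long", "sparse"))
```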

Some people create these systems where the computer is more like another improviser they are playing with. Mine is not like that; I do not feel I am improvising with my computer as another agent. For me the computer is the instrument and I am playing it. Someone like George Lewis, with his Voyager system, which is an old artificial-intelligence-type system, made something that could be played by a person but would still be totally autonomous and listen to the performers as they play. He played trombone and he had a thing that would listen to him playing trombone and improvise, and he did not have to do anything; he just started the program and it would run. I have never been interested in that sort of thing, I think because it appeals to people who play an instrument, who can improvise on another instrument and want to be able to play a solo show but not have it just be their instrument. Since I do not play another instrument to any degree of competency, I do not need to play something and have my computer respond to me; I would rather just play the computer.

Diego: For you, what is the difference between a solo set and a collective improvisation?

Aaron: I use the exact same software, the same Max patch, for both. What ended up happening was that once I had a really satisfying improvisation system which allowed me to make the sounds I wanted in response to other people, I realized that it was also a very good system on the whole, so I started making solo pieces with it, but that was not the original intent. It started as something I could use to play with other people, but once I developed it, it was powerful enough that I could play by myself and create enough sound and variation. It is the exact same thing, I just play more. If I am playing with other people I will play a little less; I will be listening to them and responding to them, or ignoring them.

But my solo stuff is usually slower and focused on a single development rather than the whole lot of things, changes and all that stuff that you get with free improvisation. It was really just a happy accident that what I ended up with was a good multipurpose instrument, because that is not what I was trying to do.

 

Diego: What do your practice sessions look like?

Aaron: Definitely the most useful thing is always playing with other people, because that makes me work on my reflexes and makes sure everything is easily available to me. I do not practice specifically, but I will just sit down and play. I do not play scales or anything, but I will try something out, start with a sound, see what I can do with it, play with a single idea for a long time, things like that. I always record everything I do, absolutely everything. I have way too many recordings of stuff, because my patch just has a button that says record, so every time I want to play around with stuff I will just hit it.

 

Advantages of sample based music

 

Diego: It seems to me that you are more interested in working with sound objects or found sounds than in synthesizing or generating sounds purely with the computer.

Aaron: Yeah, partly that is just a matter of convenience, especially in improvisation. One of the things I have never really liked about modular synthesizer improvisation is that it takes a long time; generally they will patch something with patch cables and have a particular setup. Whatever they set up is usually not very malleable; it is difficult to quickly change to another sound, which is, again, a very effective thing you can do in group improvisation.

Partly it was just that if everything is rooted in samples, I can change the sample to something completely different and the sound will change completely even if it is the exact same process that is set up. It is playing back at the same pitch, with the same effects, everything else is exactly the same, but if you change from an accordion sample to street sounds it is going to sound completely different without changing any other element.

Originally I actually wanted to do more synthesis, because I like synthesis, and also when I was thinking about the aesthetics of electronic music I was like, "What is the point of using samples? It will just sound like something else. It can sound like another instrument, but I could always just get someone else to play that instrument. Why should my computer sound like a piano? It should sound like a computer, so I will do synthesis; it is an electronic instrument, not an acoustic one." In doing that I found my sound very limited, and it was hard to shift very quickly. Just changing the bottom of the chain to samples allowed me to switch really fast into something else. Then, by applying a lot of processes and the granulation, it still sounds very electronic; you would not confuse my stuff for an actual piano even if I am playing a piano sample, because I will do a lot of things to it and it does not sound anything like the original. But just the fact that I can change it quickly allows for that sort of quick juxtaposition and things like that.

 

Diego: Is the whole of your performance system built upon the idea of playing samples?

Aaron: Yeah, basically. It is sort of modular, so there are a bunch of different modules, but the root modules are the granulators that I made. There are three of those, so I can have three different samples being granulated at the same time, or three of the same sample with different effects or different things. Beyond that there are effects that I have; things can be routed into one another. The granulators can also record internally in the process, so I can route from one granulator into another one and then manipulate the sound that is already coming out.

 

Diego: Like chaining the granulators?

Aaron: Yeah, chain them together. Let us say one of them is playing stuff back at double speed, so it is up an octave, and then the next one takes that sound and plays it back up another octave.

I did a piece a little while ago, which I have performed a few times and which is on my SoundCloud, where I took speech samples and slowed them down a lot, to something like 10% speed, and then, using another of the granulators, I sped it back up to full speed. It was this weird effect: it sounded sort of synthesized but still voice-like, and indecipherable, because it was moving through the sound very slowly but at the right pitch. That was a nice effect.
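(One way to read that effect, sketched in NumPy: a first stage slows the speech to 10% speed by resampling, which also drops the pitch, and a second granular stage reads each grain ten times faster to bring the pitch back up while the output still crawls through the slowed material. This is only one plausible reconstruction of what he describes, not his actual patch, and the grain size and helper names are assumptions.)

```python
import numpy as np

def varispeed(signal, rate):
    """Read through `signal` at `rate` times speed (rate=0.1 -> ten times longer and lower)."""
    positions = np.arange(0, len(signal) - 1, rate)
    return np.interp(positions, np.arange(len(signal)), signal)

def granular_pitch_up(signal, factor, sr=44100, grain_ms=60):
    """Raise pitch by `factor` without changing duration: each grain is read
    `factor` times faster, but the grains still step through the signal in real time."""
    grain_len = int(sr * grain_ms / 1000)
    hop = grain_len // 2
    window = np.hanning(grain_len)
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - int(grain_len * factor) - 1, hop):
        fast = varispeed(signal[start:start + int(grain_len * factor)], factor)[:grain_len]
        out[start:start + len(fast)] += fast * window[:len(fast)]
    return out

sr = 44100
t = np.arange(sr) / sr
speech_like = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))  # stand-in for speech

slowed = varispeed(speech_like, 0.1)                     # 10% speed, pitch drops with it
restored_pitch = granular_pitch_up(slowed, 10.0, sr=sr)  # back near original pitch, still moving slowly
```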

 

Diego: You also have some pieces where you work with actual prerecorded music, am I right?

Aaron: Yeah. I never put a lot of thought into the samples that I use in my music. When I was starting out I just took stuff that was on my hard drive and threw it into the folder where my patch picks up samples. That was stuff I recorded myself in the studio, this accordion sound that I use a lot, and some other regular instruments. But a lot of it was just music I had on my hard drive that I tossed in there. Sometimes it is just a song that I like, and there is a moment that sounds really cool if I sample it, and it is not very recognizable because it is just a fraction of a second, or a single guitar note, or something.

I feel like it is sort of plunderphonic in a way, so a lot of the time I will build a piece around a tiny fraction of an existing song. Knowing the source gives me a feeling of some emotional attachment to the sound and affects the way I am going to make a piece around it, just because I know where it came from. A lot of the time what I am trying to do is capture a feeling of that moment that I really like in that song and make it ten minutes long or something, sort of extend that feeling out.

 

Diego: The possibility of focusing on something that is really small but feels full of content, I think that is really attractive. It also seems like this kind of music, even if it is very abstractly treated and processed, will still retain several of the qualities of the original. I think that people who listen to it, even if they are not into abstract or experimental music, will find something that helps them relate to it.

Aaron: Hopefully; that is always the hope, that you can appeal to someone who is not as much into that kind of thing. I am usually very interested in the timbral quality of an instrument or a sound, so it is both the emotional attachment I have to the song when I am doing that sort of thing, and usually also a nice sound that I like and really want to explore. Because it is such a small fraction and it gets stretched out, I think it avoids being overly referential. No one is going to hear it, recognize the song and decide they do not have to listen to the rest of it; it is not just about the reference at all.


Interacting with other musicians

 

Diego: When you are interacting with other musicians do you have samples or do you do live sampling?

Aaron: I do sometimes. I do not know, I am always back and forth on whether or not I like doing live sampling, but I do do it. Sometimes I do not like the way other people do it; I feel like it can be a kind of easy, cheap effect, and sometimes musicians do not like it if you sample them while they are playing. If they can tell you are doing that, it will confuse them. It has happened to me several times where I have sampled someone and shifted them or something, and if they can tell they are being sampled it will snap them out of what they are doing a lot of the time. If I am sampling live I will want to manipulate it so that it creates this responsive sound and it is not clear that it comes from them directly, because if it is just sort of echoing them, following along or whatever, it often really confuses them. Also, my feeling is that if I am just sampling them and affecting them, I am just an effect pedal. What is the point of me being there? I could just give them this thing and they could play through it and it would sound cool. It relies a little too much on what the other musicians are going to do.

Again, one of the things I do to avoid other equipment is just use the internal microphone on the laptop, which sounds terrible anyway; it picks up everything and you can get a lot of feedback and other things. It is not like I am doing a perfect recording of someone else and manipulating it in some fancy way; I am just kind of taking their sound and doing something with it, but there is also other stuff going on. The crappy microphone is its own process, distortion and filtering, just by being really bad.

 

Diego: What do you think is the optimal way for interacting with other musicians?

Aaron: One of the problems with computer musicians is that it can feel like the computer is doing everything and the human is superfluous, just pressing play or whatever. What I am going for is a clear sense that there is a human making the sound: I am playing this, I am listening to them and responding to them, and they can feel that response and that dialogue. Just having the same abilities as an acoustic instrument in terms of being able to play with other people is what I am going for, that versatility and responsiveness.

 

Aaron’s Patch

 

Diego: We have talked a little bit about your patch, but I would like to know what other processes you use besides the three granulators.

Aaron: I have a drum machine which just plays back drum samples when I hit a key, but I can also make a pattern and have it loop. There are a few delays I can use. Then some distortion and equalization and things like that, but it is mostly the granulators. However, everything kind of plugs into one another. Sometimes I will take the drum sound and plug that into a granulator in real time, so that when I play a drum beat you will hear a chopped-up version of that same drum beat, things like that. There are really not that many effects.

The delay I use a lot, both as a reverb and as an echo delay. It comes from a patch that I made a long time ago which creates twenty delay loops with randomized times within a certain range. I will have twenty delays between one hundred and two hundred milliseconds and you get those massive sounds; you can also manipulate that in real time and change the length of the delays. It is also kind of another instrument. The source is usually the bottom of the whole system, the granulators.
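(Here is a minimal offline NumPy approximation of that idea: twenty delay taps with times randomized inside a chosen range, summed with the dry signal. It illustrates why a narrow 100–200 millisecond range blurs into a reverb-like wash while longer times read as echoes; it is not his Max patch, and it leaves out the real-time control over the delay lengths that he mentions.)

```python
import numpy as np

def random_multitap_delay(signal, sr=44100, taps=20, min_ms=100, max_ms=200):
    delay_times = np.random.uniform(min_ms, max_ms, taps) / 1000.0   # one random time per tap
    out = np.zeros(len(signal) + int(sr * max_ms / 1000))
    out[:len(signal)] += signal                                      # dry signal
    for d in delay_times:
        offset = int(d * sr)
        out[offset:offset + len(signal)] += signal / taps            # each delayed copy, attenuated
    return out

sr = 44100
click = np.zeros(sr)
click[0] = 1.0                                                       # an impulse makes the taps audible
wash = random_multitap_delay(click, sr=sr)                           # a dense 100-200 ms cluster of echoes
```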


Diego: What do you recommend to someone who is starting on Max/MSP?

Aaron: It is really hard to get into with nothing. What I would say is that I found it a lot easier to get into once I really understood basic synthesis ideas. Playing with a modular synthesizer is actually really helpful for Max and for trying to understand what is going on: you have your oscillators and generators, you can plug things into one another, and so on. Max is built on that idea, but it is hard to see what is really happening if you do not understand the ideas behind it, and it is a lot easier to tell what is going on when you have physical things in front of you that you are playing with. You can turn a dial and hear the pitch go up and down, plug that into another oscillator so you are modulating the frequency of that one with the first one, turn the dial and hear the way the timbre changes. That is all stuff you can do in Max, but you kind of have to know how to do it first, how to set up that system; you cannot just plug something into something else, because that does not always do anything.
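(The dial-turning experiment he describes, one oscillator modulating the frequency of another, written out as a few lines of NumPy instead of a modular synth or a Max patch. The specific frequencies and depth are arbitrary example values; change mod_freq or mod_depth and the timbre changes, which is the point.)

```python
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr          # two seconds

carrier_freq = 220.0                # the oscillator you hear
mod_freq = 110.0                    # the oscillator doing the modulating
mod_depth = 150.0                   # how far the carrier frequency is pushed, in Hz

modulator = np.sin(2 * np.pi * mod_freq * t)
# integrate the instantaneous frequency to get the carrier's phase
phase = 2 * np.pi * np.cumsum(carrier_freq + mod_depth * modulator) / sr
fm_tone = np.sin(phase)             # "turning the dial" on mod_depth or mod_freq changes the timbre
```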

 

The San Francisco Bay Area’s experimental electronic music scene

Diego: How would you describe the experimental electronic music scene here and what trends do you identify?

Aaron: There are two major streams in the experimental electronic music scene here. There is the noise music, which tends to be younger punk kind of kids along with people who have been doing it for a long time, and then there are the Mills people, who all do a similar kind of thing because we all studied with the same people, John Bischoff and Chris Brown, who are two of the professors. There are a lot of people who do laptop stuff, and then there are people who make their own gear, which is another thing that happened at Mills: people who learn how to make their own synthesizers, physical effects or circuitry. Then there are the people who just buy a bunch of pedals, plug them in and make noise with them. Outside of the noise scene, experimental electronic music is not a very big area here, which is kind of surprising; it is not as big as the instrumental improvisers, which is what I see the most here. There are a lot of people who use computers primarily as effects for the other instruments they are playing; that is probably more common than people who are just playing laptop, and there are a lot of people who are just doing modular synths, effects boxes and things like that.

 

Electronic music and science

 

Diego: I was intrigued by one of your tracks, the one in which you are on an airplane playing with sine waves, and the lady asked you whether it was music or science.

Aaron: That is an old one.

Diego: And then you said “a little bit of both”.

Aaron: That was a funny piece. That was when I took Pauline Oliveros's class at Mills. The assignment was to perform a piece in an unusual place; the earlier assignments were to make a piece with an unusual melody or an unusual something or other. That weekend I was flying to Vancouver, so I decided to try something on the airplane. I just had my laptop with me and ended up doing this sort of Alvin Lucier type thing where I played sine waves out of my computer speakers, roughly tuned but slightly out of tune with the engine, which was at 300 hertz or something, and faded them in and out. I really liked that piece; it was just a nice moment. I stuck my Zoom on top of my computer so that it picked up a good amount of the speaker sound, because the speakers on these things are not very loud and the plane was really loud. I could hear it, but mostly what you could hear was this beating against the engine noise; you could not really hear the sine tones themselves. Something kind of happened, but it was not really obvious that anything happened if you were not paying attention to it, which was really cool. I think that comes across in the recording, where it is really subtle. That piece turned out really well; there were people behind me talking in German about 30 Rock and stuff like that. Right at the end the woman next to me asked me what I was doing, because she could see that I had a Max patch open and it was this weird thing and she was wondering what it was.
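(The acoustic effect he is describing, a sine tone beating against a steady engine drone, is easy to sketch: two tones a few hertz apart produce slow amplitude swells at the difference frequency. The exact numbers below are illustrative; he only recalls the engine sitting around 300 hertz.)

```python
import numpy as np

sr = 44100
t = np.arange(sr * 5) / sr                       # five seconds

engine = 0.8 * np.sin(2 * np.pi * 300.0 * t)     # stand-in for the engine drone
laptop = 0.2 * np.sin(2 * np.pi * 303.0 * t)     # slightly out-of-tune sine from the laptop speakers

mix = engine + laptop                            # the level swells three times per second: the beating
```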

 

Diego: Do you think there is a relationship between the kind of music you do and science?

Aaron: Yeah, I think so, to some degree. I have internalized a lot of ideas from psychoacoustics and the way we perceive sound through Alvin Lucier, who I think was a huge influence on everyone. Knowing how to get these psychoacoustic effects is kind of related to science; knowing how sound waves work, how synthesis works, and internalizing those ideas does, I think, make it a little bit related to science. I am not just playing something and being like, "Oh, I like that." I can imagine a sound and then figure out how to create it purely through computer systems.

It is definitely related to science. That was always a criticism people made of Alvin Lucier, that it was not really music, it was just science experiments or something, which I thought was really stupid. But it is true that in his stuff especially there is this element that is so minimal, and his music is so clearly process based. He has a very musical sense and he knows what he is doing. He came to Mills one year and I spent quite a bit of time with him; he thinks musically, he does not think in terms of science. He does these literally experimental pieces and process pieces because he thinks they will have a musical effect, not because he thinks they will be cool; he thinks they will sound nice, interesting. There is still that element of science, though, and you have to understand how sound works to be able to do that sort of thing.

Links

Website
Bandcamp

Other projects

Soundcloud
Gifproblem on Soundcloud

Contact

Email: aaronop@gmail.com
