
        Fanfare Interview with Colin Clarke (2022)

A Conversation Between Composer Jerry Gerber and Colin Clarke

 

It has been fascinating to get to know Jerry Gerber’s unique music via the discs Earth Music (which includes his Symphony No. 10) and Cosmic Consciousness (which includes the Eighth Symphony); both discs are on the Ottava label. A feature article, which found Gerber in conversation with Robert Schulslaper, appeared in Fanfare 45:4 and discussed in some detail Gerber’s unique mode of composition: he writes for “virtual orchestra” via computer. Inevitably, this was to be the starting point for our discussion.

 

CC: Looking at your decision to work in the way you do, at what point in your development did you decide to cross over to non-human performers? And was there a particular catalytic event, or was it a slowly-dawning realization? Or did it grow naturally out of your film music or, perhaps in particular, your music for computer games?

JG: I started playing around with tape recorders when I was about 11 years old, only a few years after beginning music lessons on the accordion.  I’ve always been kind of a tech geek. In high school I studied photography seriously and worked for three years in a photo lab developing color film. But from the time I started playing music up until my early 30s, there was no digital technology that could create a musical “performance” without live players. There were mechanical means of creating a musical performance long before that. Wikipedia documents that the first drum machine was invented in 1206 by Al-Jazari, an Arab engineer. Numerous other types of mechanical sequencers also appeared between the 15th and 19th centuries, but it wasn’t until around the 1940s-1960s when analog electronic sequencers began to be invented. MIDI (Musical Instrument Digital Interface) sequencers didn’t become available until the early 1980s. MIDI data is a computer language that defines a note’s frequency, velocity (amplitude), when in time the note starts and ends, how that note is attacked and released and other information that includes tuning, pitch-bend and additional parameters that can be user-defined through what’s known as MIDI controllers. For myself, it was a gradual process of exploration until one day in 1985 when Gary Leuenberger, who owned a piano store in San Francisco, programmed the entire first movement of Bach’s Brandenburg Concerto No. 5 in D-Major on the Yamaha QX-1 digital sequencer, which sent the MIDI data to eight different synthesizer modules. While listening to Bach’s music, an excitement and clarity came to me and I knew with certainty this was the musical medium I wanted to work in.
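[For readers curious what “MIDI data” looks like in practice, a minimal Python sketch follows. It is purely illustrative and does not reproduce any particular sequencer’s file format; the names are invented for the example. It simply shows the kind of information a note event carries: pitch, velocity, timing and controller values.]

    from dataclasses import dataclass, field

    @dataclass
    class NoteEvent:
        """One note, roughly as a MIDI sequencer might store it (illustrative only)."""
        pitch: int            # MIDI note number, 0-127 (60 = middle C)
        velocity: int         # attack velocity, 0-127 (roughly, loudness of the attack)
        start_tick: int       # when the note begins, in sequencer ticks
        duration_ticks: int   # how long it sounds before the note-off
        channel: int = 0      # MIDI channel, 0-15
        controllers: dict = field(default_factory=dict)  # e.g. {1: 40} = mod wheel

    # A middle-C quarter note (at 480 ticks per quarter), mezzo-forte,
    # with a touch of modulation:
    note = NoteEvent(pitch=60, velocity=80, start_tick=0,
                     duration_ticks=480, controllers={1: 40})
    print(note)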

 

You say one of the qualities you value deeply about music is its ability to induce euphoria. Do you find any contradiction in the fact you’re using non-human elements/performers to induce these states?

I find most things about myself and other human beings a contradiction. I don’t see how creatures that evolved from cosmic dust and primordial animal life and who’ve acquired language, art, ethics, science, technology and institutions that outlive our individual lives cannot be fraught with contradiction. Though I am using a machine to create music, is not a trumpet player or a cellist doing the same thing? We’re not going to stumble upon a trumpet or a cello while walking in the woods; these instruments do not grow from the soil in the natural world, but are invented, built and improved upon by humans. By that definition, are they too not machines? It is the human player who is gifted with musicality that feels and inspires the euphoria; it is a state of mind within the musician and the listener, not within the physical instrument. As I hum along and feel the music I am composing, I am conveying expression, energy, meaning and imagination through the instrument I am working with. As a painter projects his soulful artistic impulses onto a canvas, or a pianist transmits her soulful musicality into the piano, a composer working with computers is doing likewise. The medium is different, the humanity the same. Because the computer can reproduce musical timing that is uncannily accurate, there’s an old joke among electronic musicians that goes, “Acoustic musicians spend their entire career learning to play on the beat, while electronic musicians spend their entire career learning to program off the beat!”

 

What are the creative challenges of working in this way? And what are the rewards?

The usual challenges for any artistic creativity are of course present: trying to stay healthy, maintaining attitudes conducive to living a creative life, embracing the necessary solitude and time to work, etc. In addition, there are the technical challenges of maintaining an electronic music studio and ensuring that everything is working as it should. This requires facility with computers, software and audio signal paths, and troubleshooting when a technical problem arises.

 

You inevitably will have a vast amount of control over expression etc.?

It’s certainly liberating to know that the right notes are going to be heard and that I can edit a passage to my heart’s content so that the dynamics, phrasing, tempos, attacks and releases of notes and articulation are going to be what I am imagining. It’s also, to put it plainly, a lot of fun. We don’t usually say “musicians work music”; we say “musicians play music”. That element of play is crucial to the artistic and creative process; it’s like losing yourself in a childlike playfulness that is so utterly absorbing and enjoyable. Of course it is work too; it requires diligence, conscientiousness, discipline and practice, but because love is also present, the union of work and play feels seamless and complete.

 

That’s a brilliant point about the choice of the verb “play”! I wonder, can we work our way through the equipment you use (and list)? Firstly, the concept of a sound library. I remember these from my days at the MCPS; they were libraries of certain types of music that were licensed at a particular rate, if I remember correctly … Here you have used three sound libraries – Vienna Instruments, Symphonic Cube (or is that just one?) and Requiem Professional. Can you tell me more about how this works?

The term sound library takes on a different meaning than the one you mentioned. In the 1950s through the 1980s a sound library was a place where a film or TV producer, or a producer of advertisements, could go to select pre-recorded music to use for a production. This was much less expensive than hiring a composer and an ensemble or orchestra to write and record an original score. A sound library, better known today as a sample library, is not a set of recordings of music itself, but rather of individual notes and techniques. For example, let’s say we want to create an orchestral sample library. The producers will have each instrument (or group of instruments, say three trumpet players) come into the studio and play every note that instrument can play, at multiple dynamic levels (p, mp, mf, f, etc.) and in many playing styles. A violinist will actually record thousands of samples: every note the violin is capable of playing is recorded as a sample, at four or more dynamic levels, in many playing styles such as pizzicato, marcato, staccato, legato, harmonics, col legno, etc. If the violinist, and every other player in the orchestra, plays each note with feeling, with a sense of purpose and expression, we end up with hundreds of thousands of usable samples (my current sample library, the Vienna Symphonic Library Orchestral Cube, consists of about 764,000 samples). As a music producer, I can program the computer to play whatever note I need, in whatever playing style and dynamic level the particular musical passage requires. Computer timing works in microseconds and milliseconds, so changing playing styles, dynamics and attack and release times can be done without losing even a fraction of a beat, although in some cases we have to account for a tiny time lag and make adjustments, similarly to how a string player needs a bit of time to go from pizzicato to bowing.

The Requiem Professional library is a vocal library, similar to an orchestral library except that instead of instrument samples the samples are of humans singing, both solo and in choirs. The principle is the same: the composer chooses which sample-set works best with a given musical phrase or passage. I’ll say a little more about vocal samples, and why I use real singers, below.
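[A hypothetical sketch of the idea, in Python: a sampler can be thought of as a large lookup table keyed by instrument, pitch, articulation and dynamic, from which the right recording is pulled for each note. The file names and the tiny table below are invented for illustration; a real library holds hundreds of thousands of samples organized along these lines.]

    # Invented file names and a tiny table, for illustration only.
    SAMPLES = {
        ("violin", 64, "pizzicato", "mf"): "violin_E4_pizz_mf.wav",
        ("violin", 64, "legato",    "p"):  "violin_E4_leg_p.wav",
        ("trumpet", 70, "staccato", "f"):  "trumpet_Bb4_stac_f.wav",
    }

    def choose_sample(instrument, midi_pitch, articulation, dynamic):
        """Return the recording for a requested note, or None if it was never sampled."""
        return SAMPLES.get((instrument, midi_pitch, articulation, dynamic))

    print(choose_sample("violin", 64, "pizzicato", "mf"))   # violin_E4_pizz_mf.wav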

 

And how does that interact with “Soft Synths” and Samplers (you list Kontakt, Massive, Tera, Rapture and FM8)? And can you explain the concept of a “Soft Synth” please?

In the 1950s a sound synthesizer could take up an entire room with electronics, vacuum tubes and cables. In the 1970s the first synthesizers attached to musical keyboards became commercially available. The first commercially available digital synthesizer with MIDI ports was manufactured by Yamaha in 1983, using FM synthesis (frequency modulation synthesis) developed by John Chowning at Stanford University. By the 1990s, synthesizers were being developed that were actually software programs running on a computer under the operating system, as all software does. These “softsynths” work as musical instruments in conjunction with a DAW, a digital audio workstation, where the DAW sends MIDI data to the softsynth just as it sends MIDI data to a hardware synthesizer with MIDI ports. Softsynths are incredibly flexible musical instruments, capable of an extraordinarily wide range of timbres. They are complex instruments, and it takes time to learn how to use one. But once you’ve learned one, it becomes easier to learn others, because there are many similarities in how they process sound. A softsynth consists of oscillators that produce tones; filters that control harmonic content; envelopes that shape how a sound begins, what happens during its duration, and how it ends; and numerous other components, including arpeggiators, reverb, delay and other signal processors. There are now people who earn their living programming software synthesizers in wonderfully creative ways. In my music I will often use a timbre created by another musician, as well as sometimes programming a timbre from scratch.
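[To make the oscillator-and-envelope idea concrete, here is a toy single-voice “softsynth” in Python, assuming NumPy is installed: one sine oscillator shaped by a simple attack/release envelope, written out as a WAV file. Real softsynths add filters, multiple oscillators, LFOs and effects; this is only a sketch of the principle, not any actual instrument.]

    import wave
    import numpy as np

    SR = 44100  # sample rate in Hz

    def render_note(freq=440.0, dur=1.0, attack=0.05, release=0.3):
        """One oscillator plus a linear attack/release envelope."""
        t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
        tone = np.sin(2 * np.pi * freq * t)        # the oscillator
        env = np.ones_like(t)                      # the envelope
        a, r = int(SR * attack), int(SR * release)
        env[:a] = np.linspace(0.0, 1.0, a)         # fade in (attack)
        env[-r:] = np.linspace(1.0, 0.0, r)        # fade out (release)
        return tone * env * 0.5                    # leave some headroom

    signal = render_note(freq=261.63)              # middle C
    with wave.open("note.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)                          # 16-bit samples
        f.setframerate(SR)
        f.writeframes((signal * 32767).astype(np.int16).tobytes())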

 

And I guess the mixing (via Yamaha DM2000) is seen as a vital part of the compositional process itself?

I retired the Yamaha DM2000 mixing console in 2017 after using it for 14 years. Audio and digital technology have advanced to the point that I can accomplish the same tasks with equal sound quality using a small rack mounted unit (the MOTU 1248 audio interface) that I could with a large mixing board like the DM2000. With the exception of computer screens and musical keyboards, smaller is usually better.

 

Can you explain the function of signal processing (Yamaha SPX2000 and Ozone 8 Advanced) for the tech newbies out there?

In music production, signal processing refers to what we do with the music once it is recorded. If the timpani are a bit too boomy, before committing the score to a digital audio file we can use equalization to reduce the amplitude of the low frequencies without affecting the other instruments’ low frequencies. Compression is another example of signal processing, where we shrink the dynamic range a bit so that the soft parts are not too soft and the loud parts not too loud. There’s also multi-band compression, which is used, for example, when the strings are too shrill at higher volumes but sound fine in the soft parts. We don’t want to use equalization (EQ), because that would alter the characteristics of the sound at all volumes; instead, we use multi-band compression so that if the music gets too loud at the higher frequencies, we lower the string volume at the offending frequencies only. Another example of signal processing is reverberation, which simulates the characteristics of a hall, or room, or any other environment that music is played in. Every environment has specific characteristics in regard to how it reflects and absorbs sound waves. Some rooms are very dead; there’s almost no effect on the sound. A bathroom or kitchen is an example of a very live sound environment; the metal and glass surfaces have a definite impact on how sound waves are reflected. Reverberation can simulate a great concert hall’s characteristics, which adds to the sense that the music is happening in a real “space” rather than in artificial isolation.

Ozone is a mastering program that contains these and other modules for processing the sound before it gets committed to a final master.  The SPX2000 is primarily a reverberation and delay hardware processor that can simulate reverb and delay effects.
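[For the technically inclined, the “shrink the dynamic range” step can be sketched in a few lines of Python (again assuming NumPy). This is a bare-bones, single-band compressor with no attack or release smoothing, and the threshold and ratio values are arbitrary; it illustrates the principle only, not any particular plug-in.]

    import numpy as np

    def compress(signal, threshold_db=-12.0, ratio=4.0):
        """Pull levels above the threshold back toward it by the given ratio."""
        eps = 1e-12                                        # avoid log(0)
        level_db = 20 * np.log10(np.abs(signal) + eps)     # per-sample level in dB
        over = np.maximum(level_db - threshold_db, 0.0)    # amount above threshold
        gain_db = -over * (1.0 - 1.0 / ratio)              # gain reduction to apply
        return signal * (10 ** (gain_db / 20))

    loud_and_soft = np.array([0.9, 0.05, -0.7, 0.01])
    print(compress(loud_and_soft))   # peaks pulled down, quiet samples nearly untouched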

 

Even when it comes to the equipment, there are artistic choices to be made, would you agree? (I imagine this is part of the “creative challenges” I refer to above?) For example, I see your website has a long list of equipment you use, so why these particular programs for this particular music?

Yes, I do agree. There is software to record audio and MIDI data that does not contain a notation view, also known as staff view. This means that the musician cannot see or edit the music in standard music notation; for a classically-trained composer, such software would be nearly useless. Most trained composers understand the incredible value of music notation, a symbolic musical language that has been evolving for over 1,000 years. Without it, the art of counterpoint and orchestration could never have developed; no symphony, string quartet or piano sonata could have been created without music notation. So in choosing a DAW, I use one that can display music in notated form. Luckily, there are about five or six to choose from that now have this capability. The one I use exclusively is called Cakewalk; it’s been around since 1987 or so and continues to get more sophisticated and powerful every year.

Another example is loudspeakers. The monitors one uses to mix and master music are probably the most critical piece of equipment in the studio. Great monitors are expensive, but they reveal everything that is in the sound. While a lesser monitor might emphasize certain frequencies to make the music sound more impressive, the best studio monitors do not do that. Instead, their purpose is to reveal as truthfully as possible what is actually happening in the sound. Only by being able to hear a problem can the engineer fix it. We can’t fix what we can’t hear. The monitors I use are a pair of Adam S3H powered speakers.

 

What is your compositional process, or does it vary? Do you actually compose a score as if for traditional, acoustic, human-operated performers and then transcribe/program that via computer for electronic orchestra? Or is it done directly to the electronica?

Often I get ideas sitting at the piano and improvising. If the same idea keeps spontaneously appearing, eventually it may very well end up in a composition. Other times I sit at the computer screen, mouse in hand, and pop notes onto the staff. That also may be the start of a new composition. I’ve found there’s no one right way to begin a piece: whatever works. Results are more important to me than procedures. Sometimes a melodic idea or motive happens first; sometimes it’s a chord or a harmonic progression. Other times a composition begins because I am listening to a timbre that moves me. Usually when I start a new piece I have some idea of what I am writing, i.e., whether it’s to become a short electronic or piano piece, the first movement of a symphony or concerto, or a song. After a short beginning I usually know if the piece is going to “fly”, meaning I am committed to working on it until it is finished. If it’s not growing on me I will abandon the idea and start again with something else. About 80% of the time I commit to the new work; about 20% of the time I drop it and start with a different approach.

 

And to extend that—do you find the way that you yourself internally hear the music you compose has modified? Do you hear it as performed in the first instance by an electronic orchestra?

Unlike many musicians who use the same or similar technology, I actually write with and for the medium I have chosen. I don’t create “mock-ups” with the intent to transfer what I write to live players (with the exception of singers and soloists). I hear the music in my mind, and then make the effort to create what I hear in the electronic medium. Sometimes I come up with an idea spontaneously while playing the piano or synthesizer keyboard, something I don’t hear in my mind first; it grows on me and I proceed to sequence and compose the piece. In other words, I am hearing what I am producing while I am composing it. That’s one of the great advantages of this medium. When I was composing in the 1970s, all I had was paper, pencil, piano, metronome and a desk. If I were writing for an ensemble, I’d sit at my desk and orchestrate the music. But of course I could not physically hear the orchestration until I could get players together to play it. The time between composing and actually hearing it played could be days, weeks or months. But with the computer music studio, I can hear what I am writing the moment I write it. I love that instant feedback because it helps avoid what Aaron Copland called the “rude awakening of miscalculation”—when a composer gets to the rehearsal and realizes what they’ve written does not sound as expected. If something doesn’t work as I expect it to, I know immediately and rework the idea. This really helps to hone one’s composition and orchestration skills. Joseph Haydn once wrote that he could never have grown as a composer had he not had an orchestra show up every day to work with him. Today, we composers can hear our music every day too, and we don’t have to wear a white wig to please the aristocrat who was paying our salary!

 

And you also use virtual choruses as part of your canvas? But I do notice you have used a human voice before (Kira Fondse on Home & Love, while Cathy Colman reads her own poetry against synthesizer music in Body Politics). Is that an indication of a slightly different take on the human voice as against the electronic voice? Greater flexibility perhaps? Also there’s a Concerto for flute and virtual ensemble, which presumably mixes live and pre-recorded?

A human voice is a very different instrument from every other musical instrument because it is inside the human being, not something we bow, blow into, strike, pluck or pick. There is no animal gut, metal, bone or wood creating the sound. It is a living person, a sentient being. There is something so utterly personal about the human voice that I always use live singers in my songs. An exception is my Nine Hymns on Spiritual Life, which is on my newest CD. In these hymns I use a very well-recorded sampled chorus with AI software called a “word builder”. This software allows me to type in the words I want the choir to sing.

I’ve written to date four concertos, one each for flute, violin, clarinet and oboe, each accompanied by virtual orchestra. The oboe concerto is the only one that uses a sampled solo instrument, the other three are recorded with live players. Keep in mind that when I use the term “virtual orchestra”, I caution people not to take the term too literally. One musician is not an orchestra and can never be an orchestra. When sixty or seventy musicians are making music together, the social, psychological and spiritual energies are unique, particularly in live performance.  I’m not interested in trying to duplicate that exactly because it’s not possible. 

What I do is write and produce music in a computer music studio. It’s easy to adopt the term “virtual orchestra” because at no other time in history could a single musician create and record a multi-timbral, complex piece of music without using live musicians. I’m not motivated to fool anyone into thinking I’m creating a recording of a live orchestra. Instead, I’m conducting an ongoing experiment in recording music using new audio and digital instruments that allow for possibilities that didn’t exist before. 

 

Obviously, your music is performable by a regular orchestra, but I can imagine those slight human differences of attack and ensemble would create a very different effect? Programmed attacks have a certain accuracy humans would find difficult to achieve... and I strongly get the impression that this is where your music’s home lies, in the electronic sphere.

I think you’re right. As I said earlier, I am writing with and for a new medium. If I were to seek a performance of an orchestral piece I’d have to make numerous adjustments to get it to sound the way I want.  I am a practical musician; depending upon up to a hundred musicians, a choir and several synth players to hear one of my symphonies is not my idea of being practical. I agree with you, my music belongs in the electronic music studio medium, as Chopin’s music belonged with the piano, Mahler’s with the large orchestra and John Renbourn’s with the guitar. 

 

For your Eighth Symphony, you bring in ideas of a Cosmic Consciousness, an “other” area which composers tap into. It’s a term that has been used by the New Agers but also by mystics such as Rosicrucians. What is your understanding of this “field of possibility”?

The term “cosmic consciousness” may have first been introduced by Dr. Richard Maurice Bucke, who lived in the late 19th century. The idea is very close to what I think Jesus meant when he used the term “Kingdom of Heaven”. It’s the idea that within each of us is a spark of the infinite and absolute—the first cause of energy, matter, time, space, consciousness, personal reality and being. Whatever and whoever created the universe is within each of us.

Albert Einstein once said that we cannot solve a problem in the same state of consciousness that we were in when we created the problem. A good example is the problem of war. War is an institution and an industry; it is humanly contrived to resolve a social, political or economic problem, regardless of the stupidity, immorality and wastefulness of the whole barbaric enterprise. Though aggression and violence are natural to animals, modern warfare is far too technical, strategic and organized to be considered natural; it is no more natural than peace. But peace is superior, desirable and necessary, while war is none of these things. To abolish war we need a new state of consciousness that is built upon empathy, respect, cooperation, communication and compassion. In addition, international law must evolve so that we can hold all heads of state and their co-conspirators personally accountable for the suffering, grief, heartbreak and destruction they perpetrate on us with their political violence. We will not abolish war in a state of mind that embraces anger, revenge, hatred or the desire to dominate others, the very qualities that produce war in the first place.

Cosmic consciousness is partly discovered through science. When we see a mind-expanding Hubble photograph of a galaxy millions of light-years away, we get a sense of how mysterious, timeless, and huge the universe is, and how small and insignificant we are. Yet consciousness itself isn’t insignificant. Cosmic consciousness is not discovered only through science, but also subjectively, through what is best in human nature. We are the product of millions of years of biologic evolution, but we are also, to a degree, capable of transcending nature, time and space. At this moment I am having a conversation, a meeting of minds with you. But I am sitting in my office in San Francisco and I don’t know where you are or what time it is where you are [in the UK]. Through language, technology, memory and imagination we can transcend time and space. 

Others would say that cosmic consciousness is the personal insight that love is a real energy in the universe, an objective force, just as gravity and electromagnetism are real forces in the material universe. When we humans are loving toward one another, accepting of one another and understanding of one another, we are at our best. Spiritual geniuses like Christ and Buddha tried to reveal truth to us, but it’s clear that our ethical and spiritual development has not caught up with our material and technological progress, and, given the global problems we face, this imbalance is becoming a genuine threat to our species. We have some rebalancing to do. In meditation, we practice learning to be in the moment; we learn to use the mind in a new way, to sense and feel a more subtle reality that is always present, yet takes practice to perceive.

 

You mention time as an aspect of consciousness – if it (time) is a shared illusion, how does that impact the ideas behind this symphony?

Interesting question! Music plays with time; we can slow it down, speed it up, even stop it momentarily with a fermata. We can even make time go backwards, metaphorically speaking, by writing a piece in arch (ABCDCBA) form. Still, we’re just playing an intellectual game with music. In real life, time is the only thing that seems to move in one direction: we can physically move up or down, left or right, backwards or forwards, but time seems to go only one way. But maybe this will one day be proven to not always be the case.

 

I’m intrigued by your link to the Urantia Book, which I believe to be a channeled book and which also massively influenced Karlheinz Stockhausen, not least in his huge cycle Licht. Your modes of expression are very different, but maybe your intentions are not so different! What draws you to the Urantia Book, and how has it influenced, if it has, your Eighth Symphony?

I’ve been a student of the Urantia Book since 1981. I’ve read the whole book twice (it’s over 2,000 pages) and still read it often. I couldn’t possibly say all there is to say about it in this interview, but I can say it is the most intriguing, mind-expanding and provocative book I’ve ever read. I don’t know how it came to be. The authors (there are numerous authors) seem to be very concerned about the evolution of human civilization and the growth and maturity of the individual person. The book covers spiritual matters, moral and ethical development, science and cosmology, history and religion, and about one-quarter of the book is devoted to the teachings of Jesus. It has readers from all over the world, but there is no church, no rabbis or priests, no rituals, no costumes or uniforms, so one can hardly call it a religion in the conventional sense. And yet it powerfully inspires religious feelings in people. I believe the Urantia Book is true, but not infallible or perfect. If it is true, and there’s no evidence that it is not, the cosmos is far more friendly, governed, evolutionary and progressive than we have conceived it to be. I grew up in a Jewish home and got kicked out of Hebrew school when I was 12 for being disruptive and rebellious, but I became very interested in psychology and religion in my late teens.

Far too often, religion is weakened and dumbed down by nationalism, racism, sexism, rigid fundamentalist dogma, superstition and scientifically incorrect assumptions and conclusions. The Urantia Book teaches that there is no conflict between mature science and true religion; the facts regarding the physical universe do not conflict with the highest values of the greatest religions, which are concerned with the capacity to love and to trust in an ultimate sacred reality, even though we exist in a universe where we are inevitably faced with impermanence, uncertainty and physical death. The Urantia Book also speaks of the dangers of nationalism, and of how, as we develop loyalties to larger groups of people, we create conflict with other groups. The primary loyalty is to the family; the ultimate loyalty is to the brotherhood and sisterhood of humanity, planetary sovereignty. We call ourselves a civilization, but the fact remains we are only partially civilized. A fully realized human civilization has achieved sex equality, racial harmony and the abolition of war.

Everything regarding the physical sciences requires evidence, proof and repeatable experimentation—the mathematics and physics must correspond to how reality is structured; otherwise it is wrong or incomplete and needs to be re-worked. Science seeks to know what is objectively true about the physical universe. Religion seeks to uplift and transform the quality of subjectivity. It’s about personal growth, service to others, integration of selfhood and the capacity for gratitude. We as humans are profoundly interconnected with each other and the universe, and interdependent upon each other. The good that one person does affects us all; the evil one person does also affects all of us.

Every person who loves the arts knows that the scientific method is not the only true path to knowledge. If I go to the concert hall and listen to Mozart’s G-minor Symphony (No. 40) and come away enthralled and inspired, I could go into the audio lab and analyze all the frequencies, amplitudes and waveforms in the piece and get a better acoustical understanding of the music. But none of this information will explain why the piece moved me, why it triggered in me ideals, awe towards beauty and a desire to commit my life to something bigger than myself. The Urantia Book has a similar effect on me.

 

You seem to like traditional forms, particularly the symphony. What appeals to you about working on the large scale?

I’ve said this before in another interview, but it’s worth repeating: the term “symphony” comes with a lot of cultural preconceptions. There’s the hall, the conductor, the nicely dressed audience, the players. My definition of the term is limited to a multi-movement, multi-timbral work in which the composer makes a lot out of a little; the “economics of composition” is what I like to call it. The symphonic ideal is to develop themes, motives and other simple ideas into a larger vision of the possibilities of those ideas, based on variation, repetition and development. For me, form evolves from content. I don’t work in pre-conceived forms and then fill them with musical ideas. I write music and see where the musical ideas take me; the form materializes as I set the boundaries of the piece or movement.

 

The titles for the “Five Pieces for Virtual Instruments” are very individual. “Seraphim on a Subway” - can you explain?

Something in a piece reminds me of an idea, another piece, or an experience, and so I name the piece. Seraphim on a Subway is based on the solo soprano voice that is used in the piece (the seraphim) and the propulsive energy of the softsynth moving like a steady engine (the subway). After writing the piece, I thought of people going to work on the subway, each one immersed in their own world, reading the paper, drinking coffee, engaged with their thoughts, playing with their phones. We are often so unaware of the larger picture, the fact that human beings are being transported through a tunnel in a city on a planet spinning around an ordinary star in a stupendously gigantic galaxy. I was imagining how a non-carbon-based, highly intelligent life form might experience the scene.

 

There’s a playfulness there in these five pieces, would you agree? Reflected in titles such as “Baroquette”?

Yes.  Music is the outcome of play and work.

 

You are clearly prolific – how else are you going to expand your ideas?

I don’t know, time will tell ....
