
        Fanfare Interview with Jacqueline Kharouf

 

Jerry Gerber is a composer for virtual orchestra. He composes works that exist in digital space, for instruments that have been recorded as sampled sounds inside computers, and for other sounds created on synthesizers. To arrange and collect these sounds into music, Gerber uses MIDI (Musical Instrument Digital Interface), a technology that connects devices and sounds and allows him to control the rhythm, volume, and tone of his virtual orchestra. Instead of writing for physical instruments or live singers, he is more like a programmer writing code: each symphony is a whole digital space created from sounds that will never change and will never be reinterpreted by another conductor (or programmer) of a different virtual orchestra. Such a way of composing music—in short, of creating art—is fascinating, and in some ways deeply disturbing to the expectation that all music should be written for physical instruments and human performers. The possibilities for this music are quite literally endless and seem to hint at a shift in perspective (finally!) for the all-consuming category of “classical.”

Jerry Gerber composes music for orchestra, chamber and choral groups, along with songs, piano music, and other pieces for electronic instruments. He has written music for film and television, including music for both the movie and children’s television show The Adventures of Gumby, as well as music for computer games, such as Loom, from Lucasfilm. With degrees in composition and classical music theory from San Francisco State University, Gerber composes long-form works, gives guest lectures, and manages his own electronic music and recording studio. (I encourage readers to visit his website, jerrygerber.com, which describes his studio in technical detail, including the software, synthesizers, and digital recording devices Gerber uses to produce his albums, along with pictures of the recording space.) In the following interview, I asked Gerber about his technical work process, his 2019 studio release Earth Music, and a bit about the difference between music for virtual orchestra and other forms of electronic music.

I wanted to ask you about what I assume must be the somewhat tedious technical process of writing music with sound libraries and synthesizers and computer software. I wonder if you might describe this process with details (because I love some detail; if you can’t tell, please see below). How did the idea—or, perhaps, theme—for the four movements of Symphony No. 10 come together? Do you hear it first in your head, or does it emerge from the computer, from hearing other sounds (either from the sample libraries, perhaps, or synthesizers) and then organizing those sounds into melody and harmony, etc.?

When a DAW (digital audio workstation) receives input in the form of a musical note, the computer records the pitch, duration, velocity (volume), and location of that note. An example of the note’s location could be 23.3.240, which means measure 23, beat 3, tick 240 (the quarter note is divided into 480 ticks, so a note that occurs at tick 240 starts halfway into the beat). Every note in a composition is recorded in this way. When I am looking at the notes in the staff view, I see SMN (standard music notation), which is essentially the way I’d see the music on a written manuscript. The computer also displays this information as an event list, which is a list of every detail in the composition. MIDI CCs (control changes) are the programming tools we use to change the volume of a phrase, slightly change the tuning of an instrument, adjust attack and release times, and convey many other instructions as to how an instrument reacts to the composer’s musical imagination. The software sends this information over an Ethernet cable to another computer, which is dedicated to the Vienna Symphonic Library. This all happens in milliseconds, which allows for very precise timing and synchronization of all the musical parts. A modern DAW can sequence many instruments at the same time. My studio uses five MIDI ports, and each port can send 16 different instruments playing 16 different musical parts at once. Five ports × 16 instruments means that at any given time I can have 80 different instruments playing different musical parts. I have more available if needed, but 80 is usually more than enough. This doesn’t include all the synthesizer parts, which the computer handles a bit differently.
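
For readers who want to see this concretely, here is a minimal sketch, in Python, of how such an event list entry might be represented. It is my own illustration, not Gerber’s actual software; the field names are invented, though the 480-ticks-per-quarter arithmetic follows his example.

    # A minimal sketch of a DAW-style note event (illustrative only).
    TICKS_PER_QUARTER = 480

    def to_absolute_ticks(measure, beat, tick, beats_per_measure=4):
        """Convert a 1-indexed measure.beat.tick location (e.g., 23.3.240)
        into an absolute tick count from the start of the piece."""
        return ((measure - 1) * beats_per_measure
                + (beat - 1)) * TICKS_PER_QUARTER + tick

    # Gerber's example: measure 23, beat 3, tick 240. The note starts
    # halfway into the beat, since 240 is half of 480.
    note = {"pitch": 60, "velocity": 96, "duration": 480,
            "location": (23, 3, 240)}
    print(to_absolute_ticks(*note["location"]))  # 43440, assuming 4/4 time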

In my orchestral sound library, if I choose to orchestrate for first violins, there are dozens of sample-sets, each one containing thousands of samples: short and long notes, dynamics (notes that get louder or softer as the note sounds, including fp, sfz, etc.), pizzicato, mute, fast repeating notes, legato sample-sets, and more. There are samples for every note an instrument can play. Within each sample-set are further subdivisions; for example, in the dynamics patches, a note can crescendo for 2 seconds, or 3, 4, 5, or 6 seconds. And by inserting a MIDI command, I can have those same notes diminuendo, getting softer as the note sounds. My current library, the Vienna Symphonic Orchestral Cube, consists of over 764,000 samples.
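
To picture how a library that large stays navigable, imagine a nested index running from instrument to articulation to variant to note. The sketch below is purely my own invention for illustration; the Vienna library’s actual patch and file layout is surely different.

    # Hypothetical nested index for a sample library (not VSL's format).
    library = {
        "violins_1": {
            "sustain":   {"C4": "vln1_sus_c4.wav"},
            "pizzicato": {"C4": "vln1_pizz_c4.wav"},
            # Dynamics patches subdivided by crescendo length in seconds:
            "crescendo": {2: {"C4": "vln1_cresc2s_c4.wav"},
                          4: {"C4": "vln1_cresc4s_c4.wav"}},
        },
    }

    def find_sample(instrument, articulation, note, variant=None):
        """Return the sample file for a given articulation and note."""
        patch = library[instrument][articulation]
        return patch[variant][note] if variant is not None else patch[note]

    print(find_sample("violins_1", "crescendo", "C4", variant=4))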

The themes in my 10th Symphony might sound like they flow out of me effortlessly, but that is an illusion. The amount of editing I do, trying and discarding elements in the theme and in the composition, is extensive. Occasionally the initial impulse is good from the start, but it’s far more likely that the theme needs to be worked on for days before I am satisfied that each tone is the best tone at the right time.

What makes a note right or wrong? What makes it the best note? This is a highly intuitive process. I can’t prove I’ve chosen the best note for the theme, but I can feel I have done so. Sometimes one note works better than another, but a third attempt may work better than the previous two, and substituting still another note might create that eureka! moment. This is one difference between art and science. In science the scientist has to verify what he or she believes to be true against physical reality. The mathematics and physics must correspond with experimentation and observation; otherwise the work is considered false or incomplete. But in art, the artist has no external reality other than the highly subjective, intuitive creative imagination and a sense of needing to give expression to an idea, a feeling, a longing, or a vision. The artist and the scientist are ever searching for more knowledge, more understanding, and better ways of giving expression to experience. The search for truth, beauty, and righteousness is the underlying principle that allows culture and civilization to exist and to evolve.

 

And I wonder if you might also describe a bit of the space where you compose. Do you work in a home studio? Do you wear headphones? Are you something of an audiophile with regards to speakers or sound quality as you compose?

I compose and orchestrate using headphones because at that stage of the process I am focused on composition and ideas. When I make the final mix and master the recording, then I work over loudspeakers, because I am at that point focused on the sound as a whole rather than the ideas that make up the sound. However, I’m oversimplifying the process because of course I’m concerned with sound while I am composing and orchestrating. Sometimes when working on the final recording I hear a problem in the composition and/or orchestration that needs to be reworked. But in general, I begin the process with headphones and end with loudspeakers. If I were not an audiophile, I doubt I could work in the medium I do. My loudspeakers, the Adam S3Hs, are extraordinary tools for mixing and mastering music.

 

I also wanted to ask about how your method of writing now is different, or much improved, from the methods you used prior to recording samples and/or other tools that you now use. Are there any aspects from these early days of electronically or virtually composed music that you miss? If there were something that you might want to improve or make more accessible to new composers writing in this way, what would you improve or change?

No doubt over the years there have been many improvements. My technique, knowledge of composition, and understanding of my tools have improved, as have the software, computer technology, synthesizers, and music libraries. I don’t miss anything about my earliest attempts at composing music using synthesizers. I do, however, remember the day when I just knew that this was the medium I wanted to work in. Around 1985 I got a call from a friend of mine, Gary Leuenberger, who owned a piano store in San Francisco where he sold Yamaha, Baldwin, and Steinway pianos. Gary was also the chief programmer for the first commercially available MIDI digital synthesizer, the Yamaha DX7. Gary sequenced the first movement of Bach’s Brandenburg Concerto in D Major using the equivalent of eight DX7 synthesizers (a rack unit called the TX-816) and a hardware sequencer. He invited me and a few other musicians to listen to what he had sequenced. When I heard Bach’s music coming through the loudspeakers, I knew at that moment that this medium was going to evolve and that I wanted to be a part of it. At that moment I also realized that electronic music and classical composition were going to become very good friends.

 

After reading your excellent interview with Robert Schulslaper (45:4), I wanted to ask more about your discussion of sound and silence. I was fascinated by your answer because I think there is a perception that electronically composed music is innately “noisy,” and that because you, as a composer, have all of these sounds at your disposal, you can write music without any rests or breaks (such as the pauses for breath that a woodwind player would need, for example). Or to put it another way, I’m trying to make a distinction between electronica and the music for virtual orchestra that you write. I think silence—and the space between notes—is key to that distinction, but I wonder if you are ever concerned about a listener’s confusion (and therefore that they may misunderstand how much thought, theory, and compositional technique distinguishes your work from someone else just doing a pop remix, etc.). Should the average listener make this distinction along the spectrum of electronic sounds or, in your mind, is there no distinction at all because both the remix and your symphony have been created on computers?

Well, there’s music and then there’s the medium we use to realize the music. Let’s not confuse the two! We can listen to a gifted performer who sits down at the piano and plays Chopin with such expression and masterful technique that we are mesmerized and deeply appreciative of the performance. And we can also walk into a party (well, pre-Covid!) where there’s a piano and be subjected to some clumsy soul banging on the keys with no sensitivity, no phrasing, no musicality to speak of. It’s the same instrument, but a very different experience. If we substitute “computer music studio” for “piano” we have the same issue.

To address your question about silence: Phrasing, articulation, gesture, and intention require us to take silence into account. Otherwise it’s barely music. Just because a note played by the computer can be held for far longer than an acoustic musician could sustain it doesn’t mean it should. If the internal dynamics of a given tone are too static, holding it for a long time is usually detrimental to the music. Mechanicalness is the death of artistic expression. Playing the piano or violin badly can sound mechanical, lifeless, without that human touch. The same is true with computer music. Because a computer can play notes with rock-solid timing and an uncanny “perfection,” it is critical that we do not let that degrade into a mechanicalness, devoid of expression, dynamics, and a sense that there’s an artistic human consciousness behind the notes. The greatest praise I’ve ever heard given to a composer is when someone said (I paraphrase) that Mozart’s music is so simple a child can enjoy it and so sophisticated that the most erudite musician can appreciate it. For myself, that is a great ideal to strive for.

Today, everyone is a publisher. I mean it literally: everyone who writes poetry, takes pictures, plays an instrument, argues a political point, or sings songs can record and publish their work to the world, whether or not that work is worthy and ready for publication. Musicians who are complete beginners publish. Junk writers publish. Even scientists sometimes publish without having their work thoroughly peer reviewed. The internet is a mirror of human culture. Human beings and the cultures we belong to are on varying intellectual, psychological, moral, spiritual, and social levels all at the same time, across the entire planet. It’s up to each one of us to become discerning. In fact we need to be more discerning than ever, for if ever human beings needed to acquire and fine-tune our bullshit detectors, it’s now, as we’re clearly in the Age of Propaganda: social media and the internet are greatly complicating our relationship to truth, lies, and nonsense.

I’m reasonably sure that 40,000 years ago, when the first human picked up a bone, put some holes in it, and began to blow tones through it, there were those who became really good at it and those who did not. The same is true today. The medium is secondary; primary is the consciousness, ability, and mindfulness of human beings.

 

Earth Music is the title of the album that I will ask about for this feature. I like this title for a number of reasons—it sounds like something discovered from the future and something an alien might have uncovered from the Universe—but I wonder if you might describe how you work at completing an album. Do you write with a concept already in mind, or do you pull from older pieces that you may have previously written but not completed and organize them for future projects?

The process always begins with desire, the desire to express something in music. I decide what kind of pieces I want on the CD, usually one longer work and several shorter works. One of my CDs comprises 12 short vocal works for voice and electronics. Many contain one symphonic work in four movements, or a concerto featuring a live performer, plus a few short pieces. My earliest CDs did contain some older compositions that I originally wrote for live players but reworked for digital instruments. All of my later recordings contain music written specifically for that CD. Sometimes I’ll include a short work written for synthesizers only, with no orchestral samples.

 

I must confess that I really love hearing the more obviously computer-generated sounds (I call them beeps and bloops, but I assume there is a more technical term for these types of digital sounds) mixed with the sounds of strings and percussion instruments. It is very interesting to hear how those digital sounds blend with and almost mimic the sounds of the strings (as in the fourth movement, Passacaglia, of Symphony No. 10)—almost confronting the notion of perception, the artificial and man-made nature of these sounds that we perceive as “music.” Is that perhaps part of an underlying discussion for Earth Music—a confrontation with the listener’s perceptions of music?

Synthesizers can be used to imitate acoustic instruments (it’s better to use sample libraries for that purpose), or they can be used to create timbres that no acoustic instrument in the world can produce. I don’t know exactly what to call these sounds either; I suppose “synth timbres” is one term to use. Modern software synthesizers like Dune or Zebra are incredible musical instruments. The vast variety of timbres these instruments can produce is really impressive, and it’s enormously fun to create sounds from scratch (the default “scratch” sound is always an ordinary sine wave). From there we can substitute new and endlessly complex waveforms; add filtering effects; add low frequency modulation to the pitch, harmonic content, and amplitude; introduce arpeggiation so that one note becomes an entire rhythm section; design a complex envelope and apply that also to pitch, amplitude, or harmonic content. Modern software synthesizers also have a huge array of signal processing options such as delay, reverb, chorus effect, etc. When I design or choose a synth timbre I listen very carefully to what’s going on in the sound. From there I get clues as to how to proceed harmonically and rhythmically and integrate it into the composition.
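
As a rough illustration of that “from scratch” process (a sketch under my own assumptions, in Python with NumPy, not a recipe from Dune or Zebra), a few lines can generate the ordinary sine wave, modulate its pitch with a low-frequency oscillator, and shape its loudness with an envelope:

    import numpy as np

    SR = 44100                           # sample rate in Hz
    t = np.linspace(0, 2.0, 2 * SR, endpoint=False)

    # Low-frequency oscillator modulating pitch: 5 Hz vibrato, +/- 6 Hz deep.
    lfo = 6.0 * np.sin(2 * np.pi * 5.0 * t)
    phase = 2 * np.pi * np.cumsum(440.0 + lfo) / SR
    tone = np.sin(phase)                 # the basic sine timbre, now with vibrato

    # A simple attack/release amplitude envelope: 50 ms rise, 0.5 s fall.
    env = np.minimum(t / 0.05, 1.0) * np.minimum((2.0 - t) / 0.5, 1.0)
    signal = env * tone                  # ready to write to a sound file

Each further feature Gerber lists (filtering, arpeggiation, complex envelopes, effects) is another stage layered onto this same signal chain.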

I suppose that listening to my music requires a certain adjustment and openness on the part of the listener. All composers depend upon the listener to complete the artistic process. An open-minded, sensitive listener is certainly desirable, but never guaranteed. I’ve never thought of my music as a confrontation with the listener, but now that you put it that way, I can see that this is a valid way of seeing the relationship between composer and listener. My hope is that the listener will experience, if only briefly, a transcendent emotion that everyday life doesn’t often evoke in us. If music can evoke the mysterious, sacred, and cosmic dimensions of life, all the better.

 

I think that confrontation with our perception—our notion of what sounds can (or should) make music—is also present in the three piano works that conclude the album. In these pieces, I think it would be easy for a listener to assume these were played on a physical piano. But after listening closely, you realize that some of the rhythms and chords would be too difficult for one performer to execute. These pieces reminded me of looking at an Escher print—you accept the established pattern or the design of the building, but then realize the paradox within the structure. I wonder if you might speak a little more about some of the impossibilities within these pieces. Are there ever any limitations to what the computer will allow? In other words, although you may break rules regarding the physicality of a human performer, do you ever have to circumvent any computer parameters to achieve a particular sound or rhythmic structure?

Every instrument, digital or acoustic, has its limitations. The trick is always to emphasize the strong points of an instrument and downplay the weak points. The digital piano I use is a software program that is not a sampled or synthesized piano, but rather what’s called a physically modeled one: a piano sound is analyzed and broken down into many components, and those components are recreated, or physically modeled, through computer algorithms and programming. It’s true that I can write chords that cannot be played by one human player, but I don’t do that as an end in itself; I do it because the composition calls for it. Instead of writing, say, for two acoustic pianists, I can write for one digital piano, and it can handle as many notes as I assign to it.
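
For the curious, the idea behind physical modeling can be seen in miniature in the classic Karplus-Strong plucked-string algorithm, which is vastly simpler than any piano model and is offered here only as an analogy, not as the software Gerber describes: a short delay line of noise, filtered and fed back on itself, decays into a surprisingly string-like tone.

    import numpy as np

    def karplus_strong(freq, duration, sr=44100, damping=0.996):
        """Toy plucked-string physical model (Karplus-Strong)."""
        n = int(sr / freq)                 # delay-line length sets the pitch
        buf = np.random.uniform(-1, 1, n)  # the "pluck": a burst of noise
        out = np.empty(int(sr * duration))
        for i in range(len(out)):
            out[i] = buf[i % n]
            # A two-point average low-pass filters the feedback loop, so
            # high harmonics die away faster, as they do on a real string.
            buf[i % n] = damping * 0.5 * (buf[i % n] + buf[(i + 1) % n])
        return out

    tone = karplus_strong(220.0, 2.0)      # two seconds of a string-like A3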

 

In your interview with Robert Schulslaper, you also mention the similarities between computers and instruments as man-made machines. I won’t ask you again to defend your chosen method for writing music, but I wonder why you are not more widely heard or why you tend to avoid a “live” performance (however you might imagine that). I very much understand the mentality of being akin to a poet or a painter and not seeking the attention of a large audience—but I think for the sake of eliminating some of the mystery, of bringing your method of delivering and creating this music down to earth (if you’ll forgive the pun), it would be really revolutionary—and disruptive, in the best sense—to the static energy of “classical” as a category of music. Do you have a sense that there might be an audience for your music? And even if that audience is only me, Robert Schulslaper, and yourself, what would you imagine for a live performance?

My website statistics report that my site receives over 5,500 visits and over 900 listens a month, so I know people are definitely listening to my work. It’s not that I am not seeking a larger audience; I’m just not doing so through performance in the traditional sense. I am of the first generation of musicians able to produce fully orchestrated multi-timbral works from a studio without using 50-100 players. That is revolutionary in one sense, and yet I am just a small part of a great tradition, albeit a tradition, like all traditions, that must be open to change. Covid has made live performance very uncertain these days: concerts are constantly being postponed or cancelled, people are fearful or just cautious about being indoors with many people, and a whole new uncertainty seems to pervade our social activities. It would probably not be a good time to pursue live performances, even if that were what I wanted most. I will have to be resourceful and figure out how to increase my audience without risking people’s health, at least for the foreseeable future.

 

What is your next project?

I’ve completed my Nine Hymns on Spiritual Life and am currently working on the third and last movement of a new oboe concerto.
