Excerpt from The Art of Theatrical Sound Design – A Practical Guide by Victoria Deiorio

Biology, Physics, and Psychology

When we recreate life or dream up a new version of our existence, as we do in theatre, we rely on the emotional response of an audience to further the intention that led us to create the work in the first place. Therefore, we must understand how what we do affects the human body's senses. And to understand the human reaction to sound, we have to go back to the beginning of life on this planet and how hearing came to be.

The first eukaryotic life forms (organisms with well-defined, nucleus-bearing cells) sensed vibration. Vibration sensitivity was one of the very first sensory systems; it began with the externalization of proteins into small mobile hairs that we now call cilia. These helped the earliest life forms move around, transforming them from passive into active organisms. The hairs moved in a way that created a sensory system capable of detecting changes in the movement of the fluid in which the organisms lived.

At first this was helpful because it signaled the presence of predators or prey. Eventually an organism could detect its environment at a distance through the vibration it felt in the surrounding fluid; essentially, it was interpreting vibration through the sense of touch on the cilia. To go from this simple system to the complexity of the human ear is an enormous evolutionary leap that took billions of years, but we cannot deny the link.

Now if we look at the human ear, it is an organ equipped to perceive rapid alterations in air pressure. The inner ear contains the cochlea, a snail-shaped spiral structure filled with fluid that aids in balance and hearing. In this fluid-filled spiral are hair cells bearing mechanosensing organelles roughly 10-50 micrometers in length. These hairs capture high and low frequencies depending on their placement within the cochlea (higher frequencies at the start of the spiral, lower frequencies further inward). They convert mechanical and pressure stimuli into electrical signals that travel along the auditory nerve to the central nervous system.

Simply put, sound enters the ear; vibrations are detected in the fluid by the cilia and are converted into perception at the brain. Note that we use this same process of capturing vibrations and creating an electrical signal, which is then interpreted by the different ‘brains’ of our technology, as the signal flow on the mechanical side of our work as sound designers. Our ears are the best example of organic signal flow.
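
As a rough illustration of that parallel, here is a minimal sketch of a signal flow in Python. The stage names, the 440 Hz tone, and the gain value are assumptions chosen purely for illustration; this is an analogy for the chain of capture, transduction, and interpretation, not a model of the ear or of any particular rig.

```python
# A minimal sketch of "organic signal flow" as a chain of stages:
# vibration -> transduction -> interpretation. Purely illustrative;
# the stages and numbers are assumptions, not a model of the ear.

import math

def capture(duration_s=0.01, sample_rate=8000, freq_hz=440.0):
    """Capture air-pressure vibration as a list of samples (a 440 Hz tone)."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def transduce(pressure_samples, gain=0.5):
    """Convert the mechanical 'pressure' into an electrical-style signal (apply gain)."""
    return [gain * s for s in pressure_samples]

def interpret(signal):
    """The 'brain' of the chain: reduce the signal to a perceived level (RMS)."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

level = interpret(transduce(capture()))
print(f"perceived level: {level:.3f}")
```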

Sound can be split into two aspects: physics and psychology. Physically, we do not register every vibration on a one-to-one basis; that would be too overwhelming for us to interpret.

There are two innate conditions to our hearing.

  • Firstly, we hear only within a specific range of frequencies. We don’t hear the highest frequencies a bat produces while flying around on a quiet rural evening, and a train rumble is not nearly as violent to us as it is to an elephant in a city zoo, an animal that uses infrasonic (extremely low-frequency) communication. The sketch after this list illustrates these approximate ranges.
  • Secondly, within our perceptible range we selectively choose what we want to hear so that the aural world around us is not overpowering. When no new meaning is being introduced, we filter out what we don’t need to hear.
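
The sketch below illustrates the first condition with commonly cited approximate figures. The exact ranges vary by individual and by species, so treat the numbers as rough assumptions for illustration, not precise biology.

```python
# Approximate hearing ranges in Hz (commonly cited figures; individual
# animals and individual humans vary considerably).
HEARING_RANGES = {
    "human":    (20, 20_000),
    "bat":      (1_000, 100_000),   # echolocation calls reach the ultrasonic range
    "elephant": (14, 12_000),       # communicates with infrasound below ~20 Hz
}

def audible_to(species, freq_hz):
    low, high = HEARING_RANGES[species]
    return low <= freq_hz <= high

print(audible_to("human", 50_000))   # False: a bat's highest calls are ultrasonic to us
print(audible_to("elephant", 15))    # True: an infrasonic rumble an elephant can hear
```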

This brings us to the aspect of psychology through perception and psychophysics: the study of the relationship between stimuli and the sensations and perceptions those stimuli evoke.

Perception

The first step in the development and evolution of the mind is the activation of psychophysics in the brain. If you understand what you are experiencing around you, you can apply that understanding to learn and grow. This is how the mind evolves. And over the course of our evolution, our minds developed a complex way of interpreting vibration and frequency into meaning and emotion.

Everything in our world vibrates, because where there is energy there is vibration, some of it more complex than others but never totally silent. The brain seeks patterns and is continually correlating sensation with perception of the energy that hits our bodies at a constant rate. When the brain interprets a repetitive vibration, it creates a neural pathway of understanding around that vibration, which is why we can selectively block out sound we don’t need to hear, e.g. the hum of fluorescent lights or the whir of a computer. But when a random pattern occurs, like a loud noise behind us, we immediately adjust our focus to comprehend what made that sound, where it is in relation to us, and whether we are in danger.

Hearing is a fast processing system. Not only can the hair cells in your ears pinpoint vibrations, and specific points in the phase of a vibration (up to 5,000 times per second), but we can also perceive changes to that vibration as often as 200 times per second. It takes about a thousandth of a second for the inner ear to register a vibration; a few milliseconds later the brain has determined where the sound is coming from; and the signal reaches the auditory cortex, where we perceive the meaning of the sound, less than 50 milliseconds after the vibration hit the outer ear.

The link between vision and sound is an important factor in our work as sound designers. At the auditory cortex we identify tone and comprehend speech, but that cortex is not just for sound. There are links between sound and vision in sections of the brain that help with comprehension. For example, the auditory cortex can also help with recognizing familiar faces that accompany familiar voices.

The more we learn about the science of the brain, the more physical interconnectedness we find between the sensory perception functions. This is why, when sound in theatre matches what is seen, it can be an incredibly satisfying experience. It is the recreation of life itself, and when it is precisely repeated, it is as though we are actually living what is being presented in the production.

What is fascinating about the perception of hearing is that it does not solely occur in the auditory parts of the brain, it projects to the limbic system as well. The limbic system controls the physical functions of heart rate and blood pressure, but also cognitive function such as memory formation, attention span, and emotional response. It is why music can not only change your heart rate, but also bring you back in time to a specific moment that contained detailed emotions. This link to the limbic system is the aspect of cognitive hearing we use the most when placing music into a theatrical performance to support what is happening emotionally on stage.

Philosophy of How We Hear

As a sound designer, you need to know what the collective conscious experience of sound is. Any experience that human beings encounter can be reproduced on stage, either through direct mirroring or through metaphoric symbol. Although this happens at the physical level of re-creation, the reproduction, because it is a dramatized telling of a story, often lives within the metaphysical level of experience.

In order to understand the difference between the physical and the metaphysical experience of sound, we must turn to phenomenology (the philosophy that deals with consciousness, thought, and experience). When we can break apart the experience of hearing sound and master control over it, we can use it as a tool to affect others. You want to learn sound’s constitution in order to be able to control it.

Most theatregoers do not notice the sound that accompanies a performance unless they are specifically guided to notice it. We provide the perception of depth in the moment-to-moment reality by filling in the gaps where the visual leaves off. In order to recreate true-to-life experiences, we supply the aural world in which the event exists.

If people do not normally take note of how they process sound in their lives and of its presence in their experiences, they will not be able to identify it in a theatrical production as audience members. For most people, sound in theatre will be imperceptible because their ears are not tuned to how they perceive it. We, as sound designers, want to use that to our advantage. But we can only do that if we completely understand how human beings process sound.

Manipulative Usage Because of Evolution

In a theatrical environment the audience is listening with goal-directed attention, which focuses sensory and cognitive skill on a limited set of inputs. Therefore, we as sound designers can shape the sonic journey, selecting specific frequencies and attenuations and making the environment as complex or as pure as our design requires.

We use stimulus-based attention when we want to capture awareness and redirect focus, because certain sound elements act as triggers when no previous neural pathway exists for them. We create sonic environments by bringing attention only to what we supply; there is nothing for an audience to filter out, because it is all on purpose.

When a sudden loud sound happens it makes the audience jump, and if we now add a low-frequency tone to it, the brain automatically starts making subconscious comparisons, because this limited input is the only information the audience is receiving. We control what they hear and how they hear it.

Low frequencies have their own special response in a human being. There is an evolutionary reason why low frequencies immediately make the brain suspect there is danger nearby. Some have suggested that hearing the high-amplitude infrasonic components of an animal growl immediately forces humans into a fight-or-flight response. More importantly, though, loud infrasonic sound is not only heard, it is felt. It vibrates the entire body, including the internal organs. Even with the subtlest use of low frequency, you can create unease internally.

One of the most powerful tools for a sound designer is the ability to create the feeling of apparent silence. This is relative to the sound experience preceding and following the silence. It has its own emotional response because we are normally subconsciously subjected to constant background noise. The absence of sound increases attention and it can increase the ear’s sensitivity.

The increase in attention caused by silence has the same effect as the increase in attention from a sudden loud noise, with one exception: the detection of the absence of sound is slower. Perhaps this comes from the ‘silence before the storm’ feeling of impending danger, the sense that something is wrong because something is missing. Or it may come from the way insects in nature stop making noise when a predator is near.

It would be safe to assume that fear governs survival, making you either stay and fight, or run away. And this learned evolutionary behavior comes from the need to survive and has dictated the commonality of our reaction to this type of sound.

There is no specific region of the brain that governs positive complex emotions derived from sound, which makes it more complicated to understand what triggers them. Positive emotions come from behavioral development. What makes one person excited could be boring to the next, because this kind of emotion is less a reflexive reaction than one built from experience.

There are universal sonic elements that imply that loudness equals intensity, low frequency equals power and large size, slowness equals inertia, and quickness equals imminence. These elements common to sonic communication are exactly what sound designers use to guide an audience’s response to a specific storytelling journey.

Complex emotional response comes from complex stimuli, and in theatre the cleanest way to produce positive complex responses is to have multisensory integration. In simple terms, if the sound matches the visual in timing, amplitude, and tone, it produces a gratifying reaction even if the emotion is negative. This is because it activates more regions of the brain.

But let’s be clear about our specific medium: the most powerful and common stimulus for emotional response is sound. And this resides mostly in our hearing of, and association with, music.

Music

Music draws attention to itself, particularly music that is not sung. The meaning lies in the sound of the music itself. We listen reflectively to wordless music. It enlivens the body because it plays upon a full range of self-presence. Music is felt in its rhythms and movements. The filling of auditory space equates to losing distance as you listen, and therefore creates the impression of penetration.

The sound of music is not the sound of unattended things that we experience in our day-to-day existence. It is constructed and therefore curiously different. Each piece of music, each genre or style is a new language to experience. Music comes to be from silence and it shows the space of silence as possibility. The tone waxes and wanes at the discretion of the musician, and composers can explore the differences between being and becoming, and actuality and potentiality.

In his book Listening and Voice: Phenomenologies of Sound, Don Ihde remarks: “The purity of music in its ecstatic surrounding presence overwhelms my ordinary connection with things so that I do not even primarily hear the symphony as the sounds of instruments. In the penetrating totality of the musical synthesis it is easy to forget the sound as the sound of the orchestra and the music floats through experience. Part of its enchantment is in obliteration of things.”

Music can be broken down simply into the idea that it is a score with instrumentation. It is a type of performance. And if we look at how we respond to music, the answer can be found in each unique performance and its effect upon us. A score will tell a musician how to perform a piece of music, and they can play it note-perfect. But one performance can vary from another because of the particular approach to the music taken by the individuals involved in creating it.

Because one cannot properly consider a work of art without considering it as meaningful, the art of music must be defined as having meaning. To define it as meaningful would lead you to think that it is possible to actually say what a specific piece of music means. Everything is describable. But if you put the meaning of music into words, you almost seem to trivialize it, because you cannot capture the entirety of its meaning.

What we can communicate when defining music is its character. Most use emotion to do this. For example, the music can be sad. But is it the particular phrase in the music that is sad, or the orchestration, or the totality of the entire piece that creates the analogy? Or, more specifically, can it be the experience of hearing the music that creates the feeling of sadness? Even this is ambiguous, because even if you arrive at a definition of how the music affected you emotionally, you still have to determine what about the expression was effective for everyone else who experienced it.

We can try to describe music in the sense of its animation. The properties of its movement correspond to the movement properties of natural expression. A sad person moves slowly, and perhaps this is somehow mirrored in the piece of music. The dynamic character of the music can resemble human movement. Yet this alone would also not fully explain what it is to experience sad music because human movement implies change of location and musical movement does not.

When music accompanies an additional form of expression, as it often does in theatre, it can evoke two kinds of reaction from an audience. The first is that the audience empathizes with the experience; they feel the sadness the performance is expressing (within). The second is that they can have a sympathetic reaction; they feel an emotion in response to what is being expressed (without).

What is so dynamic about music is that it can function as a metaphor for emotional life. It can possess the qualities of a psychological drama, it can raise a charge towards victory, it can struggle with impending danger, or it can even try to recover a lost past. And it can describe both emotional and psychological states of mind. It is almost as if the experience is beyond the music, even though we are not separated from it. Our inner response may wander in our imagination, but it is the music that is guiding us. That is why the experience of listening to music is often described as transporting us.

Music is perhaps an ordered succession of thoughts that could be described as a harmonic and thematic plan by the composer. In this idea we apply dramatic technique to music in order to create a conversation between the player and the listener. We use words like climax and denouement, with a unity of tone and action. We even use the term character. We develop concepts and use music as a discourse to express metaphor. But at times the intellectual content is almost coincidental. We did not set up this conversation to be heard intellectually; it is intended to be felt emotionally. The composer and musician direct our aural perception as listeners.

Feeling Music

It seems that music should lend itself to a scientific explanation of mathematics, frequency, composition of tone, amplitude, and time; however, there is much more to music than what can be studied. There have been interpretations of the different intervals in music and the reactions they elicit in the listener. The main determinant of emotional response is whether the interval is consonant or dissonant.

Consonance is a combination of notes that sound harmonious because of the relationship of their frequencies. Dissonance is the tension created by a lack of harmony in the relationship of their frequencies. The specific determination of these terms has changed throughout history with the exploration of new ways of playing music, and it remains very difficult to reach consensus over.

What is known scientifically is that listeners respond differently to consonant intervals than to dissonant ones. This goes further into the meaning of perception when we take into account the different emotional responses to intervals in a major key versus a minor key. It is why we tend to think of major keys as happy and minor keys as sad. Keep in mind, however, that both major and minor keys can contain consonance and dissonance.
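
One way to see why some intervals sound consonant is to look at their frequency ratios. The sketch below uses standard just-intonation ratios, but the ‘simplicity’ score it prints is only a rough illustrative heuristic, not a psychoacoustic model.

```python
# Just-intonation frequency ratios for a few intervals. Simpler ratios are
# generally heard as more consonant; the "simplicity" score (numerator plus
# denominator) is only a rough illustration, not a model of perception.
from fractions import Fraction

INTERVALS = {
    "unison":        Fraction(1, 1),
    "octave":        Fraction(2, 1),
    "perfect fifth": Fraction(3, 2),
    "major third":   Fraction(5, 4),
    "minor third":   Fraction(6, 5),
    "tritone":       Fraction(45, 32),  # often cited as a strongly dissonant interval
}

for name, ratio in sorted(INTERVALS.items(), key=lambda kv: kv[1].numerator + kv[1].denominator):
    freq = 440.0 * float(ratio)  # the interval built above A440
    simplicity = ratio.numerator + ratio.denominator
    print(f"{name:14s} ratio {ratio}  ->  {freq:7.1f} Hz  (simplicity {simplicity})")
```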

Instrumentation

Every musical instrument creates its own harmonics based on the material of which it is made and how it is played (struck, bowed, forced air, etc.). And these harmonics add to the perception of consonance and dissonance. This is why a violin produces a different emotional response than a tuba does. Each instrument has its own emotional tone; and when combined, the harmonics mix together in a way that creates a complexity of different emotional states.
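
The sketch below illustrates the idea of harmonics as integer multiples of a fundamental. The harmonic series itself is standard physics; the two sets of weights are invented solely to suggest how a ‘brighter’ and a ‘darker’ instrument might distribute energy differently.

```python
# Harmonics are integer multiples of an instrument's fundamental frequency.
# The weight lists below are invented for illustration only; a real violin
# or tuba spectrum depends on the instrument, the player, and the room.
def harmonic_series(fundamental_hz, n_harmonics=6):
    return [fundamental_hz * k for k in range(1, n_harmonics + 1)]

bright_weights = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5]    # strong upper harmonics ("brighter")
dark_weights   = [1.0, 0.4, 0.2, 0.1, 0.05, 0.02]  # energy concentrated low ("darker")

for freq, bright, dark in zip(harmonic_series(220.0), bright_weights, dark_weights):
    print(f"{freq:7.1f} Hz  bright {bright:4.2f}  dark {dark:4.2f}")
```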

When you place that complexity into a major or minor key using either consonance or dissonance and vary the speed and volume, you achieve music. A great composer uses each instrument’s emotional tone, whether alone or in concert with other instruments, to convey what they want their audience to feel.

Rhythm

Rhythm can be defined as a temporal sequence imposed by metre and generated by musical movement. And yet not all music is governed by metres and bar lines. Arabic music, for example, composes its time into cycles that are added together asymmetrically, as in the sketch below. The division of time by music is what we recognize as rhythm. For a sound designer it is important to note that the other aspect of rhythm is not about time; it is the virtual energy that flows through the music. This is what causes the human response that allows us to move with it in sympathy.
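
Here is a minimal sketch of such an additive cycle. The 2+2+2+3 grouping is chosen purely for illustration and is not tied to any specific named rhythm.

```python
# An additive rhythm cycle: beats are grouped asymmetrically (here 2+2+2+3,
# an illustrative grouping) rather than divided evenly by bar lines.
# "X" marks the accented downbeat of each group, "." an unaccented beat.
def additive_cycle(groups):
    pattern = []
    for g in groups:
        pattern.append("X")            # accented start of the group
        pattern.extend("." * (g - 1))  # remaining unaccented beats
    return "".join(pattern)

print(additive_cycle([2, 2, 2, 3]))  # X.X.X.X.. -> a 9-beat asymmetric cycle
```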

Each culture depends on its own interpretation of rhythm. A Merengue will have a much different feeling than a Viennese waltz. The sociological and anthropological aspects to rhythm vary greatly, and rhythm is the lifeblood of how subgroups of people express themselves. It can be as broad as the rhythms of different countries, to the minuteness of different regions that live in close proximity. Rhythm is a form of expression and can be extremely particular depending on who is creating it and who is listening to it.

Association of Music

Music is one of the strongest associational tools we can use as sound designers.

One of the most often remarked-upon associations is John Williams’ theme music for the movie Jaws. A single tuba, sounding very much alone, slowly plays a heartbeat pattern that speeds up until it has accelerated to the point of danger.

This is a perfect example of how repetition and a change of pace in music can cause your heart rate to speed up or slow down. It is why we feel calm with slower pieces of music and excited with more complex or uneven tempos. The low, near-infrasonic resonance of the frequencies created by the tuba imparts the feeling of danger approaching. When matched with the visual aspect of what you are trying to convey, a woman swimming alone in the ocean at night, the association is a very powerful tool for generating mood, feeling, and emotion.
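
A simple way to picture that accelerating heartbeat is as a series of onsets whose spacing shrinks over time. In the sketch below, the starting interval and the acceleration factor are assumptions for illustration only.

```python
# A sketch of an accelerating pulse: each inter-onset interval shrinks by a
# fixed factor, so the beats bunch closer together as the pulse speeds up.
# The starting interval and factor are illustrative assumptions.
def accelerating_onsets(start_interval_s=1.0, factor=0.9, n_beats=12):
    onsets, t, interval = [], 0.0, start_interval_s
    for _ in range(n_beats):
        onsets.append(round(t, 3))
        t += interval
        interval *= factor  # each gap is 10% shorter than the last
    return onsets

print(accelerating_onsets())
```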

In sound design you also have the ability to practice a kind of mind control: you can transport your audience to a time and place they have never been, or to a specific time and place that supports the production. Have you ever been driving in a car when a song from your youth comes on the radio, and you are instantly flooded with the emotions, sights, smells, and feelings of when that song first hit you and what you associated with it at the time?

When you’re done reliving these sense memories, you’ve still maintained the action of driving safely and continued the activity you were doing. But your mind was briefly taken over while your brain still functioned fully. The song controlled you temporarily. Music has the ability to transform your surroundings by changing the state of your mind.

Non-Musical Sound

Sound that is not musical can also be put together, composed, and made to evoke feeling. Sonic construction of non-musical phrasing pushes the boundaries of conventional musical sound. And if music lies within the realm of sound, it can be said that even the active engagement of listening to compound non-musical sound can evoke an emotional response. It can be tranquil, humorous, captivating, or exciting. The structure and instrumentation of the composition are what give the impression of metaphor.

We use non-musical sound a great deal in sound design as we are always looking for new ways of conveying meaning. And depending on the intention and thematic constructs of a piece of theatre, we create soundscapes from atypical instrumentation that are meant to evoke a feeling or mood when heard.

Technological Manipulation of Music and Sound

As for the technology of amplification and manipulation of sound, I think of it this way: recorded music versus live music is similar to printed words versus the act of writing on a page. Music has been made universal, with prolific genres and styles, and it is pervasive in our society. And recorded music allows for its distribution globally.

In the beginning, recorded music lacked a purity of sound no matter how good the reproduction. It recorded the live playing without thought to auditory focus, fields, or horizons. Now we approach recorded music with intention, knowing it is its own form of music, separate from live music. The electronic elements of music no longer get in the way; they aid in creating a more dynamic production of music.

With a shift in musical technology, a deeper shift of insight and creativity can occur. There can be infinite flexibility and possibilities. Just as instruments in the past needed to be invented, developed, played, and tuned to create music, the same applies to technology. And although humans have been experiencing and producing music in diverse cultures since ancient times, this is simply the next step in how we create music as artists.

Music’s language is based on who is exercising control. The composer, conductor, and musician exercise control over pitch, timbre, dynamics, attack, duration, and tempo. There are attributes to the voice of the instrument and to the space within which it is played. The control is a manipulation of sonic events and their relationships to each other.

Mixing engineers manipulate the rules that influence these relationships, within a range from subtle to broad. And those composers and musicians who bring virtual instruments into their writing have a wider range of rules than could historically be produced.

The creative application of spatial and temporal rules can be aesthetically pleasing. When sound or music elicits an aural image of space in support of the visual aspects, as it can in theatre, that space is not necessarily a real environment. And it can be exciting when differences in aural and visual space coexist simultaneously.

Let’s look at the manipulation of the electronic components of delivering sound. In the past, audio engineers would change, adjust, add, or remove components from their signal-processing algorithms. These would create ‘accidental’ changes in the sound. The engineers would then refine their understanding of the relationship between the parameters and algorithms of their equipment and the sound that was produced.

Now, aural artists working with audio engineers can create possibilities of imaginative sound fields. This highlights the interdependence between the artists, both aural and engineering, and science. No matter how abstract or technical the specifics of the sound presentation, each complex choice has real implications for how it will sound.

The mixing engineer can function as musician, composer, conductor, arranger, and aural architect. They can manipulate the music and the space wherein it lives and then distribute it to an audience. There are rarely any notations in musical compositions that contain specifics regarding spatial acoustics, and engineers have taken on the traditional responsibilities of the acoustic architect.

When an audio engineer designs an artificial reverberator to achieve something other than the natural reverberation of a space, and the audio mixer then adjusts its parameters, together they replace the acoustic architect who built the theatre. Physical naturalness then becomes an unnecessary constraint, replaced through intention by artistic meaningfulness.
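
As a hint of what such an artificial reverberator does under the hood, here is a minimal feedback comb filter, one classic building block of Schroeder-style reverberators. The delay length and feedback gain are illustrative assumptions, not values from any particular device, and a real unit would combine several such filters with allpass stages.

```python
# A minimal feedback comb filter: each output sample is the input plus a
# decaying echo of the output from delay_samples earlier. Delay length and
# feedback gain are illustrative assumptions only.
def comb_filter(dry, delay_samples=1500, feedback=0.7):
    out = []
    buf = [0.0] * delay_samples  # circular buffer holding delayed output
    idx = 0
    for x in dry:
        delayed = buf[idx]
        y = x + feedback * delayed
        buf[idx] = y
        idx = (idx + 1) % delay_samples
        out.append(y)
    return out

impulse = [1.0] + [0.0] * 9999
tail = comb_filter(impulse)
print([round(tail[i], 3) for i in (0, 1500, 3000, 4500)])  # echoes decay by the feedback gain
```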

There is an art to engineering and mixing that should be explored to its fullest. What is possible with technology and the manipulation of sound to create new aural landscapes can be stretched to the limits of imagination. All of it, done with the desired effect in mind, can make a strong impression upon an audience.

Overall

As you can see, there is great depth to how human beings experience sound and how that perception gives way to meaning and emotion. The intricacies of how this happens exist mostly subconsciously. And when creating theatre art, whose purpose is to evoke deliberate emotional response, we must understand how to construct sound and music to support the intentions outlined for the production.

We take into account how the body interprets vibration, perceives properties, creates meaning, and responds emotionally to sound and music. The science is the grounding layer for what then becomes the metaphysical. We move from there into the psychology of the individual and the group, and how that influences the perceived effect. Who we are internally and externally defines our response to the sound around us. And the next layer on top of the science is the cultural and anthropological aspect of how we as individuals relate to others in our environments.

We, as sound designers, consider all of this in order to create richness of design, reality of place, illusion of metaphorical ideas, and emotional content. Successful artistry in sound design is both instinctual and researched. And it is based on human reaction.