Thanks to Deborah Skeldon, I was invited to interview in this jam-packed, women-only issue of ‘Behind The Glass’. I talk about my audio career thus far and some projects I’ve been involved with. A link to the full article is at the bottom of the page.
Throughout November 2020, I’ve been taking part in designing a sound by following a list of prompts to spark inspiration, and posting these clips on Twitter and Instagram most days. It has been such a fun challenge, and inspiring to see what other sound designers have been doing with these prompts! Follow me on Twitter @sarahsherlock and @sarahsherlock.studio on Instagram.
‘Bulky’
A bulky robot goes for a walk…
‘Blade’
Brisk and light blade combat sounds in a re-design of FURY
‘Rodent’
Crafting sounds for a little rat animation.
‘Fancy’
A playful and casual sound re-design of mobile game Fancy Cats
‘Teeth’
Shark cage dive sound design.
‘Throw’
Throwing knife and impact sounds.
‘Hope’
A re-design of the sidearm weapon ‘Last Hope’ from Destiny.
‘Disgusting’
A messy sound design session crafting some disgusting sounds, put into context with a scene from Stranger Things.
‘Buddy’
A re-design for an iconic Jurassic Park scene, using sound from my best friend’s stomach!
‘Radio’
For this prompt I created something a little more experimental. A juxtaposition of calm vs chaotic noise using field recordings.
Recently I decided to challenge myself by taking part in an international sound design contest run by the brilliant asoundeffect.com
The challenge was to create audio for a short VFX clip using only (this is where the fun begins!) synthesized and recorded sounds – and the recorded ones had to be recorded at home.
The sounds I recorded were a toilet flush and a can of adhesive/aerosol spraying and popping its cap off. I processed these sounds with effects and stretched them to match the animation and aesthetic of the visuals. The result is what follows in this behind-the-scenes video showing the individual sonic elements:
An Introduction to Virtual Reality, Psychoacoustics and the Demand on Audio
The push for compelling VR experiences is a current trend in the video game industry, and audio is crucial for creating a persuasive VR experience, given the key role that audio cues play in our sense of being present in an actual, physical space. In the real world, humans rely on psychoacoustics to localise sounds in three dimensions.
‘For a long time, game audio has sought to accurately represent real world sound to support the truth or reality of the experience. To do this, games developers need some understanding of the physics of sound in the real world’ (Stevens & Raybould, 2011).
Localisation vs Spatialisation
The key components of localisation are direction and distance from the sound source, but many other factors exist, such as timing, phase, level, impression of loudness, echo density, spaciousness, depth and size, motion and mobility. A sound source’s relation to its sonic environment, and the physical and psychoacoustical characterisation of that space (i.e. are the surfaces “dry”, “hard”, “soft” etc.), is referred to as spatialisation. Spatialisation has two possible meanings: a) the shaping of how a sound source appears in a given space; b) creating the sonic environment in which sound sources reside.
None of these aspects are trivial or one-dimensional, so replicating these sounds in a three-dimensional space needs high-level conceptualisation and controls in order to create a convincing impression of ‘space’. Virtual acoustics is a research field that aims to simulate sound propagation by borrowing notions from physics and graphics. Indeed, to gain more realistic simulations that can carry spatial information, we need to take another look at how sound propagates in an enclosed space and how we can simulate it.
Sound Propagation
Sound emanates from a source and travels through matter in waves. Your ears receive the sound directly (dry) or indirectly (wet) after it has passed through, or bounced off, the various materials it comes into contact with. Typically you receive both of these types of sound, and this gives you important information about the environment you are in (Stevens & Raybould, 2011).
In figure 01 below you can see how a sound changes over time; the direction of sound reflections can be calculated based on the position of the various walls. If the positions of a sound source and a listener are known, ray-based methods allow us to interactively place a sound, both in space and in time. To simulate the distance travelled by a sound, a delay is introduced based on the speed of sound in air.
Figure 01 courtesy of Brian Hamilton, member of the Acoustics and Audio group at the University of Edinburgh. We can see the propagation of a sound source over time using a ray-based method.
The above illustration shows just how complex these reflections can get, and it is only showing paths for one fixed position of the sound source and listener. Now imagine this was a game scenario in VR. Due to player autonomy, the player is going to want to move around the room, so the reflections will be heard from a different path or perspective depending on where the player is in the room. With this in mind, the role of positional and ‘3D’ audio is much bigger in VR. In most 3D games, environmental soundscapes tend to consist of positional 3D sounds that are placed at specific points in the world.
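To make that last idea concrete, the per-path delay in a ray-based model is simply the path length divided by the speed of sound in air (roughly 343 m/s at room temperature). Here is a minimal sketch, using made-up path lengths rather than anything measured from the figure:

```cpp
#include <cstdio>

int main() {
    const double speedOfSound = 343.0;  // metres per second, air at ~20 degrees C

    // Hypothetical path lengths: a direct ray and one wall reflection.
    const double directPath    = 5.0;   // metres, source -> listener
    const double reflectedPath = 12.5;  // metres, source -> wall -> listener

    // Each path is delayed by distance / speed of sound.
    const double directDelayMs    = directPath    / speedOfSound * 1000.0;
    const double reflectedDelayMs = reflectedPath / speedOfSound * 1000.0;

    std::printf("Direct sound arrives after %.2f ms\n", directDelayMs);
    std::printf("Reflection arrives after %.2f ms (%.2f ms later)\n",
                reflectedDelayMs, reflectedDelayMs - directDelayMs);
    return 0;
}
```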
Now that we have established how humans place sounds in the world and, more importantly, how we can fool people into thinking that a sound is coming from a particular point in space, we as sound designers need to examine how our approach to sound design must change to support spatialisation in VR applications. It must also allow for head movement, to ensure that sound sources move realistically relative to the player.
Ambisonics and Head-Related Transfer Functions (HRTFs)
Ambisonics is a method of creating three-dimensional audio playback via a matrix of loudspeakers. It works by reproducing or synthesising a sound field in the way it would be experienced by a listener, with multiple sounds travelling in different directions. The basic approach of Ambisonics is to treat an audio scene as a full 360-degree sphere of sound coming from different directions around a centre point. The centre point is where the microphone is placed while recording, or where the listener’s ‘sweet spot’ is located while playing back.
The most popular Ambisonics format today, widely used in VR and 360 video, is a 4-channel format called Ambisonics B-format, which uses as few as four channels to reproduce a complete sphere of sound.
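To make those four channels a little more concrete, here is a minimal sketch of encoding a mono sample into first-order B-format in the classic FuMa convention (W scaled by 1/√2, with X/Y/Z derived from azimuth and elevation). Channel ordering and normalisation differ between conventions (e.g. AmbiX/SN3D), so treat the constants as illustrative rather than definitive:

```cpp
#include <cmath>
#include <cstdio>

// One first-order B-format sample: W is the omnidirectional component,
// X/Y/Z are the figure-of-eight components (front/back, left/right, up/down).
struct BFormatSample {
    float w, x, y, z;
};

BFormatSample encodeFuMa(float sample, float azimuthRad, float elevationRad) {
    BFormatSample out;
    out.w = sample * 0.70710678f;                                  // 1/sqrt(2) scaling on W
    out.x = sample * std::cos(azimuthRad) * std::cos(elevationRad);
    out.y = sample * std::sin(azimuthRad) * std::cos(elevationRad);
    out.z = sample * std::sin(elevationRad);
    return out;
}

int main() {
    // A source directly to the listener's left (90 degrees azimuth, at ear height).
    const float azimuth   = 90.0f * 3.14159265f / 180.0f;
    const float elevation = 0.0f;
    BFormatSample s = encodeFuMa(1.0f, azimuth, elevation);
    std::printf("W=%.3f X=%.3f Y=%.3f Z=%.3f\n", s.w, s.x, s.y, s.z);
    return 0;
}
```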
In order to replicate sound in 3D, a direction-selective filter can be encoded as a head-related transfer function (HRTF). The HRTF is the cornerstone of most modern 3D sound spatialisation techniques. HRTFs by themselves may not be enough to localise a sound precisely, so we often rely on head motion to assist with localisation. Simply turning our heads changes difficult front/back ambiguity problems into lateral localisation problems that we are better equipped to solve.
Sounds at A and B are indistinguishable from each other based on level or time differences, since these are identical. By turning her head slightly, the listener alters the time and level differences between her ears, helping to disambiguate the location of the sound. D1 is now closer than D2, which cues the listener that the sound is closer to the left, and therefore behind her.
The listener cocks her head, which results in D1 shortening and D2 lengthening. This helps the listener determine that the sound originated above her head instead of below it.
Images courtesy of Oculus.com
HRTFs help us determine the direction to a sound source, but they give relatively sparse cues for determining the distance to a sound. We use a combination of the following factors to determine distance (a rough code sketch of two of them follows the list below):
Loudness
Initial time delay
Ratio of direct sound to reverberant sound
Motion parallax
High-frequency attenuation
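As mentioned above, here is a rough sketch of how two of those distance cues might be approximated in code: inverse-distance loudness falloff plus a simple one-pole low-pass whose cutoff drops with distance, standing in for high-frequency (air absorption) attenuation. The curve shapes and constants are assumptions for illustration, not values from any particular engine:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Inverse-distance gain, clamped so very close sources don't blow up.
float distanceGain(float distanceMetres, float referenceDistance = 1.0f) {
    return referenceDistance / std::max(distanceMetres, referenceDistance);
}

// Crude stand-in for air absorption: cutoff frequency falls with distance.
float cutoffHzForDistance(float distanceMetres) {
    const float nearCutoff = 18000.0f; // Hz, essentially unfiltered up close
    return std::max(1500.0f, nearCutoff / (1.0f + 0.05f * distanceMetres));
}

// One-pole low-pass applied to a block of samples.
void lowPass(std::vector<float>& samples, float cutoffHz, float sampleRate) {
    const float alpha = 1.0f - std::exp(-2.0f * 3.14159265f * cutoffHz / sampleRate);
    float state = 0.0f;
    for (float& s : samples) {
        state += alpha * (s - state);
        s = state;
    }
}

int main() {
    std::vector<float> block(64, 1.0f);  // dummy audio block
    const float distance = 25.0f;        // metres from listener to source

    const float gain = distanceGain(distance);
    lowPass(block, cutoffHzForDistance(distance), 48000.0f);
    for (float& s : block) s *= gain;

    std::printf("gain=%.3f cutoff=%.0f Hz\n", gain, cutoffHzForDistance(distance));
    return 0;
}
```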
The Problem
One of the key functions and effects of sound in games is to immerse us in the virtual world through a sense of sonic envelopment. The term envelopment in relation to spatial sound has been used to describe sometimes overlapping and contradictory ideas. Collins (2013) defines envelopment as the sensation of being inside a physical space (enveloped by that sound), but most commonly this feeling is accomplished through the use of the subwoofer and bass frequencies, which create a physical, tangible presence for sound in a space.
The problem in 3D sound is occlusion – when an object or surface comes between the listener and the sound source.
In an interview about the future of game sound, Ben Minto, senior audio director/sound designer at EA DICE, has also discussed the issues of physics and sound replication in VR games. Minto asks whether we want to be more “correct” in our replication of sound or more decodable (if a conflict exists). Do we always want real-world behaviour? Does real always sound right?
For example: ‘Working in a built-up city I’m still surprised by how often physics gets it “wrong” when a helicopter flies overhead or an ambulance approaches from a distance. All the “conflicting” reflections from the buildings make it really hard for my brain to pinpoint where the sound is coming from, its path and also its direction of travel. Is this something we want to replicate in our games or do we want to bend the rules to make the scenarios more readable?’ (Minto, 2017)
Conclusion
As the virtual worlds of VR expand, there is no doubt that audio, and the appropriate recreation of sound propagation in video games, is in high demand. Despite often being overlooked in favour of more attention-grabbing visuals, audio is an essential component of creating presence in VR. In the quest for increasingly lifelike audio in VR environments, companies such as Oculus have recently pushed out additions to the Oculus Audio SDK that give developers the ability to create more realistic audio by generating real-time reverb and occlusion based on the app’s geometry. Now in beta, the so-called ‘Audio Propagation’ tool arrives in the Oculus Audio SDK 1.34 update and produces more accurate audio propagation with “minimal set up,” the company stated in a recent blog post.
Using HRTFs is an obvious trend, but the disadvantage is that not everybody has the same shaped head and ears, meaning that one set of HRTFs will not provide accurate sound reproduction for all players.
Collins, Karen (2013) Playing With Sound: A Theory of Interacting with Sound and Music in Video Games, The MIT Press, Cambridge, MA
Hartung, Klaus (1999) Comparison of Different Methods for the Interpolation of Head-Related Transfer Functions, AES 16th International Conference, Finland
Scoring music for games relies on many techniques inherited from film scoring, including harmonic, dynamic and rhythmic development, cadences and themes. The main function of music in media is to support the emotion; however, video game music differs significantly from music found in linear media such as film and television. The time it takes to play a game depends on many different factors, including the length of the story, game variability and, most importantly, the experience of the player.
In video games, many contemporary composers use various interactive music techniques to adapt to the player in real time; progression within the music is created by gameplay, resulting in an adaptive and interactive soundtrack. Take, for example, action and ambient tracks, found in nearly every video game genre from puzzle games to shooters. Ambient tracks set the emotional atmosphere during lower-energy gameplay, in which the player is free to explore and engage in safe activities. Action tracks stimulate the excitement level of the player during periods of heightened activity, where the game communicates the intense emotions of the experience. Phillips (2014) describes how ‘action and ambient tracks are designed to enhance two diametrically opposed states of gameplay’; in these terms, one could say that the state of the music depends on the type of gameplay the player is engaged in, and that the music transitions between these states according to predefined interactive variables. That said, more layers mean greater diversity in the adaptive music, allowing for more emotional range moment to moment than just two states (Sweet, 2016).
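One common way of getting more than two discrete states, in the spirit of the layering Sweet describes, is vertical remixing: synced ambient and action stems play together and a game-driven ‘tension’ value crossfades between them. Here is a minimal sketch, where the 0–1 tension parameter and the equal-power curve are assumptions for illustration:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Equal-power crossfade between a synced ambient stem and action stem,
// driven by a 0..1 tension value supplied by the game (e.g. enemy proximity).
struct LayerGains {
    float ambient;
    float action;
};

LayerGains gainsForTension(float tension) {
    const float t = std::clamp(tension, 0.0f, 1.0f);
    const float halfPi = 1.57079633f;
    return { std::cos(t * halfPi),    // ambient fades out as tension rises
             std::sin(t * halfPi) };  // action fades in
}

int main() {
    for (float tension : {0.0f, 0.25f, 0.5f, 0.75f, 1.0f}) {
        LayerGains g = gainsForTension(tension);
        std::printf("tension %.2f -> ambient %.2f, action %.2f\n",
                    tension, g.ambient, g.action);
    }
    return 0;
}
```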
In his book ‘Writing Interactive Music for Video Games’, Michael Sweet also speaks of these variables and how synchronisation in the music is achieved by following changes in emotional context. These changes then affect how the music might play, but since a composer cannot write a customised score for each individual player, he or she may instead write an adaptable score that takes the player’s skill level and pacing into account. There are many other factors in a game that interactive music can take into account, including the player’s health, proximity to enemies, AI character states and the length of the music. In turn, the composer must score these multiple paths with several music cues that are able to transform from one cue to another seamlessly.
One way to produce an adaptive music effect is with phrase branching, i.e. moving from track to track based on logical parameters (if there are more than two enemies, play the combat music; if not, keep looping the calm or ambient music). Intros, transitions and outros are very useful in this scenario.
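Here is a minimal sketch of that branching logic, assuming a hypothetical music player that only switches cues on phrase boundaries; the enemy-count threshold and cue names are just the example above, not any particular middleware’s API:

```cpp
#include <cstdio>
#include <string>

// Phrase branching: a requested cue change is deferred until the current
// musical phrase finishes, so transitions never cut a phrase short.
class PhraseBranchingPlayer {
public:
    void requestCue(const std::string& cue) { pending_ = cue; }

    // Called by the audio engine whenever the current phrase ends.
    void onPhraseEnd() {
        if (!pending_.empty() && pending_ != current_) {
            current_ = pending_;
            std::printf("Phrase boundary: switching to '%s'\n", current_.c_str());
        }
        pending_.clear();
    }

    const std::string& currentCue() const { return current_; }

private:
    std::string current_ = "ambient_theme";
    std::string pending_;
};

int main() {
    PhraseBranchingPlayer music;

    int enemiesInRange = 3;  // game-side logic
    music.requestCue(enemiesInRange > 2 ? "combat_theme" : "ambient_theme");

    music.onPhraseEnd();     // the change happens here, not immediately
    std::printf("Now playing: %s\n", music.currentCue().c_str());
    return 0;
}
```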
Phrase Branching Advantages
Most musical of all the horizontal re-sequencing techniques because it will never interrupt a musical phrase.
Ability to change tempo, harmony, instrumentation or melody in the next phrase based on a game event.
Phrase Branching Disadvantages
Non-immediate musical change, because the switch waits until the end of the current phrase and is therefore dependent on the length of the phrases.
Can be more disruptive to the player in terms of musical changes than vertical remixing.
Infamous Second Son. An Example of Adaptive Music
Matching a game’s score and soundtrack to the player’s actions is, as SCEA’s Senior Music Manager Jonathan Mayer describes, “one of the biggest challenges working in games.” He’s confident Sucker Punch pulled it off. Infamous: Second Son (2014) is an open-world action game, set in Seattle, featuring central character Delsin Rowe, who has “attitude”. These facts inform everything that follows. Open world means lots of time spent exploring, so the soundtrack has to be large and varied. The location is also something to consider: in this case the setting is Seattle, which conjures a certain musical style that they didn’t want the music to sound like; however, they did want to incorporate that influence into the score.
Within the industry, these interactive scores tend to require a team of many people working together to integrate the systems, rather than the composer working alone. The audio team at Sucker Punch worked directly with the composers and programmers to fine-tune their adaptive music system, so that they had granular control over the music’s tension based on the enemies’ awareness of the player. To pull this off they had to play the game as much as possible and really understand what their users would experience. ‘The end result was that having such an integrated music team made the music much more impactful throughout the game’ (Mayer).
Below is an example of a phrase-branching adaptive music approach based on contact with enemies. As the player explores the city, a low-intensity, sparse guitar theme can be heard. When the player begins combat at 4:10:53, this theme is reintroduced in full force, with more prominent guitar parts, melodic and rhythmic development, and heavier drums and bass, making the experience more intense. Phrase branching is a horizontal re-sequencing technique which waits for the current musical phrase to end before playing the next musical cue (Sweet, 2016).
An example of adaptive music reacting to player health can be heard in the example below. When the player reaches critical health, the sound effects fade out and the music continues in the background, but this time an ambiguous, sustained vocal drone is incorporated into the music until the player either escapes or, in this case, dies, and a musical stinger plays to end the transition until the player respawns.
LA Noire, Interactive Music used for Ludic Functions
Ludic music is music that is somehow part of a game’s rules, as opposed to just its narrative. Ludic music in games is typically congruent with the action and can heighten feelings of mastery by providing emotional rewards for player achievements (Stevens and Raybould, 2014).
In Rockstar’s detective game LA Noire (2011), the music provides important feedback to the player based on gameplay. In the example below, the investigation music theme plays throughout while the player searches a crime scene. When the player comes into contact with a clue, a short piano motif plays. Upon collecting all of the clues another motif is heard, this time with the notes rising in pitch, providing positive feedback and a reward for finding all the clues. Once the music stops completely, this informs the player that there are no more clues left to find in the area.
Another example from LA Noire is the music providing feedback on the player’s right or wrong choices in the interrogation scenes. A sustained drone underscores the interrogations, and when the player is prompted to decide whether the suspect is lying, telling the truth or doubtful, a different musical phrase is heard depending on whether the choice is correct or wrong.
Choosing the correct answer triggers a major phrase, associated with positive feedback
Choosing the wrong answer triggers a minor 7th, associated with negative feedback
Play Experience and Repetition Problem
The play experience in games is significantly longer than the experience of other linear media, and as a result players don’t usually finish games in one sitting. This has direct implications for the music. The length of play and the enjoyment level of the player also determine a game’s replayability. If a player plays through a game multiple times, how does that affect the impact and function of the music on a subconscious level? As the primary function of music in video games is to create tension, the resolution of that tension amplifies the player’s euphoria when completing a goal. Hypothetically speaking, suppose a composer scores a scary cue for a horror game to underscore the player engaging a new enemy. During the first encounter the player is tense and on edge, because their subconscious mind momentarily struggles to categorise this new, uneasy, dissonant sound. If the music and the situation are both the same the second time around, the impact is diminished. Before long, the subconscious mind makes a connection between the music and that event and filters out the music, because the information no longer carries meaning.
It may never be practical or desirable for a video game score to provide completely new music for every single moment in the game. Composers should remember this rule and work with developers to push the boundaries of technology to allow for music that feels less repetitive.
References
Campbell, C. (2014) Behind the Music of Infamous: Second Son, Polygon
GDC Vault: Adaptive Music, The Secret Lies Within the Music Itself
Infamous: Second Son (2014) Sucker Punch / Sony
LA Noire (2011) Rockstar Games
Martin, G. (2014) The Seattle Sound of Infamous: Second Son, Paste Magazine
Phillips, W. (2014) A Composer’s Guide to Game Music, MIT Press
Stevens, R. and Raybould, D. (2014) Designing a Game for Music, in The Oxford Handbook of Interactive Audio
Sweet, M. (2015) Writing Interactive Music for Video Games, Pearson
One of music’s most basic functions in film is to convey emotion to the audience. This blog post attempts to break down a scene and how we perceive the music’s method within it, considering what the music evokes in, or communicates to, us. In the book ‘Hearing Film’, Kasabian states that “Music draws filmgoers into a film’s world measure by measure” and argues that music in film is “at least as significant as the visual and narrative components that have dominated film studies”.
The following clip from Marvel’s Captain America: The Winter Soldier (2014) is an example of redundant music, one of several music types used in film as defined by Stam, Burgoyne and Flitterman-Lewis (1992):
Redundant music, which reinforces the emotional tone
Contrapuntal Music, which runs counter to the dominant emotion
Empathetic Music, which conveys emotions of the characters
A-empathetic music, which seems indifferent to the drama
Didactic contrapuntal music, which uses music to distance the audience, in order to elicit a precise idea in the spectator’s mind
The music accompanying this scene has a strong emotional force in support of the images and narrative, which see Steve Rogers visiting the Captain America museum and remembering his past. The story of Captain America is one of great heroism, but also loss and sacrifice, and the score here reinforces that emotional tone.
Breaking Down The Scene
This piece is written in the key of C major, modulating to its relative minor, A minor. According to Schubart’s Affective Key Characteristics, the key of C is considered as having qualities ‘completely pure. Its character is: innocence, simplicity, naivety, children’s talk’, and the key of A minor elicits a ‘tenderness of character’.
The scene starts and the cue begins with sustained high violin lines and timpani rolls, creating an ambiguous feeling for the listener. Rona (2000) refers to beginning cues as “making a subtle entrance. If done right, the audience won’t notice the cue starting or stopping, but they will get the music’s full impact when needed”. As the camera pans away from the warships we hear the first variation of the leitmotif, played by the oboes and flute.
As the camera pans across the Captain America imagery and the museum voiceover begins at 0.27, “A symbol to the Nation”, we hear the leitmotif on the French horns, now with a variation consisting of larger intervals, rising a fifth to the octave and back again, with a military snare drum answering the French horns. Both instruments are associated with the military and war.
The camera focuses on a shadowed figure from behind, and it isn’t until we see that it is Steve hiding his identity at 0.43, walking in front of the large Captain America image, that we hear the leitmotif in full, with harmony on the brass and strings, as his face is revealed.
A cluster on the woodwinds sounds at 0.56 as the camera focuses on the children. High-register woodwinds suggest an innocent, childlike quality. The woodwinds here provide a bright, playful colour, drawing the viewer’s attention to the children in the museum.
At 1.00 there is an important moment. Among the children mentioned above, a young boy in a Captain America shirt recognises Steve, and this is where we first hear a delicate solo violin motif. Steve smiles at the boy and holds his finger to his lips; the boy nods in response. The function of the solo violin here could be read as acknowledging the unspoken bond between this boy and Captain America.
The music takes on more of an underscore approach in the scenes following on from 1.10, more than likely a result of scoring under the narration dialogue.
The leitmotif returns again, this time on the trumpet, when the camera turns to Steve’s childhood friend and war hero ‘Bucky Barnes’ at 1.29, now with a slight variation in the timing: the first two notes feel shorter and the last note is held for longer, with strings sustaining as a texture underneath, creating a sense of anticipation and/or sadness.
At 1.45 another theme variation is played, not so much the leitmotif as a transition into the next scene, which cuts to Steve watching the interview with his old love. The music then returns to underscore to close the cue.
Listen to ‘The Smithsonian’ from the Captain America: The Winter Soldier OST, by Henry Jackman.
References
Captain America: The Winter Soldier (2014) Marvel
Davis, Richard (2010) Complete Guide to Film Scoring (Berklee Guide), Berklee Press, Boston, MA
Green, Jessica (2010) Understanding the Score: Film Music Communicating to and Influencing the Audience, The Journal of Aesthetic Education, Illinois
Kasabian, Anahid (2001) Hearing Film: Tracking Identifications in Contemporary Hollywood Film Music, New York
Stam, Robert, Burgoyne, Robert and Flitterman-Lewis, Sandy (1992) New Vocabularies in Film Semiotics, New York
This blog post discusses ‘Procedural Audio’ and the use of ‘Procedural Sound Design’ in video games.
‘Procedural audio’ is a term first popularised by Andy Farnell, where the sounds produced are the result of a process, i.e. using a system to synthesise the notes themselves, for example using software such as Max to generate purely digital tones. This type of procedural audio does not require pre-recorded sound effects and allows the sound designer to use the audio engine to randomise variables such as pitch, wave type, note length, vibrato, distortion and various filters.
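As a toy illustration of that purely synthesised approach, the sketch below generates a short tone whose pitch, length and waveform are randomised on every call. A real system (Max, Pure Data or a game’s DSP layer) would add envelopes, filters and far more control, so this is an assumed, minimal example rather than anything Farnell-specific:

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Generate one randomised "note": random pitch, duration and waveform.
std::vector<float> proceduralNote(std::mt19937& rng, float sampleRate = 48000.0f) {
    std::uniform_real_distribution<float> pitchHz(110.0f, 880.0f);
    std::uniform_real_distribution<float> lengthSec(0.1f, 0.5f);
    std::uniform_int_distribution<int>    waveform(0, 1);  // 0 = sine, 1 = square

    const float freq = pitchHz(rng);
    const int   wave = waveform(rng);
    const int   numSamples = static_cast<int>(lengthSec(rng) * sampleRate);

    std::vector<float> out(numSamples);
    for (int i = 0; i < numSamples; ++i) {
        const float phase = 2.0f * 3.14159265f * freq * i / sampleRate;
        const float s = std::sin(phase);
        out[i] = (wave == 0) ? s : (s >= 0.0f ? 1.0f : -1.0f);  // naive square
    }
    std::printf("Generated %d samples at %.1f Hz (%s)\n",
                numSamples, freq, wave == 0 ? "sine" : "square");
    return out;
}

int main() {
    std::mt19937 rng(std::random_device{}());
    proceduralNote(rng);  // each call yields a different tone
    return 0;
}
```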
In relation to game audio, the term procedural sound design (or PSD) is used to describe a system in which the sounds triggered are comprised of multiple elements and layers. R. Stevens refers to procedural sound design as a system, an algorithm, or a procedure that re-arranges, combines, or manipulates sound assets so they might:
a. Produce a greater variety of outcomes
b. Be more responsive to interaction
When we think of a typical sound effect that needs variation, such as footsteps on different terrain, this can be easily implemented using a random selection of sound files triggered when the player walks. However, more environmental sounds that react within the sonic world of a game can be broken down into their core elements and layered at random. Much as when sounds are created using synthesis, if we consider the ADSR (attack, decay, sustain, release) of a particular sound, such as an explosion comprising an initial crack followed by a boom and a reverberant tail, then by layering these sounds and setting up a system that randomises playback we create a much more realistic and immersive game experience.
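Here is a minimal sketch of that layered approach: each explosion picks one crack, one boom and one tail from small pools and applies small random pitch and gain offsets, so no two triggers sound identical. The file names and ranges are invented for illustration:

```cpp
#include <cstdio>
#include <random>
#include <string>
#include <vector>

// One playback instruction the audio engine would act on.
struct Playback {
    std::string asset;
    float pitchShiftSemitones;
    float gainDb;
};

// Pick one asset per layer (crack / boom / tail) and randomise pitch and gain.
std::vector<Playback> triggerExplosion(std::mt19937& rng) {
    const std::vector<std::vector<std::string>> layers = {
        {"crack_01.wav", "crack_02.wav", "crack_03.wav"},
        {"boom_01.wav",  "boom_02.wav"},
        {"tail_01.wav",  "tail_02.wav",  "tail_03.wav"}
    };
    std::uniform_real_distribution<float> pitch(-2.0f, 2.0f);  // semitones
    std::uniform_real_distribution<float> gain(-3.0f, 0.0f);   // dB

    std::vector<Playback> result;
    for (const auto& pool : layers) {
        std::uniform_int_distribution<size_t> pick(0, pool.size() - 1);
        result.push_back({pool[pick(rng)], pitch(rng), gain(rng)});
    }
    return result;
}

int main() {
    std::mt19937 rng(std::random_device{}());
    for (const Playback& p : triggerExplosion(rng)) {
        std::printf("%s  pitch %+.2f st  gain %+.1f dB\n",
                    p.asset.c_str(), p.pitchShiftSemitones, p.gainDb);
    }
    return 0;
}
```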
Speaking about procedural sound and his work on No Man’s Sky, Paul Weir defines PSD as involving ‘real-time synthesis that is live and interactive, controlled by data coming back from the game systems. According to the suggested definition above, this only makes sense if it solves a problem for you that would otherwise be difficult to resolve using conventional sound design’.
So, under these definitions, as soon as you set up any kind of system of playback you could see it as procedural audio. In video game sound design, controlling the repetition, randomisation and frequency of sounds is crucial to the quality of game audio, and aims to combat ‘listener fatigue’, a phenomenon that occurs after prolonged exposure to an auditory stimulus.
Examples of Procedural Sound Design in Games
No Man’s Sky (2016, Hello Games) is a game built on many random variables in its generative and procedural approach to game design. Sound designer Paul Weir created a system called Vocalien, a real-time synthesis plug-in inside the game, which results in an always-different voice for each creature and doesn’t impact the memory of the game. Weir states that the advantage of this approach is that you can relatively easily construct an expandable infrastructure into which you can add layers of sound design that respond to the game’s state or environment.
Paul Weir demonstrates Vocalien on an iPad
Creature sounds in No Mans Sky
Heavenly Sword (2007, Sony) is an example of a game where a procedural approach to sound design would have saved memory: individual sword whooshes and dialogue sounds were recorded with different reverb lengths baked in. Now we can create systems to trigger the sounds, so we don’t require so many assets.
GTA: V (2013, Rockstar Games) Footsteps, Gunshots, Impacts
Conclusion
In most games we are sensitive to repetitions that don’t occur in real life. The obvious answer is to have loads and loads of sounds, but it would be far too much work to create 30-40 versions of every sound in the game. The procedural approach to sound design in games is effective for the following reasons: the memory constraints of the game; when there is too much content to create and variations of the same sounds are needed; and when the sound changes depending on the game context (environment).
Some procedural sound design models can be implemented very easily:
Footsteps
Impacts
Air / Water / Wind
Here is an example of a computer sound made of individual one-shots that loops within itself at random every time it is triggered in the game. Creating a loopable sound system in Unreal Engine 4 gives the sound designer a lot of advantages compared to static sounds, repeatable loops or large files that use massive amounts of in-game memory. Systems like these can add to the soundscape of an in-game environment that feels dynamic and believable, helping the player achieve complete immersion.
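For reference, here is a minimal, engine-agnostic sketch of the selection logic behind such a system: a small pool of one-shots is chosen at random, never repeating the previous pick, with a randomised gap before the next trigger. In UE4 this would typically be built with a sound cue rather than code like this, and the clip names and timings below are assumptions:

```cpp
#include <cstdio>
#include <random>
#include <string>
#include <vector>

// Picks one-shots at random, avoiding back-to-back repeats, and suggests a
// randomised delay before the next trigger - the core of a "loopable" system
// built from short clips rather than one long baked loop.
class RandomOneShotLooper {
public:
    explicit RandomOneShotLooper(std::vector<std::string> clips)
        : clips_(std::move(clips)), rng_(std::random_device{}()) {}

    std::string next() {
        std::uniform_int_distribution<size_t> pick(0, clips_.size() - 1);
        size_t index = pick(rng_);
        while (clips_.size() > 1 && index == lastIndex_) index = pick(rng_);
        lastIndex_ = index;
        return clips_[index];
    }

    float nextDelaySeconds() {
        std::uniform_real_distribution<float> gap(0.5f, 2.0f);
        return gap(rng_);
    }

private:
    std::vector<std::string> clips_;
    std::mt19937 rng_;
    size_t lastIndex_ = static_cast<size_t>(-1);
};

int main() {
    RandomOneShotLooper computer({"beep_01.wav", "beep_02.wav", "hum_01.wav", "click_01.wav"});
    for (int i = 0; i < 5; ++i)
        std::printf("play %s, wait %.2f s\n", computer.next().c_str(), computer.nextDelaySeconds());
    return 0;
}
```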
Naughty Dog’s 2013 hit game The Last of Us tells a tale of horror, lost humanity and lingering hope. The game is critically acclaimed for its compelling narrative and is a perfect demonstration of immersion in effect. Immersion is a term used to describe a deep mental involvement in something; the Oxford English Dictionary defines ‘immerse’ as to immerse ‘someone or something in liquid’, originating in the late 15th century from the late Latin immersio(n-), from immergere, ‘dip into’. Phillips speaks about immersion in relation to video games as occurring ‘when the gamer loses consciousness of the methods of perception and interaction in the game’.
In The Last of Us, composer Gustavo Santaolalla created a minimalist score whose dissonance and resonance tell an emotional story at the heart of the game. It is a beautiful, haunting sonic landscape that makes the game’s themes more emotionally involving and emphasises how Joel, Ellie and all the characters in the game perceive the world around them. Critical reception to the soundtrack was positive; reviewers felt that the music connected appropriately with the gameplay and acted as the perfect companion (Kerr, 2014, soundtrack review).
Instrumentation
Instrumentation choices in video game music are vital in setting the right mood and feel to enhance the gaming experience, allowing total immersion to occur. Phillips states that “a different instrumental arrangement can alter a theme in ways ranging from subtle to radical. Changing the instrumentation is the simplest of all thematic alterations but can be highly effective”.
The Last of Us is a dystopian horror game at its core, and Gustavo composes music using conventional instruments in unconventional ways. He tunes an electric guitar down an entire major third (the lowest string becomes a C instead of an E). That guitar also has a resonator, much like you’d expect on a Dobro guitar. The resonator combined with the loose strings creates an unusually dark and vibrant sound, especially when he bows it with a violin bow to produce unique sounds. He also recorded in various rooms, including a bathroom and a kitchen, to capture the acoustic detail of the guitars.
The use of the Leitmotif for character development
A leitmotif (from German, meaning ‘leading motive’) is a short musical phrase that is repeated throughout a musical composition and, according to Phillips and Davis, is a ‘musical theme that accompanies a specific element in the dramatic works in which it appears. This may be a character, location, or a unique situation in the plot’. Phillips also quotes a research paper from Mediated Perspectives: Journal of the New Media, in which Dr. Joseph Defazio (2006) writes, “Composers in the game genre demonstrate the effective use of the leitmotif in their work”. The Last of Us is extremely effective in how the leitmotif is used to reinforce and remind the player of crucial concepts throughout the story’s development. The opening main menu theme consists of a three-note leitmotif that sets the overall tone of the game. The theme itself is sparse, played on high strings with little rhythmic development at this point, acting as a sonic soundscape for the journey the player is about to undertake. See Last of Us Main Menu Theme.
This opening main menu leitmotif is heard at vital parts of the game as the story progresses. During the prologue mission, the theme is first heard within the game in a cutscene depicting the death of Joel’s daughter, Sarah. This is a cornerstone moment for the main character, Joel. See Last of Us Sarah’s Death Theme.
The leitmotif keeps returning, sometimes as a subtle element, at other times with more thematic development. The “Sarah” theme is reused as Ellie and Joel’s relationship develops and he becomes a father figure to her. The leitmotif, now with added string parts making it a fuller and more developed theme, plays a central arc in the story and is heard nearer the ending of the game. See Last of Us All Gone.
Extra: In 2015, during her undergraduate studies at Pulse College (CMPG, Certificate in Music Production for Games), the author of this blog was challenged to rescore a scene from The Last of Us. The technique includes using the instrumentation from the original game as a pastiche, but also scoring action scenes with some slight mickey-mousing, which refers to a style of scoring where the music mimics every action, as in the early Mickey Mouse cartoons (Davis, 2010). The aim was to score the scene while also being mindful of composing music under the dialogue. See: Last of Us scene rescored by Sarah Sherlock.
References
Davis, R. (2010) Complete Guide to Film Scoring, Berklee Press, Boston, MA
Kerr, Chris (2014) The Last of Us: Soundtrack Review. Available online at http://www.sideone.co.uk
My name is Sarah Sherlock, I am a Musician, Composer & Sound Designer from Dublin, Ireland.
My background as a musician is as a guitarist. From an early age I picked up the guitar and found I had an ear for music. I played in many bands growing up, in pop-rock and punk bands. I also taught myself to play bass, keys and some casual drumming! As the years went by I decided to take my guitar playing more seriously and have since been undergoing grade examinations in electric guitar technique & performance awarded by the London College of Music. My current award is Grade 5.
Throughout my whole life I have always been interested in music, film & video games, and as I started to experiment with producing and recording my own music I found my passion in composing music for visual media.
I studied for 3 years at Pulse College, located in the famous Windmill Lane Recording Studio in Dublin. I started on the Music Production for Games programme, where I studied sound design, music and image, composing, music technology and interactive storytelling. In the sound design module I did some game audio implementation using Unity and the audio middleware FMOD. This was a one-year higher certificate course, which I passed with Distinction.
Upon finishing this course I still had the desire to continue learning and was transferred into year two of their Music Production degree programme. Alongside studio and live recording, this programme developed skills in critical listening and audio analysis, investigations of form and aural training, and music theory.
As of 2017 I have graduated with a second-class honours Music Production degree. My final year dissertation project was an exploration of horror film composers, for which I scored the 1931 Frankenstein, and I was lucky to go on to present this project at Horror Expo Ireland in October 2017.
During my studies, as my confidence started to grow, I began doing some freelance work to build up my portfolio. Some of the projects I have been lucky to be involved with include scoring a short horror film and a web series, composing music for a podcast, and creating sound design and assisting VO recording sessions for a mobile VR game.
I love anything creative, and bringing a project to life with music & sound is why I love what I do! My goals going forward are to fine-tune my “unique sound” as a composer and to learn the audio implementation skills that are required in the game audio industry.