Matthew James Part


Abstract

The use of spatialized sound design in Virtual Reality projects is contributing to a new sonic lexicon. The evolution of the perceived sound-field in screen-based media presents opportunities for alternative approaches to environmental presence and narrative engagement for audiences. Cinematic VR (CVR) projects can experiment with and develop the relationships between the sound stages of dialogue, ambiences, effects and music in ways not previously possible in traditional film, resulting in a different approach to the handling of diegesis within their sonic worlds. This can be seen in films such as “Under the Canopy” (Conservation International, 2017), a documentary exploring the Amazon rainforest that uses localization techniques more commonly found in videogames, together with an “acousmêtre” (Chion, 1999), a disembodied voice, to direct the audience’s attention. As our media landscape evolves, so too do the techniques and terminology we use to express notions within it. As filmmakers begin to explore this new narrative direction, a framework needs to be developed for all areas of the production workflow, and the interplay between the technological elements of the process must be considered from the beginning of the production timeline, as not all narratives will adapt as freely to the medium. This research focuses on the tools and techniques that can be garnered from these new platforms, addressing the auditory language being developed by sound designers for media in three-dimensional space and how this language could be translated into other formats. Using examples from both VR and non-VR films, this paper aims to decipher the emerging dialect and how filmmakers are using this language to engage an audience in the world and narrative of a VR film.