NIME 2018 Workshop, June 3rd, 2018 at Virginia Tech, Blacksburg, VA
We are rapidly approaching the mass adoption of virtual reality as a consumer-oriented entertainment medium. Recent breakthroughs in low-persistence displays, combined with modern head-tracking techniques, have enabled the design of head-mounted systems that offer plausible virtual experiences. The role of sound in establishing a convincing sense of immersion in such experiences has been acknowledged since the early days of VR. Morton Heilig’s 1960 patent for a stereoscopic television apparatus, one of the earliest concept designs for a VR system, describes the use of earphones and binaural sound. Ivan Sutherland, who designed what is considered to be the first head-mounted display, wrote in 1965 that despite the availability of excellent audio output devices, computers were not yet capable of producing meaningful sounds that could be integrated into “the ultimate display.” Since then, significant advances in digital computing have facilitated the development of real-time audio synthesis and spatialization techniques. Modern consumer-grade computers are capable of executing complex sound-field synthesis algorithms suitable for VR applications.
Leading technology companies are becoming increasingly invested in VR audio, with new products, such as Facebook’s Spatial Workstation and Google’s Resonance Audio, that facilitate the use of Ambisonics and binaural audio on popular VR platforms. However, most modern VR systems assign audio an ancillary role, in which sound serves to amplify visual immersion. It is up to sound and music researchers to elaborate ways in which we can think natively about VR audio, and to define the role of sound in the ultimate displays of the future. This workshop aims to investigate the concept of an “audio-first VR” as a medium for musical expression, and to identify multimodal experiences that focus on the auditory aspects of immersion rather than the ocular ones. Through a day-long workshop involving presentations and demo sessions followed by round-table discussions, the participants will collectively address such questions as: What is a VR experience that originates from sonic phenomena? How do we act and interact within such experiences? What tools do we have, or need, to create these experiences? What are the implications of an audio-first VR for the future of musical expression? As a result of the workshop, the participants will collaboratively outline a position on what constitutes an audio-first VR from a NIME perspective.
We seek abstract submissions that focus on the following topics, among others:
09:00 - 09:20 | Welcome and Introductions | Anıl Çamcı and Rob Hamilton
09:20 - 09:40 | ECOSONICO: Augmenting Sound and Defining Soundscapes in a Local Interactive Space | José M. Mondragón, Adalberto Blanco, and Francisco Rivas
09:40 - 10:00 | Sonic Thinking in VR: Incorporating Sound into S.T.E.A.M Curriculum and Data-Driven Installations | Melissa F. Clarke and Margaret Schedel
10:00 - 10:20 | On Standardization, Reproducibility, and Other Demons (of VR) | Davide Andrea Mauro
10:20 - 10:40 | Chunity for Audio-first VR | Jack Atherton and Ge Wang
10:40 - 11:00 | Sonic Cyborg Feminist Futures in Extended Realities | Rachel Rome
11:00 - 11:20 | Adapting 3D Selection and Manipulation Techniques for Immersive Musical Interaction | Florent Berthaut
11:20 - 11:40 | What Postmodal Processes Can Teach Us about Existing Mediums | Josh Simmons
11:40 - 12:00 | Enhanced Virtual Reality (EVR) for Live Concert Experiences | Chase Mitchusson and Edgar Berdahl
12:00 - 13:00 | Lunch Break
13:00 - 14:00 | Demo Sessions
14:00 - 16:00 | Group Discussion: Outlining a Position on Audio-first VR
Rob Hamilton is a composer and researcher who explores the converging spaces between sound, music, and interaction. His creative practice includes mixed-reality performance works built within fully rendered, networked game environments, procedural music engines, and mobile musical ecosystems. His research focuses on the cognitive implications of sonified musical gesture and motion, and on the role of perceived space in the creation and enjoyment of sound and music. Dr. Hamilton received his PhD from Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) and currently serves as an Assistant Professor of Music and Media at Rensselaer Polytechnic Institute. [http://robhamilton.io]