Audio-first VR
Imagining the Future of Virtual Environments for Musical Expression

NIME 2018 Workshop, June 3rd, 2018 at Virginia Tech, Blacksburg, VA

We are rapidly approaching the mass adoption of virtual reality as a consumer-oriented entertainment medium. Recent breakthroughs in low-persistence displays, combined with modern head-tracking techniques, have enabled the design of head-mounted systems that offer plausible virtual experiences. The role of sound in establishing a convincing sense of immersion in such experiences has been acknowledged since the early days of VR. Morton Heilig’s 1960 patent for a stereoscopic television apparatus, one of the earliest concept designs for a VR system, describes the use of earphones and binaural sound. Ivan Sutherland, who designed what is considered to be the first head-mounted display, wrote in 1965 that despite the availability of excellent audio output devices, computers were not yet capable of producing meaningful sounds that could be integrated into “the ultimate display.” Since then, significant advances in digital computing have facilitated the development of real-time audio synthesis and spatialization techniques. Modern consumer-grade computers can execute complex sound-field synthesis algorithms suitable for VR applications.

Leading technology companies are becoming increasingly invested in VR audio, with new products, such as Facebook’s Spatial Workstation and Google’s Resonance Audio, that facilitate the use of Ambisonics and binaural audio with popular VR platforms. However, most modern VR systems assign audio an ancillary role, in which sounds serve to amplify visual immersion. It is up to sound and music researchers to elaborate ways in which we can think natively about VR audio, and to define the role of sound in the ultimate displays of the future. This workshop aims to investigate the concept of an “audio-first VR” as a medium for musical expression, and to identify multimodal experiences that focus on the auditory aspects of immersion rather than the visual ones. Through a day-long workshop of presentations and demo sessions followed by round-table discussions, the participants will collectively address such questions as: What is a VR experience that originates from sonic phenomena? How do we act and interact within such experiences? What tools do we have, or need, to create these experiences? What are the implications of an audio-first VR for the future of musical expression? As a result of the workshop, the participants will collaboratively outline a position on what constitutes an audio-first VR from a NIME perspective.

We seek abstract submissions on the following topics, among others:

  • Sonic virtual realities
  • Immersive sonification
  • Virtual interfaces for musical expression (VIMEs)
  • Creativity support tools for VR audio
  • Visualizing an audio-first VR
  • Spatial audio techniques for VR
  • Embodied interactions within virtual audio systems
  • Applications of computer music theory to VR
  • Composing music for VR games and films
  • Sonic VR as assistive technology
  • Histories of sound in virtual environments

Abstracts of approximately 350 words should be sent as a PDF file by May 6th to acamci@umich.edu with "NIME Workshop on Audio-first VR" in the subject line. The accepted abstracts and the proceedings of the workshop will be published on the workshop website at audio1stVR.github.io. The number of presenting participants will be limited to 10; however, attendance will be open to non-presenting participants within space limits. If you are already presenting a relevant paper or artwork at NIME, you are welcome to submit an abstract related to that work in order to participate in the workshop.
Workshop Schedule
09:00 - 09:20 Welcome and Introductions
Anıl Çamcı and Rob Hamilton
09:20 - 09:40 ECOSONICO: Augmenting Sound and Defining Soundscapes in a Local Interactive Space
José M. Mondragón, Adalberto Blanco, and Francisco Rivas
09:40 - 10:00 Sonic Thinking in VR: Incorporating Sound into S.T.E.A.M Curriculum and Data-Driven Installations
Melissa F. Clarke and Margaret Schedel
10:00 - 10:20 On Standardization, Reproducibility, and Other Demons (of VR)
Davide Andrea Mauro
10:20 - 10:40 Chunity for Audio-first VR
Jack Atherton and Ge Wang
10:40 - 11:00 Sonic Cyborg Feminist Futures in Extended Realities
Rachel Rome
11:00 - 11:20 Adapting 3D Selection and Manipulation Techniques for Immersive Musical Interaction
Florent Berthaut
11:20 - 11:40 What Postmodal Processes Can Teach Us about Existing Mediums
Josh Simmons
11:40 - 12:00 Enhanced Virtual Reality (EVR) for Live Concert Experiences
Chase Mitchusson and Edgar Berdahl
12:00 - 13:00 Lunch Break
13:00 - 14:00 Demo Sessions
14:00 - 16:00 Group Discussion; Outlining a Position on Audio-first VR
Workshop Leaders
Anıl Çamcı is an Assistant Professor of Performing Arts Technology at the University of Michigan. His work investigates new tools and theories for multimodal worldmaking using a variety of media, ranging from electronic music to virtual reality. Previously, he worked at the University of Illinois at Chicago, where he led research projects on human-computer interaction and immersive audio in virtual reality contexts, and at Istanbul Technical University, where he founded the Sonic Arts Program. He completed his PhD at Leiden University in affiliation with the Institute of Sonology in The Hague and the Industrial Design Department at Delft University of Technology. Çamcı’s research has been featured in leading journals and conferences throughout the world, and he has received several awards, including the Audio Engineering Society Fellowship and the ACM CHI Artist Grant. [http://anilcamci.com]

Rob Hamilton is a composer and researcher who explores the converging spaces between sound, music, and interaction. His creative practice includes mixed-reality performance works built within fully rendered, networked game environments, procedural music engines, and mobile musical ecosystems. His research focuses on the cognitive implications of sonified musical gesture and motion, and on the role of perceived space in the creation and enjoyment of sound and music. Dr. Hamilton received his PhD from Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) and currently serves as an Assistant Professor of Music and Media at Rensselaer Polytechnic Institute. [http://robhamilton.io]