Abstract
This paper presents a case study of the production of a virtual reality experience with object-based spatial audio rendering, using audio post-production tools and workflows. An object-based production was created in a common digital audio workstation, with real-time dynamic binaural sound rendering and visual monitoring of the scene on a head-mounted display. The Audio Definition Model is a standardised metadata model for representing audio content, including object-based, channel-based and scene-based spatial audio. Using the Audio Definition Model, the object-based audio mix could be exported to a single WAV file. Plug-ins were built for the game engine in which the virtual reality application and graphics were authored, allowing import of the object-based audio mix and custom dynamic binaural rendering.
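As context for the export step described above: Audio Definition Model metadata is serialised as XML and, in a BW64/RIFF file, is typically carried in an `axml` chunk alongside the audio samples. The following is a minimal illustrative sketch (the function names are our own, and it assumes a standard little-endian RIFF/BW64 chunk layout) of locating and extracting that XML in Python:

```python
import struct

def list_riff_chunks(data: bytes):
    """Return (chunk_id, size, payload_offset) for each top-level chunk."""
    assert data[0:4] in (b"RIFF", b"BW64"), "not a RIFF/BW64 file"
    chunks = []
    pos = 12  # skip the 12-byte header: container id, size, 'WAVE' form type
    while pos + 8 <= len(data):
        cid = data[pos:pos + 4]
        size = struct.unpack("<I", data[pos + 4:pos + 8])[0]
        chunks.append((cid.decode("ascii"), size, pos + 8))
        pos += 8 + size + (size & 1)  # chunk payloads are word-aligned
    return chunks

def extract_adm_xml(data: bytes):
    """Return the ADM XML carried in the 'axml' chunk, or None if absent."""
    for cid, size, offset in list_riff_chunks(data):
        if cid == "axml":
            return data[offset:offset + size].decode("utf-8")
    return None
```

A real implementation would also handle 64-bit chunk sizes (the `ds64` mechanism that lets BW64 files exceed 4 GB), which this sketch omits for brevity.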
This paper was originally presented at the Audio Engineering Society Conference on Audio for Virtual and Augmented Reality, Sept 30–Oct 1 2016 in Los Angeles, CA, USA and is also available from the AES’s electronic library at http://www.aes.org/e-lib/browse.cfm?elib=18498
White Paper copyright
© BBC. All rights reserved. Except as provided below, no part of a White Paper may be reproduced in any material form (including photocopying or storing it in any medium by electronic means) without the prior written permission of BBC Research except in accordance with the provisions of the (UK) Copyright, Designs and Patents Act 1988.
The BBC grants permission to individuals and organisations to make copies of any White Paper as a complete document (including the copyright notice) for their own internal use. No copies may be published, distributed or made available to third parties whether by paper, electronic or other means without the BBC's prior written permission.