BBC Research & Development

Posted by Jon Francombe, Kristian Hentschel

Just over a year ago, the Research & Development audio team, in collaboration with external partners, launched The Vostok-K Incident, a short science-fiction audio drama. The piece works on its own as a stereo production but also lets users connect personal devices as additional loudspeakers. Parts of the sound scene (as well as additional content) are automatically routed to these extra devices. By doing this, we can flexibly create immersive spatial audio experiences without being "locked in" to particular loudspeaker layouts or technology providers.

We'd been investigating this idea of device orchestration for a while, publishing early research and exploring the user experience in a demo system. We decided to commission and create The Vostok-K Incident to develop the production process and also see how the idea of using all available devices played out in the real world.

Last year, we summarised the production process for The Vostok-K Incident. Since then, we've published three papers describing the production, delivery, and evaluation in more detail. These are now all freely available as BBC R&D White Papers; below, we summarise each paper and give a brief update on our next steps. Additionally, two more papers on research related to device orchestration (output from a PhD project that we support) have been made available.

Producing audio drama content for an array of orchestrated personal devices

BBC R&D White Paper 359

In October 2018, Jon presented an engineering brief at the Audio Engineering Society (AES) Convention in New York. Audio content that can adapt to an unknown number of connected devices spread out in unknown positions in the room is a significant departure from the norm of assuming a standard layout of stereo loudspeakers. Consequently, the production process was involved and time-consuming. In this paper, we outlined the bespoke production environment that was set up at BBC R&D for creating The Vostok-K Incident. We introduced the ruleset that was used to decide which parts of the audio scene should be sent to each connected device, and then reviewed some of the challenges of writing, recording, and producing this kind of content.
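To make the idea of a routing ruleset concrete, here is a minimal sketch of how tagged audio objects might be allocated to an unknown set of connected devices. The tag vocabulary (`"main"`, `"aux"`, `"any"`, `exclusive`) and the round-robin fallback are illustrative assumptions, not the rules used in the actual production.

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    name: str
    # Hypothetical tags; the real production used its own rule vocabulary.
    allowed: str = "any"     # "main", "aux", or "any" device roles
    exclusive: bool = False  # if True, play on exactly one device

def allocate(objects, devices):
    """Route each object to a set of devices according to simple rules:
    - "main" objects always play from the main stereo device;
    - exclusive objects go to a single auxiliary device (round-robin);
    - other objects play from all devices their tag allows,
      falling back to the main device when no extras are connected."""
    aux = [d for d in devices if d != "main"]
    routing = {d: [] for d in devices}
    rr = 0  # round-robin counter for exclusive objects
    for obj in objects:
        if obj.allowed == "main" or not aux:
            routing["main"].append(obj.name)
        elif obj.exclusive:
            routing[aux[rr % len(aux)]].append(obj.name)
            rr += 1
        else:
            targets = aux if obj.allowed == "aux" else devices
            for d in targets:
                routing[d].append(obj.name)
    return routing
```

Because the allocation runs against whatever device list is present at playback time, the same tagged content degrades gracefully from many devices down to plain stereo.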

Also at the Audio Engineering Society Convention

In addition to the work on device orchestration happening in the audio team, we support a related PhD project. Craig Cieciura is investigating device orchestration, and he presented two engineering briefs at the AES Convention in New York. The first (republished as BBC R&D White Paper 363) describes a survey looking into the types of loudspeaker devices that listeners have at home as well as how they consume media. The results showed that there is significant ownership of wireless and smart loudspeakers, and that a low proportion of listeners have surround sound systems. The second (republished as BBC R&D White Paper 364) describes a library of audio and audio-visual object-based content created for use in experiments investigating device orchestration.

Plenty more work by the R&D audio team was represented at the convention. Lauren Ward presented an engineering brief about using object-based audio to deliver accessible mixes. Her narrative importance control works by allowing the producer to tag different sounds with their importance level, then adjusting the object volumes according to whether the listener wants an immersive experience or a clear narrative. This work has since been implemented in a trial with BBC One's hospital drama series, Casualty. Additionally, Jon chaired a panel of industry and academic experts discussing best practice for recruiting and training participants for listening experiments, and also took part in a second panel discussion.
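The narrative importance idea maps naturally to a gain function: essential sounds stay untouched, and lower-importance sounds are attenuated as the listener moves a control toward clarity. The sketch below assumes a 0–3 importance scale and a -12 dB maximum reduction; both are illustrative, not the values used in the Casualty trial.

```python
def importance_gain(importance, clarity, max_reduction_db=-12.0):
    """Map an object's importance tag (0 = background ... 3 = essential)
    and the listener's clarity setting (0.0 = full immersive mix ...
    1.0 = clearest narrative) to a gain adjustment in dB.
    Essential sounds are never attenuated; less important sounds are
    turned down progressively as the clarity setting increases."""
    if importance >= 3:
        return 0.0
    # Fraction of the maximum reduction applied at this importance level.
    weight = (3 - importance) / 3.0
    return max_reduction_db * weight * clarity
```

With the control at its immersive end (`clarity = 0.0`), every object plays at its mixed level; at the clarity end, background objects are pulled down by the full 12 dB while dialogue-level objects are untouched.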

Framework for web delivery of immersive experiences using device orchestration

BBC R&D White Paper 346

Once The Vostok-K Incident had been produced, we needed a way to deliver it over the internet. Kristian's paper at ACM TVX (closer to home, in the building next door to our Salford office) summarised the web delivery framework that he built. The short paper was written to accompany a demonstration and starts by placing our work in the context of previous orchestrated experiences. It goes on to introduce the framework, including audio processing, delivery, and routing; synchronisation of devices; and user interface considerations.
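Keeping many independent devices in sync is one of the harder parts of such a framework. A common approach, which we sketch here as an assumption rather than a description of Kristian's implementation, is for each client to estimate its clock offset from a sync server using a request/response exchange, in the style of NTP:

```python
def estimate_offset(t0, t1, t2, t3):
    """Estimate a client clock's offset from a server clock from one
    request/response exchange (the classic NTP calculation):
      t0 = client send time, t1 = server receive time,
      t2 = server send time, t3 = client receive time.
    Assumes the network delay is roughly symmetric. Returns the
    estimated offset and the round-trip delay, both in seconds."""
    delay = (t3 - t0) - (t2 - t1)
    offset = ((t1 - t0) + (t2 - t3)) / 2.0
    return offset, delay
```

Repeating the exchange and keeping the sample with the smallest round-trip delay gives a usable shared timeline, against which each device can schedule its audio playback.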

Other BBC R&D involvement at TVX included a session co-chaired by our audio team lead Chris Pike and work evaluating the Living Room of the Future.

Evaluation of an immersive audio experience using questionnaire and interaction data

BBC R&D White Paper 352

Finally, after delivering The Vostok-K Incident on BBC Taster, we wanted to evaluate it. One way of doing this is to look at how people interacted with the trial. This was the subject of a paper that Jon presented in September 2019 at a conference in Aachen, Germany. We used interaction logs to look at the time that listeners spent with The Vostok-K Incident, the number of extra devices they connected, and how they used the different options on the interface. We also analysed the results of a short questionnaire that listeners could complete after the experience. The results suggested that there is value in this approach to delivering audio drama: 79% of respondents loved or liked using phones as speakers, and 85% would use the technology again. The interaction results gave helpful pointers for future experiences, suggesting that we should aim to get the greatest possible benefit out of a few connected devices, ensure that content is impactful right from the start, and explore different types of user interaction.
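An analysis like this boils down to aggregating per-session statistics from an event log. The sketch below shows the general shape of such an aggregation; the event names and tuple layout are hypothetical, as the real trial used the platform's own logging schema.

```python
def summarise_sessions(events):
    """Summarise interaction logs, where each event is a tuple
    (session_id, event_type, value). Accumulates per-session listening
    time and connected-device counts, and computes the share of sessions
    that used at least one extra device."""
    sessions = {}
    for sid, etype, value in events:
        s = sessions.setdefault(sid, {"seconds": 0.0, "devices": 0})
        if etype == "listened":
            s["seconds"] += value
        elif etype == "device_connected":
            s["devices"] += 1
    with_extra = sum(1 for s in sessions.values() if s["devices"] > 0)
    share = with_extra / len(sessions) if sessions else 0.0
    return sessions, share
```

From these per-session summaries, distributions of listening time and device counts can be compared between sessions with and without extra devices connected.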

The paper also discussed the pros and cons of using a large-scale public trial for evaluation, concluding that it is a useful technique that should be performed alongside more carefully controlled user testing. We have subsequently performed such a user test, conducting a small number of telephone interviews with listeners. The early results from these look useful and will be published in due course.

What's next for our device orchestration research?

After evaluating The Vostok-K Incident, we designed and delivered several workshops aimed at better understanding audience and producer requirements for the device orchestration technology. These workshops are ongoing, and we're aiming to write up our findings towards the end of 2019. Alongside this, we've been working on a software tool that makes it much easier to create prototype orchestrated experiences. Watch this space for more information! We're also involved in longer-term research projects through industrial supervision of PhD students. Craig's project, detailed above, is ongoing, and we're shortly embarking on another PhD project at the University of York, looking into the creative affordances of device orchestration.


BBC R&D - Vostok K Incident - How we made the Audio Orchestrator - and how you can use it too

BBC MakerBox - Audio Orchestrator

BBC R&D - Vostok K Incident - Immersive Spatial Sound Using Personal Audio Devices

BBC R&D - Vostok-K Incident: Immersive Audio Drama on Personal Devices

BBC R&D - Evaluation of an immersive audio experience

BBC R&D - Exploring audio device orchestration with audio professionals

BBC R&D - Framework for web delivery of immersive audio experiences using device orchestration

BBC R&D - The Mermaid's Tears

BBC R&D - Talking with Machines

BBC R&D - Responsive Radio

This post is part of the Immersive and Interactive Content section
