BBC Research & Development

Posted by Chris Baume

Ten years ago today, we launched a pioneering new way of working with academic partners. The BBC Audio Research Partnership initially brought together BBC R&D and five universities: Surrey, Salford, Queen Mary, York and Southampton. Since then, our journey has involved 26 different organisations from across academia and industry. Together, we have launched ten collaborative projects worth over £30M, sponsored 18 PhD students and hosted 13 internships.

The BBC has benefited hugely from working through this partnership. It has grounded our research with academic rigour, exposed us to regular peer review, kept us on the cutting edge and enabled strong, evidence-led decision making. We hope in turn that we've been a valuable partner to the universities by providing industry insight, production and development expertise, and input from our team of audio experts. Importantly, it has given us the opportunity to work with smart, talented people, nurture new talent, and build a passionate community of audio experts.

In this blog post, we look back to celebrate just some of the highlights from our ten years of collaboration. We reflect on how each has helped us deliver our ambition to develop future audio services that are accessible, personalised, adaptive and immersive.

Binaural

Unlike surround sound, which is often reserved for those with home cinema systems, binaural can deliver immersive three-dimensional sound over a standard pair of headphones. Not only that, but it is compatible with the Βι¶ΉΤΌΕΔ's existing stereo broadcast chain. These benefits make it a great option for broadcasting immersive listening experiences to everyone. However, it comes with its own challenges as each listener's experience is uniquely defined by their head/ear shape, head movements and the space in which they're listening.
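
To illustrate the basic principle, here is a minimal sketch of binaural rendering in Python: a mono source is convolved with a pair of head-related impulse responses (HRIRs), one per ear. The HRIRs below are crude placeholders standing in for measured data; a real renderer interpolates measured HRTF sets and tracks the listener's head movement.

import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    # Convolve the mono source with each ear's impulse response and
    # stack the results into an (N, 2) stereo signal.
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Example: a 1 kHz tone rendered with dummy HRIRs, where the far ear
# simply receives a delayed, quieter copy of the near ear's signal.
fs = 48000
t = np.arange(fs) / fs
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
hrir_l = np.zeros(64)
hrir_l[0] = 1.0        # near ear: direct sound
hrir_r = np.zeros(64)
hrir_r[30] = 0.6       # far ear: delayed and attenuated
stereo = render_binaural(tone, hrir_l, hrir_r)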

Across our collaborative projects and PhD studentships, we performed a range of listening tests. These allowed us to gain a deep understanding of the opportunities and limitations of binaural sound and how best to record, simulate and render it convincingly. As part of this collaborative research, we produced a radio drama and VR experience that featured at the Tribeca Film Festival and won the TVB Europe Award for Best Achievement in Sound.

The knowledge we gained informed the development of tools and techniques for producing and delivering high-quality binaural content. Our tools and training supported over 150 binaural productions, including an entire episode of Doctor Who. We assembled several binaural edit suites, created a series of training materials, and built a tool that has been used to broadcast live binaural versions of the Proms since 2016. Our research also led to the development of software for rendering object-based audio in binaural, which we hope to release this year.

Device orchestration

It is becoming increasingly common for our audiences to have multiple audio playback devices around their home, such as smartphones, laptops and smart speakers. Researchers in our team identified an opportunity to create an immersive listening experience by sending elements of a mix to different devices, which then play together in sync.

Over many years, we have been developing our understanding of the benefits and challenges of device orchestration. As part of this research, we created The Vostok-K Incident radio play, which was used for demonstration and experimentation. Producing orchestrated experiences is particularly difficult as the number of devices can change, and they need to stay synchronised. By working with university partners, we devised novel techniques for deciding which device to use for each audio element and for synchronising devices.
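
As a rough illustration of the decisions involved, the sketch below allocates audio objects to whichever devices happen to be connected and schedules playback against a shared clock. The device names, object roles and allocation rule are hypothetical simplifications, not how our production tools actually work.

import time

def allocate(objects, devices):
    # Keep dialogue on the main device and spread everything else
    # across whatever auxiliary devices are currently connected.
    assignment = {}
    aux = [d for d in devices if d != "main"]
    for i, obj in enumerate(objects):
        if obj["role"] == "dialogue" or not aux:
            assignment[obj["id"]] = "main"
        else:
            assignment[obj["id"]] = aux[i % len(aux)]
    return assignment

def schedule_start(shared_clock_now, lead_time=0.5):
    # Pick a start time far enough in the future for every device
    # to buffer its audio and begin playback in sync.
    return shared_clock_now + lead_time

objects = [{"id": "narration", "role": "dialogue"},
           {"id": "birdsong", "role": "ambience"},
           {"id": "thunder", "role": "effect"}]
print(allocate(objects, ["main", "phone-1", "speaker-2"]))
print(schedule_start(time.time()))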

Our collaborative work led us to develop and release Audio Orchestrator - a free tool that over 180 people are using to create and deliver audio over connected devices. We enabled and supported a wide range of productions, including Decameron Nights, Pick a Part, Monster, Six Nations and Seeking New Gods. During the 2020 lockdown, we used the underlying technology to develop and release BBC Together, which allows people to watch or listen to BBC programmes together remotely.

Accessibility

A common complaint we receive concerns the intelligibility of speech in our broadcasts - one in every six listeners in the UK has some degree of hearing loss. To improve everybody's experience, we developed technology that enables each listener to adjust the audio mix to their hearing needs. It is not simply a case of increasing the volume of the speech, as some sound effects can be crucial to the storyline. Working with the University of Salford, we developed an approach of rating each sound by its "narrative importance". This allows the user to adjust the complexity of the mix to match their preference or hearing ability.
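
The sketch below shows the idea in simplified form: each object carries an importance rating, and a single listener control attenuates sounds below the chosen threshold rather than simply boosting speech. The ratings and the gain curve are illustrative, not the values used in the trial.

def apply_narrative_importance(objects, control):
    # control in [0, 1]: 0 keeps the full mix, 1 keeps only the most
    # important sounds. Objects rated below the control setting are
    # progressively attenuated.
    for obj in objects:
        if obj["importance"] >= control:
            obj["gain"] = 1.0
        else:
            obj["gain"] = max(0.0, 1.0 - 2.0 * (control - obj["importance"]))
    return objects

mix = [{"id": "dialogue", "importance": 1.0},
       {"id": "door-slam", "importance": 0.7},   # crucial to the storyline
       {"id": "crowd", "importance": 0.2}]       # atmosphere only
print(apply_narrative_importance(mix, control=0.5))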

Following this new approach, we added an extra control to an iPlayer prototype and worked with BBC Studios to produce two episodes of Casualty with a narrative importance control. Our public trial received a great response and significant press attention, including in The Times and on the Today programme, and it was shortlisted for both a Rose d'Or award and the EBU Technology and Innovation Award. The findings from this work have allowed us to prioritise accessibility as a key use case in planning our next-generation audio services. We continue to develop the tools, protocols and techniques to be able to deliver this accessibility control to our audiences in the coming years.

Interactivity

As media is increasingly broadcast over the Internet, there are opportunities for delivering new types of listening experiences that were previously not possible. We have spent many years investigating "object-based audio" - where elements of an audio mix are delivered separately to the listener and mixed on their device. This flexibility allows us to tell stories and inform our audiences in new and interesting ways that are more personal, engaging and meaningful.
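
A minimal sketch of what happens on the listener's device: each element arrives as a separate track with accompanying metadata, and the device mixes them using per-object gains. The metadata format here is a deliberate simplification; broadcast systems describe objects with standard models such as the Audio Definition Model.

import numpy as np

def mix_objects(tracks, metadata):
    # tracks: object id -> mono numpy array
    # metadata: object id -> {"gain": linear gain}
    length = max(len(t) for t in tracks.values())
    out = np.zeros(length)
    for obj_id, audio in tracks.items():
        out[:len(audio)] += metadata[obj_id]["gain"] * audio
    return out

fs = 48000
tracks = {"speech": np.random.randn(fs) * 0.1,
          "music": np.random.randn(fs) * 0.1}
metadata = {"speech": {"gain": 1.0},    # the listener could raise this...
            "music": {"gain": 0.4}}     # ...or turn this down entirely
mixed = mix_objects(tracks, metadata)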

Working with our university partners, we reimagined and built an end-to-end system for producing and delivering object-based audio. To demonstrate and test this system, we worked with BBC Radio Drama to create The Mermaid's Tears. This interactive story could be experienced from the perspective of any of its three characters, and the listener could switch between them at any point. We broadcast a live performance of the interactive play using our object-based system, which was a world first.

By demonstrating this radical new experience and approach, we helped lead the broadcast industry to view object-based audio as its future. Five years later, a variety of standards and solutions are now available that we can pick up and use to make these new experiences a reality.

Production tools

Delivering the experiences described above creates a whole set of challenges in how content is produced, stored, transmitted and replayed. For example, we must produce a set of detailed metadata that describes how the media elements in a programme are combined. As broadcasters strive to operate as efficiently as possible, we must find ways to make the production of this content economically viable.

By providing universities with access to BBC producers, we were able to identify inefficiencies in how transcripts are produced and used in editing. We then used artificial intelligence to make this process more efficient by creating a groundbreaking text-based audio editing system. By trialling it in real-life production and measuring its effectiveness, we gained the evidence to support the adoption of smart tools for efficiently navigating and editing content.
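
In simplified form, the idea is that a transcript aligned to the audio with word-level timestamps lets an edit made in the text become a cut made in the audio. The alignment format below is hypothetical; real systems derive the timings from speech-to-text alignment.

import numpy as np

def cut_words(audio, fs, words, keep):
    # Keep only the words whose text appears in `keep`, splicing their
    # sample ranges together in their original order.
    segments = []
    for w in words:
        if w["word"] in keep:
            segments.append(audio[int(w["start"] * fs):int(w["end"] * fs)])
    return np.concatenate(segments) if segments else np.zeros(0)

fs = 16000
audio = np.random.randn(fs * 3)   # three seconds of stand-in audio
words = [{"word": "hello", "start": 0.2, "end": 0.6},
         {"word": "um",    "start": 0.7, "end": 0.9},
         {"word": "world", "start": 1.0, "end": 1.5}]
edited = cut_words(audio, fs, words, keep={"hello", "world"})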

Through our collaborative projects, we also developed production tools for object-based audio, which were used for many demonstrations of different concepts, including device orchestration and narrative importance. This culminated in work with the EBU to create a set of free and open tools for producing object-based audio. This software package not only enables anyone to produce and render object-based audio, but leads the industry in showing what the production tools of the future can and should look like.

Community

As well as using this partnership to deliver improvements to our services, we wanted to bring people together to share what we learn. Early on, we identified a need to bridge the gap between those working in audio technology and those working in audio production.

We ran our first big event in 2015, called Sound: Now and Next, in the BBC Radio Theatre. Over two days, we hosted over 180 people and a technology fair with 26 posters and demos. Its success led to an ongoing partnership with the BBC Academy, with whom we organise a biannual conference called Sounds Amazing. In 2020, our three-day public online conference featured BAFTA and Oscar-winning producers and pioneering researchers, attracting almost 5,000 live views.

Thank you

The stories presented above are only the tip of a huge iceberg of important and impactful research projects over the past decade. We want to take this opportunity to say a big thank you to those we have worked with and everyone who has supported us on this journey. This is only the start, and we look forward to many more years of collaboration to come. If you want to join us on this journey or find out more, please contact us.


BBC R&D - Casualty, Loud and Clear - Our Accessible and Enhanced Audio Trial

BBC Taster - Casualty: A&E Audio

BBC R&D - 5 live Football Experiment: What We Learned

Immersive Audio Training and Skills from the BBC Academy, including:

Sound Bites - An Immersive Masterclass

Sounds Amazing - audio gurus share tips
