Bringing together the world’s top audio research groups to develop the next generation of audio broadcast technology.
Project from 2011 - present
What we're doing
The Βι¶ΉΤΌΕΔ Audio Research Partnership was launched in 2011 to bring together some of the world's best audio technology researchers to work on pioneering new projects. The original partners were the University of Surrey, the University of Salford, Queen Mary University of London, the University of Southampton, and the University of York. Since then, we have partnered with many more research groups and organisations, and we are always looking for opportunities to collaborate where there is a shared research interest.
Why it matters
Collaborating with university and industrial partners allows us to work directly with the best researchers and to steer the research towards maximising the benefit to our audiences. By coming together, we can pool our resources to tackle some of the biggest challenges in broadcasting. The partnership has led to pioneering developments in many areas, including immersive audio, personalised and interactive content, object-based audio, accessibility, AI-assisted production, music discovery, audio augmented reality and enhanced podcasts.
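Object-based audio, which underpins much of this work, replaces the fixed loudspeaker mix with a set of separate audio objects plus metadata describing how each one should be rendered. The following is a minimal sketch of the idea in TypeScript; it is an illustration only, not code from any of the projects below, and the type and function names are invented:

```typescript
// One audio "object": a mono signal plus the metadata needed to render it.
// In real systems (e.g. the Audio Definition Model) the metadata is far richer.
interface AudioObject {
  samples: Float32Array; // mono PCM samples
  azimuthDeg: number;    // -90 (hard left) .. +90 (hard right)
  gain: number;          // linear gain, applied at render time
}

// Render the objects to stereo with constant-power panning. Because this
// step runs on the listener's device, the same objects could instead be
// rendered binaurally, to 5.1, or across a group of orchestrated devices.
function renderStereo(objects: AudioObject[], frames: number): [Float32Array, Float32Array] {
  const left = new Float32Array(frames);
  const right = new Float32Array(frames);
  for (const obj of objects) {
    const pan = ((obj.azimuthDeg + 90) / 180) * (Math.PI / 2); // 0..pi/2
    const n = Math.min(frames, obj.samples.length);
    for (let i = 0; i < n; i++) {
      left[i] += Math.cos(pan) * obj.gain * obj.samples[i];
      right[i] += Math.sin(pan) * obj.gain * obj.samples[i];
    }
  }
  return [left, right];
}
```

Deferring the mix to the receiver in this way is what makes personalised and accessible experiences possible: a dialogue object, for instance, can be turned up relative to the background sound without touching the original production.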
Outcomes
Over the past decade or so, the partnership has given rise to a wide range of work, including large-scale collaborative projects, PhD studentships, industrial placements and public events.
Public Events
We run a series of public events to celebrate the most exciting and innovative developments in audio, both creative and technical. You can read about each event and watch video recordings of some of the talks.
Collaborative Projects
Several large-scale projects have resulted from the Audio Research Partnership. These have been funded by various bodies, including UK research councils and European funding programmes, with a total portfolio size in excess of £30M.
Dates | Partners | Description
---|---|---
2021-2026 | University of Surrey, Lancaster University | Using AI and object-based media to enable media experiences that adapt to individual preferences, accessibility requirements, devices and location.
2020-2025 | University of Surrey | Using application sector use cases to drive advances in core research on machine learning for sound.
2019-2027 | Queen Mary University of London | Combining state-of-the-art capability in artificial intelligence, machine learning and signal processing.
2019-2024 | University of York | The future of immersive and interactive storytelling.
2019-2021 | Imrsvray, University of Surrey | Building tools to produce six-degrees-of-freedom immersive content that combines lightfield capture and spatial audio.
2016-2019 | University of Surrey, University of Salford | Using machine learning to extract information about non-speech and non-music sounds.
2014-2019 | Queen Mary University of London, University of Oxford, University of Nottingham | Fusing audio and semantic technologies for intelligent music production and consumption.
2013-2019 | University of Surrey, University of Salford, University of Southampton | Advanced personalised and immersive audio experiences in the home, using spatial and object-based audio.
2015-2018 | IRT, Bayerischer Rundfunk, Fraunhofer IIS, IRCAM, B-COM, Trinnov Audio, Magix, Elephantcandy, Eurescom | Creating an end-to-end object-based audio broadcast chain.
2013-2016 | Joanneum Research, Technicolor, VRT, iMinds, Bitmovin, Tools at Work | Investigating immersive coverage of large-scale live events.
PhD Projects
We have sponsored or hosted the following PhD students, covering a variety of topics:
Dates | Student | University | Description |
---|---|---|---|
2020-2024 | Jay Harrison | York | Context-aware personalised audio experiences
2020-2024 | David Geary | York | Creative affordances of orchestrated devices for immersive and interactive audio and audio-visual experiences |
2020-2024 | Jemily Rime | York | Interactive and personalised podcasting with AI-driven audio production tools |
2020-2024 | Harnick Khera | QMUL | Informed source separation for multi-mic production
2019-2023 | Angeliki Mourgela | QMUL | Automatic mixing for hearing-impaired listeners
2018-2022 | Jeff Miller | QMUL | Music recommendation for Βι¶ΉΤΌΕΔ Sounds
2018-2021 | Daniel Turner | York | AI-driven soundscape design for immersive environments
2016-2021 | Craig Cieciura | Surrey | Device orchestration rendering rules |
2019-2020 | AdriΓ  Cassorla | York | Binaural monitoring for orchestrated experiences
2016-2020 | Lauren Ward | Salford | |
2012-2019 | Chris Pike | York | |
2013-2018 | Chris Baume | Surrey | |
2014-2018 | Michael Cousins | Southampton | |
2014-2018 | Tim Walton | Newcastle | |
2011-2016 | Darius Satongar | Salford | |
2011-2015 | Paul Power | Salford | |
2011-2015 | Anthony Churnside | Salford | |
2011-2015 | Tobias Stokes | Surrey |
Industrial Placements
On occasion, we host PhD and Masters students for short industrial placements:
Year | Student | University | Description |
---|---|---|---|
2021 | Josh Gregg | York | Audio personalisation for Accessible Augmented Reality Narratives |
2020 | Edgars Grivcovs | York | Audio Definition Model production tools for NGA and XR |
2020 | Danial Haddadi | Manchester | Audio device orchestration tools and trial productions |
2019 | Valentin Bauer | QMUL | Audio Augmented Reality |
2019 | Ulfa Octaviani | QMUL | Remote study on enhanced podcast interaction |
2019 | Emmanouil Theofanis Chourdakis | QMUL | Automatic mixing for object-based media |
2018 | Jason Loveridge | York | Device simulation plug-in |
2016 | Michael Romanov | IEM | Ambisonics and renderer evaluation |
2014 | Adib Mehrabi | QMUL | Music thumbnailing for Βι¶ΉΤΌΕΔ Music
2014 | James Vegnuti | QMUL | User experience of personalised compression using the Web Audio API
2013 | Nick Jillings, Zheng Ma | QMUL | Personalised compression using the Web Audio API (see the sketch below this table)
2011 | Martin Morrell | QMUL | Spatial audio system design for surround video |
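Two of the placements above explored personalised compression in the browser (the 2013 and 2014 rows). Below is a rough, hypothetical sketch of that kind of processing chain, not the students' actual code: the function and the strength-to-parameter mapping are invented for illustration, though the Web Audio API calls themselves are standard.

```typescript
// Play a stream through a user-adjustable compressor in the browser.
// `strength` is a hypothetical personalisation control: 0 leaves the
// audio untouched, 1 applies heavy compression for difficult listening
// conditions or listener preference.
async function playWithPersonalisedCompression(url: string, strength: number) {
  const ctx = new AudioContext();
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  const comp = ctx.createDynamicsCompressor();
  comp.threshold.value = -20 - 30 * strength; // dBFS: -20 down to -50
  comp.ratio.value = 1 + 11 * strength;       // 1:1 up to 12:1
  comp.knee.value = 30;                       // dB of soft knee
  comp.attack.value = 0.01;                   // seconds
  comp.release.value = 0.25;                  // seconds

  source.connect(comp);
  comp.connect(ctx.destination);
  source.start();
}
```

Running the compressor on the client means each listener can set the dynamic range to suit their own device and environment, rather than receiving one compromise mix.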
Project Team
Immersive and Interactive Content section
The Immersive and Interactive Content section is a group of around 25 researchers investigating ways of capturing and creating new kinds of audio-visual content, with a particular focus on immersion and interactivity.