Bringing together the world’s top audio research groups to develop the next generation of audio broadcast technology.
Project from 2011 - present
What we're doing
The BBC Audio Research Partnership was launched in 2011 to bring together some of the world's best audio technology researchers to work on pioneering new projects. The original partners were the University of Surrey, the University of Salford, Queen Mary University of London, the University of Southampton, and the University of York. Since then, we have partnered with many more research groups and organisations, and we are always looking for opportunities to collaborate where there is a shared research interest.
Why it matters
Collaborating with university and industrial partners allows us to work directly with the best researchers and to steer the research to maximise the benefit to our audiences. By coming together, we can pool our resources to tackle some of the biggest challenges in broadcast. This partnership has led to pioneering developments in many areas including immersive audio, personalised and interactive content, object-based audio, accessibility, AI-assisted production, music discovery, audio augmented reality and enhanced podcasts.
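As a concrete illustration of the object-based approach mentioned above: instead of transmitting one fixed mix, content is delivered as separate audio objects with descriptive metadata, and a renderer assembles a mix for each listener. The Python sketch below is a deliberately minimal toy, loosely inspired by the narrative-importance accessibility work listed under PhD Projects; the `importance` field, the `clarity` control and all values are invented for illustration and are not taken from any partner project's implementation.

```python
from dataclasses import dataclass

@dataclass
class AudioObject:
    """One element of an object-based scene: audio plus descriptive metadata."""
    name: str
    importance: float  # 0.0 = pure background .. 1.0 = essential to the narrative
    gain: float        # nominal gain in the default mix

def personalised_gains(objects: list[AudioObject], clarity: float) -> dict[str, float]:
    """Attenuate less-important objects as the listener's requested
    'clarity' rises (0.0 = default mix, 1.0 = maximally dialogue-focused)."""
    mix = {}
    for obj in objects:
        # Scale attenuation by how unimportant the object is and by
        # how much clarity the listener has asked for.
        attenuation = clarity * (1.0 - obj.importance)
        mix[obj.name] = obj.gain * (1.0 - attenuation)
    return mix

scene = [
    AudioObject("dialogue", importance=1.0, gain=1.0),
    AudioObject("crowd",    importance=0.3, gain=0.8),
    AudioObject("music",    importance=0.5, gain=0.7),
]

# A hard-of-hearing listener might choose a high clarity setting:
print(personalised_gains(scene, clarity=0.9))
# -> approximately {'dialogue': 1.0, 'crowd': 0.296, 'music': 0.385}
```

Real object-based systems, such as those built on the Audio Definition Model, carry much richer metadata (position, object type, permitted interaction ranges), but the personalisation principle is the same: the final mix decision moves from the studio to the point of reproduction.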
Outcomes
Over the past decade or so, the partnership has given rise to a wide range of activities, including large-scale collaborative projects, PhD studentships, industrial placements and public events.
Public Events
We run a series of public events to celebrate the most exciting and innovative developments in audio, both creative and technical. You can read about each event and watch video recordings of some of the talks below.
Collaborative Projects
Several large-scale projects have resulted from the Audio Research Partnership. These have been funded by various bodies, including EPSRC, UKRI, AHRC and the European Commission (EC), with a total portfolio value in excess of £30M.
| Dates | Project | Partners | Description |
|---|---|---|---|
| 2021-2026 | AI4ME | University of Surrey, Lancaster University | Using AI and object-based media (OBM) to enable media experiences that adapt to individual preferences, accessibility requirements, devices and location. |
| 2020-2025 | AI for Sound | University of Surrey | Using application sector use cases to drive advances in core research on machine learning for sound. |
| 2019-2027 | AI + Music CDT | Queen Mary University of London | Doctoral training combining state-of-the-art expertise in artificial intelligence, machine learning and signal processing. |
| 2019-2024 | XR Stories | University of York | Exploring the future of immersive and interactive storytelling. |
| 2019-2021 | Polymersive | Imrsvray, University of Surrey | Building tools to produce six degrees-of-freedom immersive content that combines lightfield capture and spatial audio. |
| 2016-2019 | Making Sense of Sounds | University of Surrey and University of Salford | Using machine learning to extract information about non-speech and non-music sounds. |
| 2014-2019 | FAST IMPACt | Queen Mary University of London, University of Oxford, University of Nottingham | Fusing audio and semantic technologies for intelligent music production and consumption. |
| 2013-2019 | S3A Future Spatial Audio | University of Surrey, University of Salford, University of Southampton | Advanced personalised and immersive audio experiences in the home, using spatial and object-based audio. |
| 2015-2018 | ORPHEUS | IRT, Bayerischer Rundfunk, Fraunhofer IIS, IRCAM, B-COM, Trinnov Audio, Magix, Elephantcandy, Eurescom | Creating an end-to-end object-based audio broadcast chain. |
| 2013-2016 | ICoSOLE | Joanneum Research, Technicolor, VRT, iMinds, Bitmovin, Tools at Work | Investigating immersive coverage of large-scale live events. |
PhD Projects
We have sponsored or hosted the following PhD students, covering a variety of topics:
| Dates | Student | University | Description |
|---|---|---|---|
| 2020-2024 | Jay Harrison | York | Context-aware personalised audio experiences |
| 2020-2024 | David Geary | York | Creative affordances of orchestrated devices for immersive and interactive audio and audio-visual experiences |
| 2020-2024 | Jemily Rime | York | Interactive and personalised podcasting with AI-driven audio production tools |
| 2020-2024 | Harnick Khera | QMUL | Informed Source Separation for Multi-Mic Production |
| 2019-2023 | Angeliki Mourgela | QMUL | Automatic Mixing for Hearing Impaired Listeners |
| 2018-2022 | Jeff Miller | QMUL | Music recommendation for BBC Sounds |
| 2018-2021 | Daniel Turner | York | AI-Driven Soundscape Design for Immersive Environments |
| 2016-2021 | Craig Cieciura | Surrey | Device orchestration rendering rules |
| 2019-2020 | Adrià Cassorla | York | Binaural monitoring for orchestrated experiences |
| 2016-2020 | Lauren Ward | Salford | Improving broadcast accessibility for hard of hearing individuals: using object-based audio personalisation and narrative importance |
| 2012-2019 | Chris Pike | York | Evaluating the Perceived Quality of Binaural Technology |
| 2013-2018 | Chris Baume | Surrey | Semantic Audio Tools for Radio Production |
| 2014-2018 | Michael Cousins | Southampton | The Diffuse Sound Object |
| 2014-2018 | Tim Walton | Newcastle | The Quality of Experience of Next Generation Audio |
| 2011-2016 | Darius Satongar | Salford | Simulation and Analysis of Spatial Audio Reproduction and Listening Area Effects |
| 2011-2015 | Paul Power | Salford | Subjective Evaluation of 3D Surround Systems |
| 2011-2015 | Anthony Churnside | Salford | Object-Based Radio: Effects On Production and Audience Experience |
| 2011-2015 | Tobias Stokes | Surrey | Improving the perceptual quality of single-channel blind audio source separation |
Industrial Placements
On occasion, we host PhD and Master's students for short industrial placements:
| Year | Student | University | Description |
|---|---|---|---|
| 2021 | Josh Gregg | York | Audio personalisation for Accessible Augmented Reality Narratives |
| 2020 | Edgars Grivcovs | York | Audio Definition Model production tools for NGA and XR |
| 2020 | Danial Haddadi | Manchester | Audio device orchestration tools and trial productions |
| 2019 | Valentin Bauer | QMUL | Audio Augmented Reality |
| 2019 | Ulfa Octaviani | QMUL | Remote study on enhanced podcast interaction |
| 2019 | Emmanouil Theofanis Chourdakis | QMUL | Automatic mixing for object-based media |
| 2018 | Jason Loveridge | York | Device simulation plug-in |
| 2016 | Michael Romanov | IEM | Ambisonics and renderer evaluation |
| 2014 | Adib Mehrabi | QMUL | Music thumbnailing for BBC Music |
| 2014 | James Vegnuti | QMUL | User experience of personalised compression using the Web Audio API |
| 2013 | Nick Jillings, Zheng Ma | QMUL | Personalised compression using the Web Audio API |
| 2011 | Martin Morrell | QMUL | Spatial audio system design for surround video |