Audio Research Partnership

Working with leading university partners to push the boundaries of audio technology.

Published: 1 January 2011

Bringing together the world’s top audio research groups to develop the next generation of audio broadcast technology.

Project from 2011 to present


What we're doing

The BBC Audio Research Partnership was launched in 2011 to bring together some of the world's best audio technology researchers to work on pioneering new projects. The original partners were University of Surrey, University of Salford, Queen Mary University of London, University of Southampton, and University of York. Since then, we have partnered with many more research groups and organisations, and are always looking for opportunities to collaborate where there is a shared research interest.


Why it matters

Collaborating with university and industrial partners allows us to work directly with the best researchers and to steer the research to maximise the benefit to our audiences. By coming together, we can pool our resources to tackle some of the biggest challenges in broadcast. This partnership has led to pioneering developments in many areas including immersive audio, personalised and interactive content, object-based audio, accessibility, AI-assisted production, music discovery, audio augmented reality and enhanced podcasts.


Outcomes

Over the past decade or so, the partnership has given rise to a wide range of projects, including large-scale collaborative projects, PhD studentships, industrial placements and public events.

Public Events

We run a series of public events to celebrate the most exciting and innovative developments in audio, both creative and technical. Details of each event, along with video recordings of some of the talks, are available on the individual event pages.

Collaborative Projects

Several large-scale projects have resulted from the Audio Research Partnership. These have been funded by various bodies, including EPSRC, UKRI, AHRC and the EC, with a total portfolio size in excess of £30M.

| Dates | Project | Partners | Description |
| --- | --- | --- | --- |
| 2021–2026 | AI4ME | University of Surrey, Lancaster University | Using AI and object-based media to enable media experiences that adapt to individual preferences, accessibility requirements, devices and location. |
| 2020–2025 | AI for Sound | University of Surrey | Using application sector use cases to drive advances in core research on machine learning for sound. |
| 2019–2027 | AI + Music CDT | Queen Mary University of London | Combining state-of-the-art ability in artificial intelligence, machine learning and signal processing. |
| 2019–2024 | XR Stories | University of York | The future of immersive and interactive storytelling. |
| 2019–2021 | Polymersive | Imrsvray, University of Surrey | Building tools to produce six-degrees-of-freedom immersive content that combines lightfield capture and spatial audio. |
| 2016–2019 | Making Sense of Sounds | University of Surrey, University of Salford | Using machine learning to extract information about non-speech and non-music sounds. |
| 2014–2019 | FAST IMPACt | QMUL, University of Oxford, University of Nottingham | Fusing audio and semantic technologies for intelligent music production and consumption. |
| 2013–2019 | S3A Future Spatial Audio | University of Surrey, University of Salford, University of Southampton | Advanced personalised and immersive audio experiences in the home, using spatial and object-based audio. |
| 2015–2018 | ORPHEUS | IRT, Bayerischer Rundfunk, Fraunhofer IIS, IRCAM, B-COM, Trinnov Audio, Magix, Elephantcandy, Eurescom | Creating an end-to-end object-based audio broadcast chain. |
| 2013–2016 | ICoSOLE | Joanneum Research, Technicolor, VRT, iMinds, Bitmovin, Tools at Work | Investigating immersive coverage of large-scale live events. |

PhD Projects

We have sponsored or hosted the following PhD students, covering a variety of topics:

| Dates | Student | University | Description |
| --- | --- | --- | --- |
| 2020–2024 | Jay Harrison | York | Context-aware personalised audio experiences |
| 2020–2024 | David Geary | York | Creative affordances of orchestrated devices for immersive and interactive audio and audio-visual experiences |
| 2020–2024 | Jemily Rime | York | Interactive and personalised podcasting with AI-driven audio production tools |
| 2020–2024 | Harnick Khera | QMUL | Informed source separation for multi-mic production |
| 2019–2023 | Angeliki Mourgela | QMUL | Automatic mixing for hearing-impaired listeners |
| 2018–2022 | Jeff Miller | QMUL | Music recommendation for BBC Sounds |
| 2018–2021 | Daniel Turner | York | AI-driven soundscape design for immersive environments |
| 2016–2021 | Craig Cieciura | Surrey | Device orchestration rendering rules |
| 2019–2020 | Adrià Cassorla | York | Binaural monitoring for orchestrated experiences |
| 2016–2020 | Lauren Ward | Salford | Improving broadcast accessibility for hard-of-hearing individuals: using object-based audio personalisation and narrative importance |
| 2012–2019 | Chris Pike | York | Evaluating the perceived quality of binaural technology |
| 2013–2018 | Chris Baume | Surrey | Semantic audio tools for radio production |
| 2014–2018 | Michael Cousins | Southampton | The diffuse sound object |
| 2014–2018 | Tim Walton | Newcastle | The quality of experience of next generation audio |
| 2011–2016 | Darius Satongar | Salford | Simulation and analysis of spatial audio reproduction and listening area effects |
| 2011–2015 | Paul Power | Salford | Subjective evaluation of 3D surround systems |
| 2011–2015 | Anthony Churnside | Salford | Object-based radio: effects on production and audience experience |
| 2011–2015 | Tobias Stokes | Surrey | Improving the perceptual quality of single-channel blind audio source separation |

Industrial Placements

On occasion, we host short industrial placements from PhD or Masters students:

| Year | Student | University | Description |
| --- | --- | --- | --- |
| 2021 | Josh Gregg | York | Audio personalisation for accessible augmented reality narratives |
| 2020 | Edgars Grivcovs | York | Audio Definition Model production tools for NGA and XR |
| 2020 | Danial Haddadi | Manchester | Audio device orchestration tools and trial productions |
| 2019 | Valentin Bauer | QMUL | Audio augmented reality |
| 2019 | Ulfa Octaviani | QMUL | Remote study on enhanced podcast interaction |
| 2019 | Emmanouil Theofanis Chourdakis | QMUL | Automatic mixing for object-based media |
| 2018 | Jason Loveridge | York | Device simulation plug-in |
| 2016 | Michael Romanov | IEM | Ambisonics and renderer evaluation |
| 2014 | Adib Mehrabi | QMUL | Music thumbnailing for BBC Music |
| 2014 | James Vegnuti | QMUL | User experience of personalised compression using the Web Audio API |
| 2013 | Nick Jillings, Zheng Ma | QMUL | Personalised compression using the Web Audio API |
| 2011 | Martin Morrell | QMUL | Spatial audio system design for surround video |

Project Team

  • Chris Baume, Lead Research Engineer
  • Jon Francombe, Lead Research & Development Engineer
  • Chris Pike, Lead R&D Engineer (Audio)
  • Immersive and Interactive Content section: a group of around 25 researchers investigating ways of capturing and creating new kinds of audio-visual content, with a particular focus on immersion and interactivity.
