
Back in June 2018 we reported on the release of the EBU ADM Renderer (EAR), which is specified in EBU Tech 3388. This is a system for rendering the types of content defined by the Audio Definition Model (ADM) to any defined loudspeaker layout.
The EAR is part of our wider efforts to standardise open formats for working with so-called 'next-generation audio' (NGA), which aims to make audio experiences more accessible and immersive. In addition to the ADM as a way of representing NGA content, a renderer (such as the EAR) is an important piece of the puzzle, since it defines what the parameters in the format actually mean in terms of the signals played out of the loudspeakers.
In July, the ITU published Recommendation ITU-R BS.2127-0, "Audio Definition Model renderer for advanced sound systems", which is based on the EAR algorithm. The ITU ADM Renderer also comes with a Python implementation, which allows developers to try out the algorithm easily. While the Python implementation is great for experimental work, for understanding the algorithm, and for validating ADM files, it isn't suitable for real-time applications. A C++ library containing the core EAR functionality has therefore been developed.
libear
We developed libear in collaboration with IRT, as part of our work within the EBU. The library is available under the permissive Apache 2.0 licence. Libear contains just the core parts of the EAR project (the calculation of gains and some DSP components); we recommend using it together with libbw64 and libadm (both developed by IRT) when building applications which also need to read, write and process ADM content.
There are several potential applications for a library like libear. It could be included in a Digital Audio Workstation (DAW) to render NGA content, either integrated directly or as a suite of plugins. It could be built into a stand-alone ADM monitoring system, or used to render ADM content to legacy formats before emission.
The current release of the library supports channel-based, scene-based and object-based audio, though some parameters are not yet implemented; full details can be found in the documentation. The API, however, is already complete, so there's no need to wait before starting integration work.
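To give a flavour of what integration looks like, here's a minimal sketch of calculating panning gains for a single audio object on a 5.1 layout. It follows the pattern shown in the libear documentation, but treat the exact type and function names (getLayout, GainCalculatorObjects, ObjectsTypeMetadata) as illustrative and check the current API reference before relying on them.

```cpp
#include <iostream>
#include <vector>

#include <ear/ear.hpp>

int main() {
  // Look up a standard loudspeaker layout by its BS.2051 name;
  // "0+5+0" is plain 5.1 surround.
  ear::Layout layout = ear::getLayout("0+5+0");

  // Build a gain calculator for object-based content on that layout.
  ear::GainCalculatorObjects gainCalc(layout);

  // Metadata for one audio object: 30 degrees to the left, at ear height.
  ear::ObjectsTypeMetadata metadata;
  metadata.position = ear::PolarPosition(30.0, 0.0, 1.0);

  // Calculate one gain per loudspeaker; applying these gains to the
  // object's mono signal produces the feed for each speaker.
  std::vector<float> gains(layout.channels().size());
  gainCalc.calculate(metadata, gains);

  for (size_t i = 0; i < gains.size(); ++i)
    std::cout << layout.channels()[i].name() << ": " << gains[i] << "\n";
}
```

The same pattern applies to the other content types: dynamic scenes are handled by recalculating the gains as the metadata changes and interpolating between the results using the library's DSP components.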
Do feel free (and it is free!) to download the libear library, and have a go at building it and integrating it into your applications. You can also read IRT's latest blog on this topic.
What's Next?
We'll continue to work on libear in the coming months, implementing the missing features and responding to feedback from users. We've got a few projects using libear already; we'll publish more details when they are released.
- IRT - More open-source for open object-based audio workflows
- BBC R&D - Casualty, Loud and Clear - Our Accessible and Enhanced Audio Trial
- BBC R&D - The Mermaid's Tears
- BBC R&D - Audio Research
- BBC R&D - Responsive Radio
- BBC R&D - 5 live Football Experiment: What We Learned
- BBC R&D - Object-Based Media
- BBC R&D - ORPHEUS
- BBC R&D - Cook-Along Kitchen Experience
- Immersive Audio Training and Skills from the BBC Academy including:
  - Spatial audio: Where do I start?
  - 3D surround sound for the headphone generation
  - Sound Bites - An Immersive Masterclass
  - Sounds Amazing - audio gurus share tips
