MPEG-H Current State of Production Tools

We currently have two major authoring toolkits for producing MPEG-H content:

The EAR Production Suite (EPS): https://ear-production-suite.ebu.io/
The MPEG-H Authoring Suite (MAS): https://www.iis.fraunhofer.de/en/ff/amm ... e/mas.html

The two have their differences, but they are quite similar in that both provide the basic features of the ADM format following the MPEG-H technical recommendations.

In my opinion, the MAS has the edge on UI/UX, featuring all the tools in one window, while the EPS provides a modular environment of separate plugins that work together.

In its latest update, the EPS also included a plugin that allows authors to inject scene-based audio (specifically ambisonic and higher-order ambisonic content) into the MPEG-H format. I'm waiting to see whether the MAS will also include ambisonic input, as for me it is one of the best solutions to the inherent problem of "which reverb to use" in a channel-agnostic approach, given that most reverb plugins are channel-based.
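
To illustrate why scene-based input solves this, here is a minimal sketch (my own example, not tied to either toolkit) of encoding a mono source into first-order ambisonics, assuming ACN channel ordering and SN3D normalization as in AmbiX; an ambisonic reverb can then process the scene directly, without knowing the final speaker layout:

```python
import numpy as np

def foa_encode(mono: np.ndarray, azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """Encode a mono signal into first-order ambisonics (ACN order, SN3D norm).

    Azimuth is counterclockwise (positive = left), elevation positive = up.
    Returns a (4, n_samples) array with channels W, Y, Z, X.
    """
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    gains = np.array([
        1.0,                      # W: omnidirectional
        np.sin(az) * np.cos(el),  # Y: left-right
        np.sin(el),               # Z: up-down
        np.cos(az) * np.cos(el),  # X: front-back
    ])
    return gains[:, None] * mono[None, :]

# Example: a 1 kHz tone placed 30 degrees to the left, slightly raised.
sr = 48_000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
bformat = foa_encode(tone, azimuth_deg=30.0, elevation_deg=10.0)
# An ambisonic reverb now processes these 4 channels as one sound field;
# no assumption about the final speaker layout is needed.
```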

Personally, I'm trying various ambisonic reverbs to see which one I should include in my own NGA production pipeline.

As production-ready tools are built on NGA, ADM, and MPEG-H, I think this will slowly disrupt how we produce audio in the studio.

For now, NGA workflows seem to have solved the panning/spatialization problem in a channel-agnostic way, but there is still work to be done on reverberation and the other acoustic simulations we use in production, like occlusion, distance attenuation, Doppler, and others.
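
To make "distance" and "Doppler" concrete, here is a rough sketch of those two simulations; the speed-of-sound constant, reference distance, and rolloff parameter are my own assumptions, not values taken from any of the standards above:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def distance_gain(distance_m: float, ref_m: float = 1.0, rolloff: float = 1.0) -> float:
    """Inverse-distance attenuation relative to a reference distance.

    rolloff=1.0 is the physical 1/r law (-6 dB per doubling of distance);
    smaller values give the gentler curves often preferred in games.
    """
    return (ref_m / max(distance_m, ref_m)) ** rolloff

def doppler_ratio(radial_velocity_mps: float) -> float:
    """Pitch ratio for a moving source and a static listener.

    Positive radial velocity means the source approaches the listener.
    """
    return SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_velocity_mps)

# Example: a source 8 m away, approaching at 15 m/s.
print(f"gain: {distance_gain(8.0):.3f}")         # 0.125, about -18 dB
print(f"pitch ratio: {doppler_ratio(15.0):.3f}") # ~1.046, just under a semitone up
```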

At SoundFellas we created a mixing workflow we call True3D Audio™; you can learn more here: https://soundfellas.com/technologies/true3d-audio/. True3D Audio™ allows us to composite our audio productions using open, high-fidelity formats while being able to render to any target format, from stereo to surround and from ambisonics to NGA. It's a mix-once-render-anywhere solution that took us about three years to develop and test, so it is ready to produce audio content to the latest mastering standards for any industry, including music, film, games, and extended reality.
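
As a rough illustration of the mix-once idea (a toy decoder, not our actual True3D Audio™ renderer), the first-order ambisonic master from the encoding sketch above can be decoded to any horizontal layout after the fact, here with simple virtual cardioid microphones:

```python
import numpy as np

def decode_foa_horizontal(bformat: np.ndarray, speaker_azimuths_deg) -> np.ndarray:
    """Decode FOA (ACN/SN3D channels W, Y, Z, X) to a horizontal speaker ring.

    Each speaker feed is a virtual cardioid microphone aimed at that speaker;
    real decoders are more sophisticated, but the point stands: the layout is
    chosen at render time, not at mix time.
    """
    W, Y, Z, X = bformat  # Z is unused for a horizontal-only layout
    feeds = []
    for az_deg in speaker_azimuths_deg:
        az = np.radians(az_deg)
        feeds.append(0.5 * (W + np.sin(az) * Y + np.cos(az) * X))
    return np.stack(feeds)

# The same 4-channel master renders to any layout, e.g. with `bformat`
# from the earlier encoding sketch:
# stereo = decode_foa_horizontal(bformat, [30.0, -30.0])                # L, R
# quad   = decode_foa_horizontal(bformat, [45.0, 135.0, -135.0, -45.0])
```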

Many of the custom solutions we use to create the acoustics that give a 3D sound production the realism it needs can be found in our Echotopia Soundscape Designer application (learn more here: https://soundfellas.com/software/echotopia/), and we are open to discussing with vendors that develop immersive tools how to incorporate those solutions into future authoring tools or open formats where possible.

By allowing us to inject scene-based content into the MPEG-H format and export it to ADM, the EPS is a truly valuable tool for our next-gen production pipeline, letting us produce immersive film sound, music, radio dramas, and more. I truly hope the MAS will also take this path and incorporate the production features needed outside the broadcasting and streaming sectors, in film and music production.

I would also like to see the ability to export to scene-based formats from both the MAS and the EPS, so our production pipeline could also output game content using the same authoring tools. A custom channel-based renderer would also be very welcome for productions that require custom channel configurations, like theaters, installations, museums, stadiums, arenas, yoga studios, and high-end gaming.
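
By a custom channel-based renderer I mean something along these lines: a sketch of pairwise amplitude panning (2D VBAP-style) over an arbitrary speaker ring, which already covers many installation layouts; the function and the example ring are hypothetical:

```python
import numpy as np

def pan_2d(source_az_deg: float, speaker_az_deg) -> np.ndarray:
    """Pairwise amplitude panning (2D VBAP-style) over an arbitrary speaker ring.

    Returns one gain per speaker (in sorted-azimuth order); only the pair
    bracketing the source direction gets non-zero gain. Assumes no pair of
    adjacent speakers sits exactly 180 degrees apart.
    """
    azimuths = np.radians(np.sort(np.asarray(speaker_az_deg, dtype=float)))
    src = np.radians(source_az_deg)
    unit = lambda a: np.array([np.cos(a), np.sin(a)])
    n = len(azimuths)
    gains = np.zeros(n)
    for i in range(n):
        a1, a2 = azimuths[i], azimuths[(i + 1) % n]
        span = (a2 - a1) % (2 * np.pi)     # arc covered by this speaker pair
        offset = (src - a1) % (2 * np.pi)  # where the source sits on that arc
        if offset <= span:
            # Solve for the two gains so the gain-weighted speaker vectors
            # point at the source, then normalize for constant power.
            L = np.column_stack([unit(a1), unit(a2)])
            g = np.clip(np.linalg.solve(L, unit(src)), 0.0, None)
            g /= np.linalg.norm(g)
            gains[i], gains[(i + 1) % n] = g
            break
    return gains

# Example: an irregular five-speaker installation ring.
print(pan_2d(20.0, [0.0, 60.0, 140.0, -140.0, -60.0]))
```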

Any thoughts are welcome.