Production Expert

What Is A Dolby Atmos ADM?

Summary

‘ADM’ is an acronym which has entered more and more audio engineers’ vocabularies, but what exactly is an ADM? How is it different from a WAV file and what does ADM even stand for?

Going Deeper

When we were mixing in stereo, the deliverable at the end of the process was a regular audio file, usually a WAV file. With the increasing popularity of Dolby Atmos, a new deliverable has emerged: the ADM file. Most people know that this is the Dolby Atmos ‘master file’, as opposed to the various deliverables that derive from it, but what does ADM stand for, and what does it mean? We're not going to explain ADM files in detail here; in fact, once you know a little more about ADM, you'll probably think that calling these files ‘ADM files’ at all is something of a misnomer. The inner workings of ADM aren't something that we, as users of the technology, need to understand deeply, much as we don't need to understand C++ to use our plug-ins. However, knowing at least what ADM is and why it exists has to be a good thing.

In this article, we are going to take a superficial look at what ADM is and why it exists, and consider examples of the ways in which defining the audio contained in our files is both necessary and helpful.

What is ADM?

ADM stands for Audio Definition Model, and it does precisely that: it is a way to define what your audio contains and how it should be used. The data in the model isn't the audio data itself; it is information about the audio, i.e. it is metadata (data about data). ADM metadata is presented as XML, which is simple, widely used and ‘human readable’, meaning it isn't a bunch of ones and zeroes, though as an end user I still have little desire to actually read it! What is more relevant to us is what kind of information it contains and why it is necessary.
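To see what ‘human readable’ means in practice, here is a heavily simplified sketch of metadata in the spirit of ADM, read with standard XML tools. The real schema (ITU-R BS.2076) is far more detailed than this; treat the fragment as illustrative only.

```python
import xml.etree.ElementTree as ET

# A heavily simplified, hypothetical fragment in the spirit of ADM metadata.
# Real ADM XML carries far more structure; this is illustrative only.
adm_xml = """
<audioFormatExtended>
  <audioObject audioObjectID="AO_1001" audioObjectName="Lead Vocal"/>
  <audioObject audioObjectID="AO_1002" audioObjectName="Crowd Ambience"/>
</audioFormatExtended>
"""

root = ET.fromstring(adm_xml)
names = [obj.get("audioObjectName") for obj in root.iter("audioObject")]
print(names)  # the metadata is plain text, so any XML parser can read it
```

The point is not the specific element names but that the definition of the audio travels as ordinary, inspectable text alongside the audio itself.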

If you have any knowledge of Dolby Atmos you may be familiar with the idea of attaching panning metadata to audio, because that is central to the use of Objects in audio workflows. In a Dolby Atmos mix, you can assign a track in your DAW either to a Bed, in which case the channel-based workflow we are familiar with from stereo and surround formats works in essentially the same way, or to an Object, in which case the audio also carries metadata telling the Dolby Atmos renderer how that particular audio should be treated, particularly in terms of its panning. This is one example of using metadata, but it is only one such example.
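To make the idea of panning metadata concrete, here is a hypothetical sketch of the kind of position data an Object might carry, and of how a renderer could interpolate between timed positions. The field names and coordinate convention are assumptions for illustration, not the real ADM or Atmos schema.

```python
# Hypothetical sketch of per-Object panning metadata. Field names and the
# (time, x, y, z) convention are illustrative, not the real schema.
object_metadata = {
    "object_id": 1,
    "positions": [  # (time_seconds, x, y, z); the renderer moves the sound between these
        (0.0, -1.0, 0.0, 0.0),   # hard left at floor level
        (2.0,  0.0, 1.0, 1.0),   # front centre, fully overhead
    ],
}

def position_at(meta, t):
    """Linearly interpolate the Object's position at time t — a sketch of
    what a renderer might do with this metadata."""
    pts = meta["positions"]
    for (t0, *p0), (t1, *p1) in zip(pts, pts[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return tuple(a + f * (b - a) for a, b in zip(p0, p1))
    return tuple(pts[-1][1:])  # hold the last position after the final point

print(position_at(object_metadata, 1.0))  # halfway along: (-0.5, 0.5, 0.5)
```

The audio samples themselves never change; only the attached metadata tells the renderer where in the room the sound should appear at each moment.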

You might think that metadata isn't relevant to channel-based audio, but actually it is. As soon as we have more than one audio channel to deal with, we need to understand what each one is and how it relates to the other channels that are part of the same programme material. If we are dealing with stereo, we need to know which channel is the left and which is the right; if we are dealing with surround, we need to know what each channel is, especially as in these cases there is more than one standard channel order. In these examples simple labelling will suffice, but as complexity grows, the need for something more flexible grows with it.
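The channel-order problem is easy to demonstrate. 5.1 audio is commonly stored in either SMPTE/ITU order or film order, and without labelling there is no way to tell which was used. The sketch below shows why labels matter: once each channel is named, remapping between conventions is trivial.

```python
# Two common 5.1 channel orders — the same six channels, stored in different
# positions within the file. Without labelling, a decoder cannot tell which
# convention was used.
SMPTE_ORDER = ["L", "R", "C", "LFE", "Ls", "Rs"]
FILM_ORDER  = ["L", "C", "R", "Ls", "Rs", "LFE"]

def reorder(samples, src_order, dst_order):
    """Remap one frame of per-channel samples from one order to another."""
    by_label = dict(zip(src_order, samples))
    return [by_label[label] for label in dst_order]

frame = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]          # one frame in SMPTE order
print(reorder(frame, SMPTE_ORDER, FILM_ORDER))   # [0.1, 0.3, 0.2, 0.5, 0.6, 0.4]
```

Play a SMPTE-ordered file as if it were film-ordered and the centre and right channels swap, and the LFE ends up in a surround speaker; the labels are what prevent that.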

Nope! I'm not going to claim I understand this diagram of the overall ADM structure (courtesy of the EBU website), but it does illustrate how the metadata is separate from the BWF audio file, and that the Format part of the metadata is independent of the Content part, though they, and the BWF, all reference each other.

More Complexity, More Metadata

When dealing with more esoteric audio such as Ambisonics, we need to know what each channel is so that it can be dealt with appropriately, and it is exactly this need that ADM seeks to address. A technical point: there are two uses of the word ‘Object’ when it comes to audio. Any audio which has metadata attached to it is technically an Object, but the term is more often used specifically to refer to audio which has positional panning data attached to it, particularly in Dolby Atmos. However, we hear about audio Objects in other contexts, for example in personalising audio content. A good example of an Object (in the non-Atmos-specific sense) which uses metadata for something other than placement in the soundfield would be different language versions. If you want a French-language version, ADM offers a way to identify which track is carrying French dialogue as opposed to English or Spanish. The audio is defined. Things naturally get more sophisticated than this; for example, ADM can define groups of related channels which belong together, known in ADM speak as a ‘Pack’.
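The language-version idea can be sketched in a few lines. The track structure and field names below are assumptions for illustration, not the real ADM schema; the point is that content metadata lets a playback system assemble the right programme without touching the audio itself.

```python
# Hypothetical sketch of selecting a dialogue track by language using
# content metadata. The dict structure is illustrative, not the real schema.
tracks = [
    {"name": "Dialogue EN", "kind": "dialogue", "language": "en"},
    {"name": "Dialogue FR", "kind": "dialogue", "language": "fr"},
    {"name": "Music",       "kind": "music",    "language": None},
]

def select_programme(tracks, language):
    """Keep all non-dialogue tracks plus the dialogue in the requested language."""
    return [t["name"] for t in tracks
            if t["kind"] != "dialogue" or t["language"] == language]

print(select_programme(tracks, "fr"))  # ['Dialogue FR', 'Music']
```

Every listener receives the same file; the metadata is what lets each of them hear a different version of it.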

So the ADM describes what the audio is, both in terms of specific technical details (e.g. it is panned directly overhead) and its content (e.g. it contains dialogue in French). This information is used by the renderer either to generate the necessary combination of channel-based audio for the speaker system to represent that Object at that location, or to decide whether or not to play the audio at all (e.g. when selecting different language channels). The ADM doesn't tell the renderer how to do the rendering, but rather what the renderer has available to it and what it ought to create from that.

The ‘ADM file’ we refer to as a master file for a Dolby Atmos session is a BWF WAV file containing all the metadata the Atmos renderer needs to create the mix. This metadata follows the standard set out in the Audio Definition Model, and it is this metadata which makes this Broadcast Wave File an ‘ADM’. But the Audio Definition Model is flexible by design and is used far more widely than just for Dolby Atmos.

If you want to know more detail on the Audio Definition Model check out this comprehensive primer on the EBU website.