Production Expert


Is Normalisation Still Important?

In audio, ‘normalise’ refers to matching the level of an audio file to a chosen target value. To many people of my age and older, normalise was a function of audio editing software used to pre-process a sample to make it as loud as possible without clipping. I can remember using my copy of Cool Edit Pro to process drum loops before saving them onto floppies for use with an Akai sampler. In those days, normalising usually meant peak normalising to 0dBFS: a shortcut to get as much level as possible without clipping.

Those were simpler times, and before long I found out that it could be a good idea to normalise to a level other than 0dB. Soon I discovered the potential of RMS normalisation for making a collection of files sound approximately the same level, something which definitely didn’t happen with peak normalisation.

All of this was a very long time ago and things are very different today. But although it’s unlikely anyone will be peak normalising to 0dB very often in 2023, normalising hasn’t gone away.

Peak Normalisation

Peak normalisation is the simplest form of normalisation. You specify a target, the computer finds the highest sample value in the audio being processed, and it applies to the whole file whatever gain is needed to raise or lower that peak to the target. At one time this was the way to get your audio as loud as possible without clipping, and it didn’t change the dynamics of the audio, just the level. Tools like the Waves L1 limiter had already been introduced by the mid nineties, though it wasn’t until the turn of the millennium that things really hotted up with the release of the L2.
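As a sketch, the whole process boils down to one gain calculation. This is illustrative Python (the function name and sample values are my own, not from any particular tool), working on samples as floats in the -1.0 to +1.0 range:

```python
def peak_normalise(samples, target_dbfs=0.0):
    """Scale all samples so the highest absolute sample hits target_dbfs."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)  # silence: nothing to scale
    target_linear = 10 ** (target_dbfs / 20)  # e.g. 0 dBFS -> 1.0
    gain = target_linear / peak               # one gain for the whole file
    return [s * gain for s in samples]

quiet = [0.0, 0.25, -0.5, 0.1]     # peak sample is 0.5 (-6 dBFS)
loud = peak_normalise(quiet)       # gain of 2.0 applied; peak is now 1.0
```

Because a single gain is applied to every sample, the dynamics are untouched; only the overall level changes.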

The desire to push overall level beyond the limit imposed by the loudest peak drove the adoption of look-ahead limiters, and the move from 16 to 24 bit, and now 32 bit floating point, audio means that peak normalisation is no longer as important as it once was. However, the basic process remains the same to this day.

True Peak Vs Sample Peak

Unfortunately, just because you’ve normalised your audio to 0dBFS it doesn’t mean your audio won’t clip. Because of the way converters reconstruct the waveform between samples, the sample peak value isn’t the end of the story. Peaks above 0dB can occur between samples, and ‘true peak’ is the reading you’ll find in well-specified metering and limiting plugins to catch them. Even if you’re not getting a true peak clip, depending on where your audio is going to end up it might well be wise to leave a surprising amount of headroom below 0dBFS. Lossy compression algorithms are particularly susceptible to these issues.
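You can see the gap between sample peak and true peak with a classic illustration: a sine at a quarter of the sample rate, sampled so the sample instants fall between the waveform’s actual peaks. This is a hypothetical Python sketch, not a real true peak meter (real meters oversample the signal to estimate the reconstructed waveform):

```python
import math

fs = 48000.0
freq = fs / 4          # sine at a quarter of the sample rate
phase = math.pi / 4    # sample instants land between the waveform's peaks

samples = [math.sin(2 * math.pi * freq * n / fs + phase) for n in range(16)]

sample_peak = max(abs(s) for s in samples)   # ~0.707, i.e. about -3 dBFS
# The continuous waveform these samples describe still peaks at 1.0 (0 dBTP),
# so peak-normalising these samples to 0 dBFS would push the reconstructed
# waveform to roughly +3 dBTP - a true peak overload with no sample over full scale.
overshoot_db = 20 * math.log10(1.0 / sample_peak)   # ~3 dB
```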

This is quite a big subject but it’s well worth getting familiar with these issues. The best explanation I’ve seen is from a presentation by Thomas Lund, formerly of TC Electronic, now at Genelec. The video below is old now, long and detailed, but it’s one of the most enlightening I’ve seen on this subject. Highly recommended.

RMS Normalisation

Root Mean Square (RMS) normalisation takes a different approach by considering the average power of the audio signal. It analyses the amplitude of the entire audio file and adjusts it to a specific target level, taking into account the energy distribution throughout the audio rather than a single peak.

By using RMS normalisation, audio professionals could ensure that the average level across a batch of files stayed more consistent than with peak normalisation, resulting in a more even playback experience. The problem is that RMS normalisation does not directly address the subjective perceived loudness of the audio. What we experience as ‘loud’ is much more complicated than that and doesn’t correlate particularly well with RMS levels.
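Here’s a minimal sketch of RMS normalisation, again in illustrative Python with hypothetical names. Note that matching an RMS target says nothing about the peaks, so unlike peak normalisation the result can end up clipping:

```python
import math

def rms(samples):
    """Root mean square: the square root of the mean of the squared samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_normalise(samples, target_dbfs=-20.0):
    """Apply one gain so the file's RMS level lands on target_dbfs."""
    current = rms(samples)
    if current == 0.0:
        return list(samples)  # silence
    gain = 10 ** (target_dbfs / 20) / current
    return [s * gain for s in samples]  # beware: peaks may now exceed 1.0

out = rms_normalise([0.1, -0.2, 0.3, -0.1], target_dbfs=-20.0)
```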

How To Measure Loudness

In the same way that, while we all recognise intelligence when we see it, there is no accurate way to measure it (IQ tests aren’t accurate, if that’s what you are thinking), it’s only relatively recently that we have developed a reliable way to measure perceived loudness, and with it the Loudness Unit (LU). This was a huge deal when it was introduced, and this site has done more than most to spread awareness and understanding of loudness. If you need to know more, check out our article Loudness - Everything You Need To Know

Loudness Normalisation

If you can reliably measure loudness then you can perform loudness normalisation and achieve the goal of ensuring that every piece of audio has the same perceived loudness.

Streaming platforms such as Spotify and Apple Music employ loudness normalisation to provide a consistent listening experience for their users. By using algorithms that measure the loudness of each track, streaming platforms adjust the playback level to a standardised loudness target, expressed in LUFS (Loudness Units relative to Full Scale).
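In its simplest form, the playback adjustment is just the difference between the track’s measured loudness and the platform’s target. The -14 LUFS figure below is the commonly quoted Spotify default; the exact targets, and how each platform handles tracks that would need boosting, vary, so treat this as a sketch rather than any platform’s actual implementation:

```python
def playback_gain_db(track_lufs, target_lufs=-14.0):
    """Gain (in dB) a player applies to bring a track to the loudness target.
    Negative result = turn down, positive = turn up."""
    return target_lufs - track_lufs

# A hot master at -7 LUFS gets turned down by 7 dB on playback,
# removing any advantage gained by mastering it louder.
hot_master = playback_gain_db(-7.0)    # -7.0
quiet_track = playback_gain_db(-20.0)  # +6.0 (platforms often limit such boosts)
```

This is exactly why the loudness war logic collapsed: the extra level is simply undone at playback, leaving only the squashed dynamics.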

This might sound like a technical detail which, while relevant to the people producing content, isn’t something the average listener will care about. But unlike many (most?) technical details in audio, loudness normalisation has genuinely improved the experience of the listening public. In broadcast and film, loudness specs mean the listener enjoys a more consistent experience, without the need to adjust the volume during ad breaks or from song to song when streaming music. Loudness normalisation has proved to be the development which finally ended the ‘Loudness War’, in which for years music had prioritised attention-grabbing level at the expense of quality.

Album Normalization

The benefits of loudness normalisation have been profound, but it’s not perfect. One of its assumptions was that all music is supposed to be the same volume, and that just isn’t the case. A full-on track is supposed to sound louder than a sparse ballad, something mastering engineers working on album projects have understood forever. This has led to an approach called album normalisation, where artistic intention is factored in, allowing for a deliberate rise and fall through the duration of a long-form work.
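The difference between the two modes can be shown with a toy calculation (all loudness figures below are hypothetical). In track mode each track is pulled to the target individually, flattening the contrast the mastering engineer built in; in album mode a single offset, derived from the album’s overall loudness, is applied to every track, so the relative levels survive:

```python
tracks = [-8.0, -16.0]   # a full-on single and a sparse ballad, in LUFS
album = -10.0            # integrated loudness measured over the whole album
target = -14.0           # playback target in LUFS

# Track mode: each track normalised on its own - the ballad is boosted
# and the 8 dB of intended contrast between the two tracks disappears.
track_mode = [target - t for t in tracks]       # [-6.0, +2.0]

# Album mode: one offset for the whole album - the contrast is preserved.
album_mode = [target - album] * len(tracks)     # [-4.0, -4.0]
```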

To find out more about the role of loudness normalisation in streaming, including a discussion of album normalisation, check out our podcast with Rob Byers and Bob Katz for the details from people who really know their stuff!


Although floating point processing in DAW mixers, plugins and file formats has made things much more forgiving when it comes to managing levels and avoiding clipping, the role normalising has played, and continues to play, in professional audio can’t be overstated. Understanding the common ground between old-fashioned peak normalisation and current loudness normalisation, how True Peak, Integrated Loudness and Loudness Range measurements relate to these approaches, and the role limiters take in manipulating these values, is fundamental to the work of any audio professional.

Peak normalising isn’t something I’ve done often recently, but it casts a long shadow. If you ever find yourself wondering whether the clip gain you just dialled in is going to make your audio clip at some point, maybe you should reach for normalise instead. It’s still there, even if you haven’t used it recently!


Photo by Matthias Groeneveld