
Is Real Time Noise Reduction Better Than Offline?

In this article, Damian Kearns considers the two approaches to noise reduction, real time and offline, and asks which is better and why.

Do you prefer turning noise down in real time or letting a third-party program do the processing for you? Which one works better: hardware or software? What’s the best approach? In this article, we aim to break down the choices.

A Bit Of History: From Dolby to CEDAR

Before I get into answering the tough question of which noise reduction method works best, I thought it might be useful to provide a brief bit of history for context on the development and workings of these critical tools. In these times when “RX’ing” something has become a commonly used industry verb, it’s worth pointing out that other companies did a lot of work to bring us to the tools we use today.

To me, the story starts with Ray Milton Dolby OBE (1933-2013). Ray’s company, Dolby Laboratories, began developing noise reduction hardware in the mid-1960s to counteract the inherent noise floor of analogue recording, with Dolby A being the first professional product available to recording studios. Dolby A was a simple 4-band, 4-filter expander/compressor unit. This ‘compander’ had a threshold of -40 dB and a 2:1 ratio, yielding about 10 dB of noise reduction, possibly increasing to 15 dB at 15 kHz. Given that lower frequency audio isn’t burned into tape as deeply as higher frequencies, this is not a surprising revelation. Another point to be made is that using filters to effect noise reduction does, in fact, inherently add phase distortion to audio. This is always a bit of a trade-off, whether using an EQ or any other type of processing that filters audio over a frequency range.

The first time I encountered noise reduction hardware was likely Dolby A attached to a 1/4", 2-track machine, or the B, C and S types in consumer and professional cassette decks. In my early career, Dolby SR cards were installed on every channel of the Studer, MCI and Sony 2-, 16- and 24-track machines I used (as well as film dubbers) to wrestle down the noise floor of analogue tape. Dolby SR, as opposed to Dolby A, could provide up to 25 dB of noise reduction. It had a great sound and many of us who used it still think of SR on analogue tape as the apex of audio.

One day, my boss sat me down and showed me how he used a Dolby CAT43 device to shape the noise floor of his dialogue premix. The ability to manually control 4 separate bands of noise reduction in real time put the CAT43 on every serious post audio mixing stage I worked on in Toronto and Vancouver. Later supplanted by the Dolby 430 and the CEDAR DNS1000, the CAT43 used an expansion/compression scheme called ‘pre-emphasis’, which boosted frequency content above a threshold (typically set around the noise floor) at the input to its hardware and reduced it on the output, thereby driving the noise floor lower in level. The amount of reduction was determined by the slider positions. It’s important to understand that all of the Dolby noise reduction offerings required proprietary hardware. These were not PC-based solutions. Computers were not generally used for this sort of thing back then, though the work to develop cost-effective PC-based and Mac-based solutions was well underway.

The Waves W43 is the modern plugin alternative to this historical CAT43 hardware set.

Waves W43 Plugin based on Dolby CAT43 Hardware

CEDAR started looking into PC-based audio restoration in 1983, working with Neve Electronics and Cambridge University to eventually develop a prototype in 1987. In 1990, CEDAR started offering PC-based solutions. Throughout the 1990s, CEDAR continued to produce various products that incorporated proprietary hardware and PCs to provide serious solutions for noise reduction.

In 2000, CEDAR introduced us to the DNS1000, and then the Pro Tools friendly DNS2000. These products brought noise reduction options not just to large facilities, but also to the small and mid-sized studios struggling to clean up field recordings and audio archives.

I used a Dolby 430 a lot during the late 1990s, but hardware and software offerings by CEDAR and their competitors brought us into the modern age of digital noise reduction in ways that the venerable Dolby 430 could not. CEDAR’s products for Pro Tools were simply superior in every way, so the old Dolby hardware, as helpful as it was, quickly fell from use.

CEDAR DNS 1000

Multiband expansion/compression schemes are at the heart of many of the real time noise suppression tools available. They typically involve algorithms that are written or trained to identify noise and adjust their responses appropriately. These computer-based, real time algorithms are usually written to function well during playback, and they are less demanding on system resources than the deeper, machine-learning algorithms used by today’s advanced software offerings.
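
To make the idea concrete, here is a minimal sketch of a multiband downward expander in Python (numpy/scipy). The band edges, threshold and ratio are made-up illustration values, and this is the general technique only, not any vendor’s algorithm; real products add gain smoothing, better detectors and psychoacoustic tuning.

```python
# Minimal sketch of a multiband downward expander (illustrative only).
# Band edges, threshold and ratio are arbitrary; real products use tuned
# detectors, smoothed gain changes and far more sophisticated noise models.
import numpy as np
from scipy.signal import butter, sosfilt

def band_split(x, sr, edges=(300, 2000, 6000)):
    """Split a mono signal into low / mid / high-mid / high bands."""
    lows = (None,) + edges
    highs = edges + (None,)
    bands = []
    for lo, hi in zip(lows, highs):
        if lo is None:
            sos = butter(4, hi, btype="low", fs=sr, output="sos")
        elif hi is None:
            sos = butter(4, lo, btype="high", fs=sr, output="sos")
        else:
            sos = butter(4, (lo, hi), btype="band", fs=sr, output="sos")
        bands.append(sosfilt(sos, x))
    return bands

def downward_expand(band, sr, threshold_db=-40.0, ratio=2.0, win=0.02):
    """Push content below the threshold further down (2:1 expansion)."""
    hop = max(1, int(win * sr))
    out = band.copy()
    for i in range(0, len(band), hop):
        seg = band[i:i + hop]
        level_db = 20 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-12)
        if level_db < threshold_db:
            # Each dB below the threshold costs a further (ratio - 1) dB.
            gain_db = (level_db - threshold_db) * (ratio - 1)
            out[i:i + hop] = seg * 10 ** (gain_db / 20)
    return out

def denoise(x, sr):
    """Expand each band separately, then sum the bands back together."""
    return sum(downward_expand(b, sr) for b in band_split(x, sr))
```

Run over a noisy recording, anything above the threshold passes through untouched while the noise floor in each band is pushed down, which is roughly what the CAT43’s per-band sliders let an operator control by hand.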

Due to their complexity, computer-based, AI-driven noise reduction tools, though sometimes available as real time processors, typically shine as offline processes, where they can perform more complex operations without delay being a concern. For instance, iZotope’s RX Guitar De-noise can be operated in real time, but check out the delay in this picture: 13311 samples! This happens to be the best tool I own for buzz removal from dialogue, and the reason I use it offline rather than in real time is so I can target frequency bands with precision inside the spectrographic editor.

Guitar De-noise by iZotope

As mentioned, some real time software requires physical hardware to optimize processing and keep system delay to a minimum. We have various offerings from CEDAR, who really dominate this realm, and recently Universal Audio got into the game, thanks to CEDAR and C-Suite Audio, with their C-Vox software. This software utilizes the UA DSP hardware to essentially do what CEDAR’s own hardware has been doing for decades.

A Sonic Solution?

Sonic Solutions NoNoise TDM Broadband Denoising

US-based Sonic Solutions was the groundbreaker for Mac-based noise reduction. When I worked for the Canadian Broadcasting Corporation we had a couple of these standalone digital audio workstations, which were sound editing systems with noise reduction and CD premastering capabilities built in. I think the two DAWs cost about as much as a good-sized, middle-class suburban house!

By today’s standards, Mac-based Sonic Solutions NoNoise (aka the Sonic System) was extremely effective but very slow, often taking all night to process long-form content. The system was revolutionary though, and if you take a look at one of their later NoNoise TDM plugins for Pro Tools from their software suite (pictured above), you can see Sonic Solutions really set the stage for everything we expect from our DAWs today. Unsurprisingly, this almost sci-fi achievement came from a group of former Lucasfilm employees: Robert Doris, James ‘Andy’ Moorer and Mary Sauer, who started the company in 1986.

One great trick to using NoNoise effectively was to sample small bits of content and process these samples to hear the results quickly. This way, the operator could be reasonably sure of the end result before starting the offline processing. I still use this technique today when I’m dealing with lengthy files, though now I’m saving minutes rather than hours of work.
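
The same trick is easy to script today. Here is a rough Python sketch (using the soundfile library, with a placeholder denoise_fn standing in for whatever offline processor you actually use) that writes short denoised excerpts from a few points in a long file, so you can audition the settings before committing to the full pass.

```python
# Preview-before-committing: denoise a few short excerpts of a long file,
# audition them, and only then run the full offline pass.
# denoise_fn is a placeholder for your actual offline NR processor.
import soundfile as sf

def preview_denoise(path, denoise_fn, excerpt_s=10.0, positions=(0.1, 0.5, 0.9)):
    audio, sr = sf.read(path)
    for frac in positions:
        start = int(frac * len(audio))
        stop = min(len(audio), start + int(excerpt_s * sr))
        excerpt = audio[start:stop]
        sf.write(f"preview_{int(frac * 100)}pct.wav", denoise_fn(excerpt, sr), sr)

# Once the previews sound right, process the whole file with the same settings:
# audio, sr = sf.read("long_interview.wav")
# sf.write("long_interview_denoised.wav", denoise_fn(audio, sr), sr)
```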

If you want to read more about this legendary aspect of audio history, I’ve included a link to an article underneath the picture above.

Spectral Editing

iZotope’s RX 9, tackling hum and buzz

CEDAR invented and patented this next form of noise reduction tool as well.


Programs like iZotope’s RX 9 have popularized ‘spectral editing’, a visual interface used to apply various methods of noise reduction. Spectral editing can be considered time-frequency domain editing, since time and frequency no longer need to be linked to one another in the noise reduction process. The user can manually select time and/or frequency regions to precisely reduce or remove undesired aspects of an audio signal while leaving the desired content untouched. This offline approach has become a universally accepted method of audio processing over the past 10 years or so, and whether or not you use their specific software, iZotope has done much to popularize it.
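
Conceptually, a spectral edit is just a gain change applied to a selection in the time-frequency plane rather than along the timeline. The Python sketch below (scipy’s STFT/ISTFT, with hypothetical selection boundaries) shows only that core idea; real spectral editors add attenuation shaping, interpolation from surrounding content and far finer selection tools.

```python
# Bare-bones time-frequency edit: attenuate one rectangle of the spectrogram
# while leaving everything around it untouched (illustrative only).
import numpy as np
from scipy.signal import stft, istft

def spectral_attenuate(x, sr, t_range, f_range, gain_db=-30.0, nperseg=2048):
    """Reduce level inside a time (seconds) / frequency (Hz) selection."""
    f, t, Z = stft(x, fs=sr, nperseg=nperseg)
    rows = (f >= f_range[0]) & (f <= f_range[1])   # frequency selection
    cols = (t >= t_range[0]) & (t <= t_range[1])   # time selection
    Z[np.ix_(rows, cols)] *= 10 ** (gain_db / 20)  # gain only inside the box
    _, y = istft(Z, fs=sr, nperseg=nperseg)
    return y[:len(x)]

# Example: pull 30 dB out of a 100-120 Hz hum between 5 s and 8 s,
# leaving the rest of the clip untouched.
# cleaned = spectral_attenuate(audio, sr, t_range=(5.0, 8.0), f_range=(100.0, 120.0))
```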

And Now…

Sonnox Oxford DeNoiser

I’ve been using noise reduction on my audio in one form or another for over 26 years now and yet I still have some moments where I’m at a loss as to which method might be the right approach. Do I offline process or real time process my audio? There are benefits and compromises to both methods. Let’s try to figure out which one works best.


Is Real Time Processing The Way To Go?

The Pros:

  • Can be applied inside a mix in context with all other elements present, including picture.

  • Can be automated and ‘mixed’ so parameters can be altered in real time.

  • Can be applied in context with other elements while recording or mixing for live events or location recordings.

  • Can be much simpler to use than offline tools. 

The Cons:

  • Typically fewer options for fine control or detailed editing than offline software solutions.

  • System resources can easily be strained by multiple instances of real time NR plugins in a DAW session.

  • Algorithms can often be stripped-down versions of their offline counterparts, yielding lower quality results.

  • If sessions are handed off to other editors or mixers, real time processing requires the next user to have the same plugin/hardware, or there needs to be a mixdown prior to sending out (at which point it becomes an offline bounce anyway).

  • Real time processing is often tied to hardware DSP units, limiting portability or requiring more expensive hardware infrastructure. This can also eliminate the ability to offline bounce a mix and stems (C-Suite Audio’s C-Vox is a notable exception that runs on UA hardware).

Klevgrand’s Brusfri 2 uses multiple gates instead of filters to reduce noise

Is Offline Processing The Way To Go?

The Pros:

  • Offline NR is usually applied in less time than real time playback would take, unless the host computer is slow to process.

  • Offline noise reduction profiles can often be more accurate, allowing access to more advanced algorithms than might reasonably operate in real time.

  • Processing can in some cases be applied in multiple stages, to selected time and/or frequency content.

  • No need for costly external hardware, reducing operating overhead.

  • Settings can be saved as presets or chains and recalled for later use.

  • Processing can be applied to batches of files at once. 

  • More likely to be easily available to subcontractors and other collaborators.

  • Video and audio workstations can often retain settings when transferring via AAF.

The Cons:

  • Offline processing typically cannot be auditioned in place with other elements in a mix.

  • Offline processing can take extra time when settings applied across multiple clips in a track don’t work for every clip, leading to undo/redo operations.

  • Over-processing is much harder to undo on a rendered clip than it is to simply scale back a real time plugin.

Which Way Should I Go?

This is the question many of us ask ourselves on a daily basis. The factors that can determine our choices are:

  • The tools we have at our disposal

  • Familiarity with the tools

  • Time constraints

  • The quality of the end result

  • The costs associated with better quality tools

  • Client/producer preferences for processing levels and noise levels.

A Bit Of Perspective

Acon Digital DeNoise is part of the Restoration Suite 2

I’m sometimes handed sessions from other people with real time plugins engaged, and since I often do my highest quality noise reduction or restoration offline, I can find myself underwhelmed by the results of the real time processing. This is especially true when no attempt has been made to shape the amount of noise reduction through automation of thresholds and reduction faders to keep things sounding consistent. It harkens back to my CAT43 and Dolby 430 days of dynamically altering the amount of NR on a near constant basis.

If I have time, I will undo the real time processing and go with more advanced offline tools where I can go clip by clip, module by module, program by program, until I like what I hear. But if I don’t have time, I usually stick with what I’ve been given, possibly tweaking a bit here and there.

For me, the issues with offline processing have more to do with a lack of mix and story context. For instance, if I can’t see that the person is in a forest stepping on a twig, I might erroneously remove the snap of the twig. I might also remove slightly too much air and find my dialogue sounding more like voiceover than dialogue, not sitting in its rightful place with the right amount of ambience and possibly natural reverberation. Reverb is easy to add back in, but nothing beats the real thing; the same goes for ambience if it’s right for the location.

In the offline workflow model, with the tools I have, I can shape and add loops of ambience and remove just the right amount of reverberation, lip smacks, clicks, pops, buzz, whine, hum and plosives. In a real time workflow, my settings are more likely to end up as a compromise between what mostly works and what mostly doesn’t, but I can get the result quicker and it might not differ much (or at all) from the potentially more precise offline approach.

The Perfect Balance? 

CEDAR DNS 8D

If you ask me, the best approach is offline processing, with a bit of real time processing when needed for finishing. I’ve found that pushing offline noise reduction a little less and then riding a little real time noise reduction allows me to control my noise reduction with finesse and listen in context, reducing a little more only where needed.

I tend to use offline processing more than real time plugins simply because I have access to higher quality algorithms offline, and spectral editing allows me to pick little bits of noise out of a sound, leaving the surrounding content unaffected. While spectral editing, I’m able to employ the widest range of tools to meet the tasks I’m regularly assigned. The costs related to hardware-based noise reduction don’t make a good deal of sense for my home-based studio, so I stick with software that I know will be refined over successive iterations and that is more likely to be present on the systems used by my subcontractors and the studios I serve.

I’ve worked in settings where the offline approach isn’t right, particularly in sports play-by-play recording sessions. Those sessions unfold in real time, for the most part, with very few stops once the mics and communications are set. In these cases, if I can’t sort out the noise or unwanted audio before or after the session, I use real time processing and it works well. On these gigs, hardware-based noise reduction is ideal.

The Conclusion?

This offline versus real time noise reduction debate has been going on for years, and for the past little while it seems offline processing might narrowly beat out the real time workflow in situations where editorial and mixing can proceed at a decent pace. My personal preference would be to one day have the tools to dramatically control noise and unwanted sounds in real time, because this workflow keeps me inside the mix rather than pulling me out into an editor. I consider myself a mixer more than an editor, though of course I edit and mix the majority of my projects. I just haven’t found a tool that can do everything I want to do in real time, so a hybrid model (real time, then offline) works best for me.

Depending on your needs, your situation and the desired end results, any direction might work. What makes someone a great audio engineer is knowing when to choose to do something, nothing, or a little something to make the content shine and the clients happy. The amount of noise reduction applied to any audio says a lot about the person doing the noise reduction. What does your work say about you?

I’ve witnessed people going too far with noise reduction, stretching back to the very beginning of my career. On more than one occasion, mixers had to reprint sections of films or TV shows with their CAT43 scaled back or removed entirely, as it was perceived to be ‘sucking the life out of the track’. I’ve been guilty of this myself at times too, as have many of my peers. So, whether real time or offline, the potential to inflict damage on your project is there. Perhaps the wisest way to reduce noise is in stages, always keeping the original sounds unprocessed for reference and as a point of return in case damage is caused unintentionally. Sometimes the best approach is to do nothing at all.

What Do You Think?

We’d love to know how you approach noise suppression, reduction, and (gulp) elimination. Please feel free to comment.
