Production Expert


EQ Phase And Spill - Understanding How It Affects Your Mix

In this video Julian Rodgers demonstrates how most EQs alter more than just how loud different frequencies are in a signal: they also affect when different frequencies occur, because of phase shift.

When we mix there’s usually a desire to get greater control. In the pursuit of this control we tend to introduce more tracks, use more mics and apply more processing to those new tracks, all with the intention of eliminating unwanted aspects of our mix. This is a good strategy but it does have its limits, and sometimes we overlook some of the unintended consequences. A good example is our use of EQ.


Tracks Without Bleed

Our ears aren’t very sensitive to phase. If you wire a mono speaker the wrong way round, so that the driver moves in when it should be moving out, you’re unlikely to notice. If, however, you wire one of a pair of speakers out of phase you’re much more likely to notice, as we are sensitive to a difference in phase between two related (or “correlated”) signals. Phase shift in the case of overdubs, where there is no common information between one track and another, usually isn’t an issue.

Tracks With Bleed

This isn’t the case with tracks which do contain information common to each other. A great example is multiple mics on the same source, whether these are mics which “belong” together, like mics on a drum kit or guitar cabinet, or spill between players in the same room, such as vocal bleed onto an acoustic guitar mic. As long as the common audio is “correlated” (we won’t get into exactly what that means here, but as a rule of thumb, if the bleed is from far away it is likely to be uncorrelated), a phase shift created on one route that common audio takes to your ears, but not on the others, can cause an unintended change in the sound.
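A rough way to see this numerically, sketched here in Python with NumPy and SciPy (this example is mine, not from the video): apply a conventional bell-style cut to one of two otherwise identical routes, then compare what magnitude alone would predict for the summed level against what the filter’s phase shift actually produces.

```python
import numpy as np
from scipy import signal

fs = 48_000  # sample rate in Hz

# A conventional (minimum phase) notch cut at 1 kHz applied to one route only.
b, a = signal.iirnotch(1000, Q=2, fs=fs)

# Complex frequency response of the EQ'd route; the other route is untouched (gain 1).
w, h = signal.freqz(b, a, worN=4096, fs=fs)
i = np.argmin(np.abs(w - 800))  # look just off the notch, where phase shift is large

# If the EQ changed only magnitude, the two routes would sum to this level:
predicted = np.abs(h[i]) + 1.0
# What the sum actually does once the EQ's phase shift is included:
actual = np.abs(h[i] + 1.0)
print(f"magnitude-only prediction: {predicted:.2f}, actual summed level: {actual:.2f}")
```

The gap between those two numbers is the unintended change described above, and it grows as the phase shift at that frequency grows.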

Whether or not that matters depends on whether you like the sound. In sound creation, as opposed to reproduction, there aren’t any rules. However, if a change is being created unintentionally, I’d rather know about it.

EQ At The Summing Point

The simplest fix for this is to place your EQ at a point where all the various routes a sound can take to your ears are affected. In the case of a snare drum that might be at the drum submix, so the snare sound captured by the overheads, the hi hat mic and all the other routes that sound can take are treated in the same way and subject to the same phase shift.
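A sketch of why the submix placement works, again in Python with NumPy/SciPy (the signals here are made up for illustration): because filtering is linear, one EQ at the summing point is exactly equivalent to applying the identical EQ to every route, so no route picks up a phase shift the others don’t share.

```python
import numpy as np
from scipy import signal

fs = 48_000
rng = np.random.default_rng(0)

# Two routes carrying common audio: a close mic and a crude stand-in for
# bleed (the same signal, quieter and slightly late).
close_mic = rng.standard_normal(fs)
bleed = 0.4 * np.concatenate([np.zeros(32), close_mic[:-32]])

# A bell-style cut at 1 kHz, as might be used on a snare.
b, a = signal.iirnotch(1000, Q=2, fs=fs)

# EQ once at the submix...
bus_eq = signal.lfilter(b, a, close_mic + bleed)
# ...is identical to EQing every route the same way, so the phase
# relationship between the routes is left untouched.
per_route_eq = signal.lfilter(b, a, close_mic) + signal.lfilter(b, a, bleed)
print("equivalent:", np.allclose(bus_eq, per_route_eq))
```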

When Is This Good And When Isn’t It Good

If the change you are trying to make using an EQ can be applied at the “top” level like this without any negative consequence then its probably a case of “simplest is best”. If however you want to boost 300Hz in the snare but don’t want to boost the same frequencies in the kick, which might sound boxy, then EQ applied to each track is the right way to go. This is after all the point of recording to separate tracks!

EQs with non-linear phase can sound good!

Phase And EQ

We all understand that filters change the level of different frequencies relative to each other. What is less talked about is that different frequencies pass through filters at slightly different speeds. This frequency-dependent delay is known as phase shift. It is like delay, but “delay” as we recognise it from delay plug-ins affects all frequencies equally, whereas phase shift varies with frequency.
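That frequency-dependent delay can be read straight off a filter’s group delay. A minimal sketch in Python with SciPy (the resonant peak filter here is a generic example, not a model of any particular EQ):

```python
import numpy as np
from scipy import signal

fs = 48_000
# A resonant peak filter at 1 kHz (scipy's iirpeak, a narrow band-pass).
b, a = signal.iirpeak(1000, Q=2, fs=fs)

# Group delay: how long (in samples) each frequency takes to get through.
w, gd = signal.group_delay((b, a), w=np.linspace(100, 20_000, 500), fs=fs)

near = gd[np.argmin(np.abs(w - 1000))]    # at the peak frequency
far = gd[np.argmin(np.abs(w - 10000))]    # well away from it
print(f"delay at 1 kHz: {near:.1f} samples, at 10 kHz: {far:.1f} samples")
```

Frequencies near the filter’s centre are delayed noticeably more than frequencies far from it, which is exactly the phase shift described above.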

Conventional filters cause phase shift. Different filters cause different amounts, and there is a type of filter which causes no phase shift at all, though to do so it has to cause a delay. These “linear phase” filters are available in some EQ plug-ins, such as the excellent FabFilter Pro Q 3 used in this demonstration, and they offer a solution to this issue of unintentional phase shift. Just remember that they incur latency, and this latency isn’t because your plug-in or your computer aren’t up to the job; it is a consequence of the linear phase response. Another characteristic of linear phase filters is an effect called “pre-ringing”, a phenomenon in which ripples in the response of the filter cause a backwards “sucking” sound before the sound occurs. This is most noticeable on transients. In a well designed EQ it is a subtle effect, but it is still worth being aware of. Latency isn’t the only downside of linear phase EQ, but it is definitely the one you’ll notice most.
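Both side effects fall out of how a linear phase filter is built. A sketch in Python with NumPy/SciPy, using a generic FIR designed with `firwin2` (an illustrative curve, not FabFilter’s actual implementation): the impulse response is symmetric, which forces a constant delay of half the filter length and puts ringing before the main peak as well as after it.

```python
import numpy as np
from scipy import signal

fs = 48_000
ntaps = 101  # odd length -> constant delay of (ntaps - 1) / 2 = 50 samples

# A linear phase FIR approximating a broad +6 dB bell around 1 kHz.
freqs = [0, 500, 1000, 2000, fs / 2]
gains = [1.0, 1.0, 2.0, 1.0, 1.0]
taps = signal.firwin2(ntaps, freqs, gains, fs=fs)

# Symmetric coefficients are what "linear phase" means in practice.
print("symmetric:", np.allclose(taps, taps[::-1]))

# Hit it with a single impulse: the output peaks 50 samples late (the
# latency), and there is energy *before* that peak -- the pre-ringing.
impulse = np.zeros(256)
impulse[0] = 1.0
out = signal.lfilter(taps, [1.0], impulse)
peak = int(np.argmax(np.abs(out)))
pre_ring = np.abs(out[:peak]).max()
print(f"peak at sample {peak}, pre-ringing level {pre_ring:.4f}")
```

The 50-sample delay here is small; real linear phase EQs use far longer filters to work accurately at low frequencies, which is why their latency is so much more noticeable.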

Does It Matter?

Records have been made perfectly successfully for decades without people worrying about the phase response of their EQ. For most of the history of recorded music there was no point worrying about it, because there was no such thing as a linear phase filter anyway. The important point here is to draw a distinction between music creation and music reproduction.

Creation Vs Reproduction

When music is being created there is no objectively “correct” sound. A guitar amplifier is objectively terrible if you look at its frequency response but if the player likes the sound when they play a guitar through it then it’s a good amplifier. In the same way if an engineer engages an EQ and gets a sound they consider “good” then that’s it. Others may disagree but there is no external standard against which to judge it.

The same can’t be said of reproduction. Once a subjective judgement has been made that a particular recording should sound a specific way, a filter that changes that sound in an unintended way is less acceptable. This is where linear phase filters are most useful and most common. Their use in speaker crossovers is difficult to argue with: the job of a crossover is to split the frequencies between bands, not to impart a sound at all. The use of linear phase filters at the mastering stage of music production is similarly understandable, because the mix engineer has already made the judgement about what the recording should sound like, so tools which can make very specific changes without affecting anything else are exactly what is appropriate.

So does this stuff matter for music production? Usually no, but it is worth being aware of and considering spill and correlation. DAWs have given us forensic control over the timing of audio, and tools like Sound Radix’s Auto Align have become very popular. If people are paying such close attention to how timing affects phase relationships, shouldn’t they also be paying close attention to how EQ affects phase, which after all is still timing?
