
7 Audio Rendering Tricks You Should Check Out

Audio rendering may seem like a basic DAW function. These days, every DAW features the ability to instantly render software instruments and effects-processed tracks as audio, either through a single-click ‘freeze/flatten’ function or a more immediately permanent ‘render/bounce in place’ menu option. The most obvious use case for this is to allow the rendered plugins to be unloaded, thereby saving on system resources, but there are plenty of other good reasons to capture your synths and effected tracks as discrete audio clips, too…

Close To The Edit

Need to make a quick track edit in preparation for a DJ set or to serve the demands of a client? Don’t overcomplicate the process by doing it in the multitrack DAW project – just render the master bus as a stereo mix and get the job nailed in minutes using any basic audio editor. As long as all you’re doing is cutting and moving sections around, and applying master bus effects (EQ, filter sweeps and time-stretching, for example), you’ll be more than adequately equipped. And even if you end up deciding you actually need to make your edit in the DAW, working rough ideas up with a stereo bounce first could still save you time.

Slice And Dice 1

One of the most creative applications of on-the-spot rendering is the slicing and rearrangement of synth (or other instrument/vocal) sounds that change over time. Take a heavily modulated pad or sequenced bass, lead or chord patch, render it as audio, then either slice it in a sampler using a ‘fixed length’ algorithm (16th-notes, for example), or chop it up on an audio track. With that done, trigger the slices in the sampler via MIDI, or rearrange and copy slices around on the audio track to turn the original synth line into something completely different. The sudden changes in modulation, and interrupted delay and reverb tails between slices will yield sequences and rhythm beds that you simply couldn’t create any other way. You’re likely to get a few pops and clicks at the starts and ends of slices, but those are easily alleviated using short fades.
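
If you fancy prototyping this kind of fixed-length chopping outside your DAW, here’s a minimal Python sketch of the idea. The filenames and the 120 BPM tempo are assumptions for illustration, not anything from a specific project; the short fades at each slice boundary are the same trick described above for taming pops and clicks.

```python
# Fixed-length 16th-note slicing and rearrangement of a rendered part.
# Assumes a rendered WAV and a known project tempo (both hypothetical here).
import numpy as np
import soundfile as sf

audio, sr = sf.read("rendered_pad.wav")          # hypothetical render
bpm = 120                                        # assumed project tempo
slice_len = int(sr * 60 / bpm / 4)               # one 16th-note in samples

# Chop the render into equal 16th-note slices (dropping the ragged tail).
n_slices = len(audio) // slice_len
slices = [audio[i * slice_len:(i + 1) * slice_len] for i in range(n_slices)]

# Rearrange the slices - here just a reproducible shuffle.
rng = np.random.default_rng(seed=42)
rng.shuffle(slices)

# Short fades at each slice boundary to tame pops and clicks.
fade = int(sr * 0.005)                           # 5 ms
env = np.ones(slice_len)
env[:fade] = np.linspace(0, 1, fade)
env[-fade:] = np.linspace(1, 0, fade)
if audio.ndim > 1:                               # stereo: apply per channel
    env = env[:, None]

out = np.concatenate([s * env for s in slices])
sf.write("rearranged_pad.wav", out, sr)
```

Triggering the same slices from a sampler via MIDI gets you there interactively, of course; the offline version just makes the mechanics of the trick concrete.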

Slice And Dice 2

Expanding on that last idea, elevate your sliced-up synth lines to Skrillex levels of mayhem by taking the aforementioned synth pad or sequence and rendering multiple versions of it, making substantial changes to the modulation and effects in the patch prior to each render. Then slice them all up and ‘comp’ them together as described above, drawing on the broader palette of variations to build even more dynamic sequences. If the differences between each render are overly extreme, experiment with extending and crossfading slices to smooth the transitions.
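
As a rough illustration of that comping process, the sketch below alternates 16th-note slices between two hypothetical renders of the same part and applies an equal-power crossfade at each boundary. The filenames, tempo and 20 ms overlap are all assumptions, and it presumes both renders share the same sample rate and channel count.

```python
# 'Comping' slices from two alternate renders of the same part,
# crossfading at each 16th-note boundary.
import numpy as np
import soundfile as sf

a, sr = sf.read("render_take1.wav")            # hypothetical render 1
b, _ = sf.read("render_take2.wav")             # hypothetical render 2 (same sr/channels assumed)
bpm = 120
slice_len = int(sr * 60 / bpm / 4)
xfade = int(sr * 0.02)                         # 20 ms crossfade overlap

n = min(len(a), len(b)) // slice_len
takes = [a, b]

out = takes[0][:slice_len].copy()
for i in range(1, n):
    nxt = takes[i % 2][i * slice_len:(i + 1) * slice_len].copy()
    # Equal-power crossfade: fade the tail of 'out' into the head of 'nxt'.
    t = np.linspace(0, np.pi / 2, xfade)
    fade_out, fade_in = np.cos(t), np.sin(t)
    if a.ndim > 1:                             # stereo: apply per channel
        fade_out, fade_in = fade_out[:, None], fade_in[:, None]
    out[-xfade:] = out[-xfade:] * fade_out + nxt[:xfade] * fade_in
    out = np.concatenate([out, nxt[xfade:]])

sf.write("comped_sequence.wav", out, sr)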

Real-time Rendering

OK, so this one could also be referred to as ‘recording’, but the next time you come up with a groovy synth riff, why not commit to the sound of the patch and pattern right from the off by capturing it as audio rather than MIDI? This can be a great way to keep the creative process moving forward, and if your performance includes on-the-fly synthesis parameter tweaks, there are bound to be a few unusual accidental modulations in there, of the kind that you’d never come up with using an LFO or envelope, ripe for extraction and redeployment in a sampler or on an audio track.

Resampling: Pile It On!

Go beyond basic rendering and enter the realm of resampling (in the contemporary sense of the word) by repeatedly processing and bouncing a synth part (or other recording) to apply a series of consecutive transformations. For example, upward pitchshifting on the first render, followed by analogue-style distortion on the second, then pitchshifting back down, then timestretching, reversal, reverb, re-reversal, timestretching with a different algorithm, bitcrushing, and so on, ad nauseam. With the processes stacking up cumulatively, the order in which they’re applied will play a major role in defining the texture and character of the sound, most notably in the way timestretching and pitchshifting artefacts interact with rendered effects plugins.
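
To make that order-dependence concrete, here’s a hedged sketch of a few stacked passes, with each helper function standing in for a bounce through a plugin. The filename and settings are assumptions, and the interpolation-based pitchshift is deliberately naive, sampler-style repitching.

```python
# Cumulative 'resampling' passes: each render feeds the next.
import numpy as np
import soundfile as sf

def pitch_shift(x, semitones):
    """Repitch by naive resampling (linear interpolation), like a sampler."""
    factor = 2 ** (semitones / 12)
    idx = np.arange(0, len(x) - 1, factor)
    return np.interp(idx, np.arange(len(x)), x)

def distort(x, drive=5.0):
    """Analogue-style soft clipping."""
    return np.tanh(x * drive) / np.tanh(drive)

audio, sr = sf.read("synth_part.wav")          # hypothetical render
if audio.ndim > 1:
    audio = audio.mean(axis=1)                 # mono for simplicity

# The order of passes shapes the result: distorting the pitched-up
# version bakes different artefacts into the downward shift than
# distorting first would.
pass1 = pitch_shift(audio, +7)                 # render 1: pitch up
pass2 = distort(pass1)                         # render 2: distortion
pass3 = pitch_shift(pass2, -7)                 # render 3: pitch back down
pass4 = pass3[::-1]                            # render 4: reversal

sf.write("resampled_chain.wav", pass4, sr)
```

Swap the distortion and pitchshift passes around and you’ll hear the point immediately: the clipping generates different harmonics depending on which register it’s fed.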

Beats: Audio vs MIDI

We’ve yet to hear definitive proof of it ourselves, but quite a few reputable dance music producers insist that audio inevitably delivers tighter timing than MIDI, and consequently wouldn’t ever head into the mix stage without first rendering all their programmed drum tracks in place. It’s a controversial suggestion, but if you want to absolutely ensure that your kicks and snares are hitting exactly where they’re supposed to, there’s no harm in taking the same precautionary course of action.

One For The Archives

Since you’re now bouncing all your drum tracks prior to mixing, you might as well go the whole hog and render everything else in your projects as audio, too. This is actually good practice for a couple of reasons beyond just taking the strain off your CPU. First, converting virtual instrument tracks to audio for mixing kills the temptation to fiddle endlessly with sounds that you should have largely settled on by that point in the production process. And second, rendering every channel dry (with faders at unity) and/or ‘as mixed’ at the very end of a project creates a future-proof archive that you can return to for remixing years later, without worrying about plugin obsolescence or compatibility issues.

We’ll look closer at the art of resampling in a future post, but for now, give us your thoughts on audio rendering in the comments.
