5 Pro Audio Industry Predictions For 2019 - Check Out How Accurate We Were

We get a lot of people asking us what we think is going to be happening in the world of recording during any given year. At the beginning of 2019, we laid out our 5 predictions for the pro audio industry for the year. Before we present our predictions for 2020, in this article we take a look at our predictions for 2019 and see how well we did...

1. Artificial Intelligence Will Impact The Studio Even More

In the last few years we have had a taste of how A.I. (Artificial Intelligence), or machine learning, has started to impact what we can do in the pro audio sector as well as in the home. Even though these features and tools from the likes of iZotope and LANDR were already making a difference by helping us to do things that were not possible before, we predicted that in 2019 we would see more and more products and plug-ins harness A.I. to do tasks that are not possible any other way, or tasks that are boring and detract from our creativity.

Apple Get Trademarks Approved For Two AI-Related Products

In July 2019 we reported that Apple had been issued two trademarks that specifically relate to AI. Not surprisingly for Apple, there isn’t a huge amount of information on these two technologies - Create ML and Core ML. We understand that Create ML leverages the machine learning infrastructure built into Apple products like Photos and Siri. This means your image classification and natural language models are smaller and take much less time to train. Once the model performs well enough, it can then be integrated into an app using Core ML.

Alternatively, you can use a wide variety of other machine learning libraries and then use Core ML Tools to convert the model into the Core ML format. Once a model is on a user’s device, you can use Core ML to retrain or fine-tune it on-device, with that user’s data.
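As a rough illustration of that second route, here is a minimal sketch of converting an externally trained model into the Core ML format with Apple's coremltools Python package. The tiny PyTorch model and the file name are placeholders standing in for a real image or language model, not anything Apple has published.

```python
import torch
import coremltools as ct

# Placeholder model standing in for a real image or language model.
model = torch.nn.Sequential(torch.nn.Linear(128, 10))
model.eval()
example_input = torch.rand(1, 128)
traced = torch.jit.trace(model, example_input)

# Convert the traced model into the Core ML format for use with Core ML.
mlmodel = ct.convert(traced,
                     inputs=[ct.TensorType(shape=example_input.shape)],
                     convert_to="mlprogram")
mlmodel.save("Classifier.mlpackage")
```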

iZotope Continues Developing Machine Learning-Based Products

During 2019, iZotope, the king of machine learning in our industry, released three more AI-powered products.

In June 2019, iZotope released Neutron 3 with a new Mix Assistant and Sculptor Module designed to help you improve your mixes.

The headline feature in Neutron 3 came from the continued development by iZotope of their machine learning technology, which is at the heart of the growing number of ‘Assistant’ tools in the iZotope family of products.

iZotope’s new Mix Assistant in Neutron 3 Advanced was the first plug-in to listen to the entire session, with every track in the mix communicating back to the main Neutron 3 ‘mothership’ plug-in, with the aim of creating a balanced starting point for an initial mix built around a focus chosen by the mixer - all designed to save time and energy for creative mix decisions. The other headline addition in Neutron 3 was the Sculptor module, which again uses machine learning, this time for spectral shaping.

iZotope told us “With machine learning, we’re helping everyone get to a great starting point for their mix, so they can stay focused on their creative input. I’m personally very excited to see where this takes music making.”

In October 2019, iZotope released Ozone 9 with new AI-based features, designed to bring balance to music with never-before-seen processing for the low end, new real-time instrument separation, and lightning-fast workflows powered by machine learning.

Ozone 9’s improved Master Assistant uses iZotope’s powerful machine learning to create a custom preset in seconds. Master Assistant now intelligently sets loudness to meet CD or streaming targets, enabling users to prepare a solid master for distribution in seconds.
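We don't know the detail of iZotope's loudness logic, but as a minimal sketch of what 'meeting a streaming target' involves, here is how a programme could be measured and gained to -14 LUFS using the open-source pyloudnorm library. The file names are hypothetical, and a real mastering chain would also control true peaks before delivery.

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("premaster.wav")      # hypothetical input file
meter = pyln.Meter(rate)                   # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)

# Gain the whole programme to a common streaming target of -14 LUFS.
normalized = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("master.wav", normalized, rate)
```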

In November 2019 iZotope released Dialogue Match, an AAX AudioSuite plug-in that automatically learns and matches the sonic character of dialogue recordings. This is the first product to combine iZotope and Exponential Audio technology, bringing together iZotope’s machine learning with the reverb technology developed by Michael Carnes.

Dialogue Match has been designed so that users can analyze the audio, extract a sonic profile and then apply the profile to any other dialogue track for fast and easy environmental consistency in scene recordings, enabling you to complete the tedious process of matching production dialogue to ADR in seconds, rather than hours.

The EQ Module uses the EQ matching technology from iZotope's Ozone 9 to quickly learn and match the tonal and spectral characteristics of dialogue.

The Reverb Module uses brand-new reverb matching technology, powered by iZotope machine learning, to capture spatial reflections from one recording and accurately apply them to another using Exponential Audio's clean, realistic reverb engine.

The Ambience Module harnesses the Ambience Match technology from iZotope RX. It analyzes the spectral noise profile of a recording, then identifies and re-creates room tone, speeding up the matching of dialogue from location recordings acquired with boom and lavalier mics to studio-acquired ADR tracks.
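The EQ matching idea at the heart of the first of these modules can be sketched in a few lines: estimate the long-term spectrum of each recording and apply the ratio as a correction curve. This is not iZotope's implementation, just the basic technique, and the file names are placeholders.

```python
import numpy as np
import soundfile as sf
from scipy.signal import stft, istft

ref, sr = sf.read("production_dialogue.wav")   # hypothetical mono files
tgt, _ = sf.read("adr_take.wav")

def avg_spectrum(x, sr):
    _, _, Z = stft(x, fs=sr, nperseg=2048)
    return np.mean(np.abs(Z), axis=1)          # long-term magnitude per bin

# Correction curve: pull the ADR's tonal balance toward the production track.
curve = avg_spectrum(ref, sr) / (avg_spectrum(tgt, sr) + 1e-9)
curve = np.clip(curve, 0.1, 10.0)              # limit extreme boosts and cuts

_, _, Z = stft(tgt, fs=sr, nperseg=2048)
_, matched = istft(Z * curve[:, None], fs=sr, nperseg=2048)
sf.write("adr_matched.wav", matched, sr)
```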

AI For ADR Or Should That Be Deep Fake?

During 2019 we learnt about two AI-powered solutions that could be a great help to ADR workflows, noting that both could also be used in Deep-Fake scenarios.

The first came across our desk in August 2019, presenting a technology that would help with ADR by manipulating the video and moving the lip movements to match the audio, rather than the more conventional technique of modifying the audio to match the pictures. This technology could also be used for foreign language overdubs, again by moving the lips of the original character to match the new language.

The second technology is a development of machine learning-based voices, with the headline claim that it can ‘learn’ someone’s voice from just a 5-second clip. They explain that this is possible because they have trained a neural network - what we often call artificial intelligence or machine learning - on hours and hours of a wide variety of speakers, so that it understands how humans speak. It can then take a 5-second clip from an individual it has not heard before, clone the voice, and have it say things that were not in the clip.
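The first half of that pipeline - boiling a short clip down to a fixed-length voice 'fingerprint' - can be tried with the open-source Resemblyzer library. This is a sketch, not the vendor's system, the file name is hypothetical, and the synthesizer that would consume the embedding is out of scope here.

```python
from resemblyzer import VoiceEncoder, preprocess_wav

wav = preprocess_wav("five_second_clip.wav")  # hypothetical file name
encoder = VoiceEncoder()                      # pretrained speaker encoder
embedding = encoder.embed_utterance(wav)      # fixed-length voice "fingerprint"

# A multi-speaker TTS model conditioned on this vector could then
# synthesise new speech in the cloned voice.
print(embedding.shape)                        # (256,)
```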

Both of these technologies come with huge deep-fake warning signs, with ramifications for politicians, court cases, and in fact anyone for whom ‘my word is my bond’. Would it be possible to embed some kind of watermark in the synthesised audio that identifies it as AI-created? Or does this excite you with the possible solutions it offers in content production?

Machine Learning And Object-Based Audio

Machine learning has also had an impact on object-based audio, through the work of the team at Salsa Sound. In March 2019 we wrote about founders Ben Shirley and Rob Oldfield, who have developed a set of tools for automatic mixing which are both channel and object-based. They have focused on live sport, where their machine learning engine automatically creates a mix of the on-pitch sounds without any additional equipment, services or human input – freeing the sound supervisors up to create better mixes. Rob Oldfield explains…

“Our solutions not only create a mix for a channel-based world, but also allow for the individual objects to be broadcast separately with accompanying metadata from our optimised triangulation procedure which places all of the sounds in 3D space – even in a high noise environment – which helps facilitate immersive and interactive applications.”

What they have been able to do with machine learning is two-fold. First, they can identify where the ball is on the pitch and automate the mixing of all the field mics. Secondly, the machine learning technology has been taught not only to identify the ball but to judge how hard it is being kicked, and to perform automated ball-kick foley on the fly - at last giving us the impact that we have been struggling to achieve.
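To make the first half concrete, here is a minimal sketch of the idea - not Salsa Sound's algorithm - in which an estimated ball position is used to weight the field mics by proximity. The pitch dimensions and positions are invented.

```python
import numpy as np

mic_positions = np.array([[0, 0], [105, 0], [0, 68], [105, 68]])  # metres
ball_position = np.array([30.0, 40.0])   # e.g. from the ML ball tracker

distances = np.linalg.norm(mic_positions - ball_position, axis=1)
weights = 1.0 / np.maximum(distances, 1.0)   # nearer mics contribute more
weights /= weights.sum()                     # keep the overall mix level constant

def mix_field_mics(mic_signals):
    """mic_signals: array of shape (num_mics, num_samples)."""
    return weights @ mic_signals
```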


2. A New Mac Pro

We predicted that 2019 would be the year of the long-awaited new ‘modular design’ Mac Pro. At that time we knew that Apple was focused on a Mac Pro with a modular and upgradeable design. In the press release announcing the iMac Pro they said…

“In addition to the new iMac Pro, Apple is working on a completely redesigned, next-generation Mac Pro architected for pro customers who need the highest performance, high-throughput system in a modular, upgradeable design”

As far as modular was concerned, Mike stated that “we can safely say that it will NOT have PCI-e slots”. What we knew for sure was what Apple's Senior Vice President of Worldwide Marketing Phil Schiller said…

"We have a team working hard on it right now. We want to architect it so that we can keep it fresh with regular improvements, and we're committed to making it our highest-end, high-throughput desktop system, designed for our demanding pro customers."

They went on to say that the new Mac Pro would be designed to handle VR and high-end cinema production. As to form factor, some speculated that the Mac Pro 7,1 would be different to the rest of the Mac range, with slots for things like new graphics cards and slots for upgradeable internal storage, and that the Mac Pro 2019 would be heavily engineered and priced accordingly. Otherwise, they speculated, why even bother making it?

As to when it would be announced, we looked at when the trash can and the old-style cheese-grater were announced: both debuted at the Apple Worldwide Developers Conference (WWDC), so we predicted that Apple would do the same with the new 7,1 Mac Pro.

When it came to price we had even less to go on at the beginning of 2019. We considered the pricing of the 2018 MacBook Pro and iMac Pro, as well as the price of the trash can Mac Pro, and concluded that it wouldn’t be less than $4K - and that if it was going to be as different as some people suggested, we might be asked to pay a premium for the modular, upgradeable design.

Well, we were correct with the announcement: Apple announced the new Mac Pro 7,1 at their June 2019 WWDC. However, what we weren’t expecting were PCI-e slots that would support up to six Pro Tools HDX cards, or a working system on demo at WWDC - Avid had clearly been working very closely with Apple on the development of this new modular 2019 Mac Pro, which turned out to be a tower, and a cheese-grater at that.

You can read the feature summary from the announcement at June’s WWDC in our article Apple Announce New Mac Pro With 8 Internal PCI Card Slots That Will Support 6 Pro Tools HDX Cards And Matching 6K Monitor Screen. Avid’s director of Product Management Francois Quereuil told us…

“Avid’s Pro Tools team is blown away by the unprecedented processing power of the new Mac Pro, and thanks to its internal expansion capabilities, up to six Pro Tools HDX cards can be installed within the system – a first for Avid’s flagship audio workstation. We’re now able to deliver never-before-seen performance and capabilities for audio production in a single system and deliver a platform that professional users in music and post have been eagerly awaiting.”

A couple of weeks later we published an article, Buying The Apple Mac Pro 2019 For Pro Tools - Read This Article First And See How Much It Will Cost You To Build A Powerful Mac Pro, in which we predicted the costs of the different models, because all Apple had announced was the price of the base model at $5,999 - a little higher than the $4K we had given as our view of the lowest price before we knew about the design and form factor. So how close were we? In fact, our predicted prices were not far off when Apple released the 2019 Mac Pro on December 10th. Check out our article Apple Release 2019 Mac Pro - We Now Know The Pricing Options - Check Out What A DAW Computer Will Cost to see the detail and the pricing - we got it pretty close!


3. More Smart Speakers Using DSP

In 2019 we predicted that we would see more ‘smart’ monitor speaker systems using inbuilt DSP to improve the sound.

Kii Three

We had already seen the release of a number of systems, such as the Kii Three monitors. Check out our article in which we asked Do They Really Live Up To The Hype? Julian said…

“They sound jaw-droppingly amazing, the full range directional behaviour which minimises the effect of the room significantly and the optional monitor controller is a very clever device.”

But what about developments in 2019? In July we asked Do You Need DSP To Get A Great Sound? In this article, we explained that designing good analogue crossovers isn’t easy. Components affect each other in complex ways, the tolerances of analogue components vary, components with tight tolerances are more expensive, and there is a practical limit to the steepness of the filters which can be achieved using analogue technology. DSP offers several tangible benefits straight away, starting with repeatable performance. As a digital process, the issues around component tolerances don’t apply, and the cost burden of expensive components with tight tolerances disappears.

However, the advantages of DSP go further than just doing what analogue filters can do, more accurately and for less money. Using DSP it is possible to construct linear phase filters, something which isn’t possible in the analogue domain.
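As a minimal sketch of the point, assuming SciPy and some made-up parameters: a symmetric FIR crossover is inherently linear phase, because every frequency is delayed by the same number of samples.

```python
import numpy as np
from scipy import signal

fs = 48000
crossover_hz = 2200
numtaps = 1023   # odd tap count so the highpass is a valid symmetric FIR

# Symmetric FIR filters delay all frequencies by (numtaps - 1) / 2 samples,
# so the phase response is linear and there is no phase distortion.
lowpass = signal.firwin(numtaps, crossover_hz, fs=fs)
highpass = signal.firwin(numtaps, crossover_hz, fs=fs, pass_zero=False)

def split_bands(x):
    low = signal.lfilter(lowpass, 1.0, x)    # woofer feed
    high = signal.lfilter(highpass, 1.0, x)  # tweeter feed
    return low, high
```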

HEDD Lineariser Plug-in

An example of this is the HEDD Lineariser plug-in, which performs exactly the kind of phase correction a linear phase filter would achieve, but performs it natively in the host computer, using a general-purpose microprocessor - so not using a DSP chip and not performing the crossover filtering. Why do this? It’s an original take on an old problem. One benefit is that it saves two conversion processes: the A/D as the signal is digitised before entering the DSP, and the D/A as it is converted back to analogue before being amplified.

Eve Audio SC207

Another example of using DSP in speakers is the Eve Audio SC207. EVE’s intention with their DSP is to present high-quality processing which can tailor the sound of their monitors to a room in a familiar, analogue presentation without the users needing knowledge of acoustics, just a pair of ears.

The DSP follows a 192kHz/24-bit A/D converter and presents the user with control over a set of filters which allow them to compensate for the influence of speaker placement in the room and listening distance.

Neumann KH80

The Neumann KH80s are small in size but their use of DSP is interesting as it illustrates both the benefits and the potential pitfalls of DSP. We mentioned at the beginning of this piece that DSP offers greater flexibility in filter design than can be achieved in the analogue domain. Infinite Impulse Response filters share the same characteristics as analogue filters but can be much steeper. The KH80 DSP uses 48dB/Oct filters in the crossover with a linear phase response, something which isn’t possible without the use of DSP.
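To give a feel for how steep 48dB/Oct is, here is a sketch of an 8th-order Butterworth filter in SciPy (eight poles at 6dB/Oct each). The 2kHz corner frequency is arbitrary, and note that this standard IIR version is minimum phase rather than the linear phase response Neumann describe.

```python
import numpy as np
from scipy import signal

fs = 48000
sos = signal.butter(8, 2000, btype="lowpass", fs=fs, output="sos")

# One octave above the 2 kHz corner the response should be roughly 48 dB down.
w, h = signal.sosfreqz(sos, worN=8192, fs=fs)
idx = np.argmin(np.abs(w - 4000))
print(f"Gain one octave above cutoff: {20 * np.log10(np.abs(h[idx])):.1f} dB")
```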

In the same way as the EVE monitors, they offer easy-to-operate filters for basic correction to compensate for placement, boundary effects and desk reflections, but they go further by offering more sophisticated “guesstimate” filtering based on the results of guided questions, via an app, about the room and the speakers’ placement in it. This allows the monitors to be set up specifically for the room they are in without any measurements being involved.

As we’re starting to see, DSP has the answer; the difficult bit is asking the question accurately enough. If you give it enough information about your environment, and give it enough processing power, there is an awful lot DSP can do to help. However, acoustic measurement is complicated, and the quality of the results is dictated by the quality of the measurements. There are a few very comprehensive solutions available which can be used to make acoustic measurements.

Genelec W371A Adaptive Woofer System With 8341, 8351 And 8361 Monitors

The Genelec GLM system offers an extremely powerful set of proprietary tools for setting up Genelec DSP monitors. The W371A Adaptive Woofer System is a unique concept, designed specifically to seamlessly complement the 8341, 8351 and 8361 monitors, and in conjunction with these models it creates a series of full-range monitoring solutions with unrivalled neutrality and supreme levels of control over directivity and the effects of room acoustics. Its unique dual woofer system and DSP make it the perfect complement to ‘The Ones’ range.

When used with Genelec’s GLM software and a “The Ones” series monitor, the W371A becomes a clever room detection system in itself. This is facilitated by the revolutionary dual woofer design - one front-firing sealed unit and one rear-firing ported unit, each with individual DSP processing.

GLM uses the results from both drivers independently to work out the actual location of the W371 within the room, then tailors the response of both units to time-align and create a virtually flat bass response, with excellent directivity and localisation, linking into the DSP within the “The Ones" monitor above it, to create incredible imaging and frequency response, and to control bass nodes within the listening area. With this system, you can experience unparalleled localisation of bass signals.

Moving Speaker Calibration From Computer To Monitor DSP

In January IK Multimedia announced their new iLoud MTM compact studio monitors that IK say “were designed to deliver pristine sound and an unprecedented level of accuracy that redefines nearfield monitoring for the modern, computer-based, professional and home studio.”

What’s best about these monitors is that IK have moved their ARC speaker calibration software out of the computer and onto a DSP chip inside the monitor speaker.

iLoud MTM uses audiophile-grade, high-resolution algorithms and top-of-the-line A/D converters to carefully manage cross-overs, filtering, time alignment, equalization, dynamics control and auto-calibration. Totally DSP-controlled and precision time-aligned, iLoud MTM also manages the off-axis response to minimize the effects of room reverberation. The advanced onboard processing is the product of IK's over twenty years of experience in digital signal processing, helping iLoud MTM deliver sound far beyond anything in its size or class.

DAD SPQ Card

But the DSP doesn’t have to be in the speaker. With the DAD SPQ card for the AX32 and Avid MTRX, it can be in your DAW’s interface. The card provides up to 16 IIR filters per channel across 128 channels at 48kHz, and can work at sample rates up to 384kHz. It provides a pool of 1024 filters in total, to be allocated across the 128 channels as appropriate. This is a very serious DSP solution, but that is to be expected, as it is likely to be used in a surround monitoring environment and quite possibly a Dolby Atmos system. To calibrate a system with this many channels requires much more DSP, and far more individual filters, than would be necessary for a stereo monitoring system.
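As a rough sketch of what 'IIR filters allocated per channel' means in practice - not DAD's implementation - here one channel draws a handful of parametric EQ biquads from the shared budget. The filter settings are invented.

```python
import numpy as np
from scipy import signal

FS = 48000
TOTAL_SECTIONS, MAX_PER_CHANNEL = 1024, 16   # shared pool and per-channel cap

def peaking_eq(f0, gain_db, q, fs=FS):
    """One parametric EQ biquad (RBJ cookbook) as a second-order section."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.concatenate([b, a]) / a[0]

# Four room-correction filters on one channel, using 4 of its 16 slots.
channel_filters = [peaking_eq(63, -4.0, 2.0), peaking_eq(125, 3.0, 1.5),
                   peaking_eq(500, -2.0, 4.0), peaking_eq(8000, 1.5, 0.7)]
assert len(channel_filters) <= MAX_PER_CHANNEL
sos = np.vstack(channel_filters)

def process_channel(x):
    return signal.sosfilt(sos, x)
```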


4. More Smart Virtual Instruments

We predicted that 2019 would see a rise in ‘smart instruments’ - tools that would go beyond samplers, using new technologies like machine learning or high-speed processing to help humanise virtual instruments.

Most of us are used to products like EZdrummer and EZkeys, but brands like UJAM are now offering this in the form of Virtual Guitarist and Virtual Bassist. But what would we see in 2019?

In January 2019, Toontrack announced EZbass at the Winter NAMM show as part of their 20th-anniversary celebrations.

Toontrack presented EZbass as a groundbreaking new instrument that goes above and beyond a traditional “bass sample library.”

“In 2019, Toontrack is proud to introduce the market’s first bass software of its kind – one that focuses not only on pristine sound but also on fundamental features for effortlessly letting you add bass to your songs. For those familiar with Toontrack’s products, EZbass is probably best described as the bass equivalent of EZdrummer 2.”

At NAMM they suggested that EZbass might be released in the last quarter of 2019, but in November they announced it would be released in 2020. Erik Phersson, Toontrack’s Head of Software Development, told us…

“In 2020, Toontrack finally adds the much-anticipated EZbass. This is shaping up to be a product that really adapts the same ‘EZ’ philosophy as the other products in the line. It’s the perfect complement to EZdrummer 2, EZkeys and EZmix 2.”

Moving on to April, we saw a very clever device, still at the prototype stage, at the AES Convention in Dublin. Second Sound was in the Product Showcase section demonstrating a prototype board with a chip on it that converts analogue audio into pitch and envelope CV signals to control analogue synths, or converts the pitch information, via a digital bitstream, into MIDI note on/off and pitch bend commands.

They have been able to develop processing that does all of this in real time and, with the addition of a simple microcontroller, produces an audio-to-MIDI solution with very high accuracy and negligible latency. Mike was blown away by the demos of a bass guitarist playing a range of devices, with perhaps the most impressive being the bassist ‘playing’ a strings synth - bending the notes and playing a very convincing strings line using a bass guitar.
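Second Sound do this in dedicated hardware; as a point of comparison, here is a minimal offline sketch of the same job in Python, tracking the fundamental of a monophonic recording with librosa and mapping it to MIDI note numbers plus a pitch-bend residual. The file name is hypothetical.

```python
import numpy as np
import librosa

y, sr = librosa.load("bass_di.wav", sr=None, mono=True)
f0 = librosa.yin(y, fmin=30, fmax=500, sr=sr)   # fundamental in Hz, per frame

midi = 69 + 12 * np.log2(f0 / 440.0)   # Hz -> fractional MIDI note number
notes = np.round(midi).astype(int)     # nearest note for note on/off events
bend = midi - notes                    # residual, in semitones, for pitch bend
```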

Second Sound was also at AES in Dublin to present a paper on the fundamental frequency tracking circuit used in the chip as part of the Music Information Retrieval session as well as looking for leads to find manufacturing partners who would like to design products using their new technology.

Then in October, Accusonus released Rhythmiq, a new plug-in instrument they claim has the power to easily adapt drum beats and loops into new creative performances. Many artists and songwriters use drum loops and beats as the foundations from which to build their new productions, as samples are very easy to work with. The problem is that loops are static, which can limit creative choices. Enter Rhythmiq, designed so you can quickly separate the main kick, snare and hi-hat/ride elements from drum loops. You then have control over each main component in a beat, enabling you to re-groove either the whole performance or each individual element, and rebalance it all on the fly.

Rhythmiq isn’t so much a processing plug-in, it’s very much a performance instrument. Every control can easily be assigned via MIDI learn. If you own a MIDI controller, we suggest you spend a little time mapping the controls to taste. This will help you to immerse yourself, creatively speaking, in the world of re-performing drum loops in Rhythmiq.

At the time we looked at it, Rhythmiq was only AU and VST compatible, but we understood AAX support to be coming in the future. Pro Tools users can always use plug-ins such as Blue Cat Audio’s PatchWork, which enables users to load AU and VST plug-ins in an AAX environment.

So although there were some announcements for ‘smart instruments’ during 2019, two out of the three are not yet products we can buy, so perhaps, with hindsight, we were a little ahead of the curve on this prediction.


5. Object-Based Audio Delivery Format Uptake Will Increase Significantly

We predicted that 2019 would see a major uptake in consumers using spatial audio formats based on object audio like Dolby Atmos and DTS:X.

Other surround formats like 5.1 didn’t take off as consumer formats, so you might have wondered why we predicted the take-off of formats like Dolby Atmos in the consumer audio sector.

3D Soundbars

Following Fraunhofer's successful demonstration of a "3D Soundbar" prototype, in 2019 we saw an explosion of Dolby Atmos soundbars, as well as the ability to configure smart speakers into an immersive home-based experience.

In January we reported on the first TV from Panasonic with built-in upward-firing speakers for Dolby Atmos. Hidden from sight when you are in front of the screen, two speakers are built into the top of a column that runs up the TV’s rear, firing the height channels upwards so that the sound bounces off the ceiling and gives the consumer height-based immersive sound.

The concept took the idea of Dolby Atmos soundbars and moved it on by integrating Dolby Atmos into the TV itself - and the sound features don’t stop there. The upward-firing speakers are joined by a woofer at the bottom of the back panel and a large front-facing soundbar built into the screen’s bottom edge, covering the left, centre and right channels.

Another announcement at CES 2019 saw Samsung herald a partnership with Apple to put an Apple TV app in their smart TVs, meaning users won’t need to buy an Apple TV hardware unit. A dedicated iTunes app gives users access to the movies and TV shows they have purchased through Apple's service, and it appears in a carousel alongside other services including Netflix, YouTube, and Amazon Prime Video.

Content Is Key And Not Just Films

This continued to be a key area in 2019, with a number of record companies, like Warner Bros and Universal Music, working in conjunction with Dolby to release significant amounts of music in Dolby Atmos, with the aim of encouraging the take-up of Dolby Atmos at home. At this year’s AES show in New York, we were privileged to enjoy a private demo at Dolby’s Soho facility, at which Mike was able to experience examples of tracks from major artists that have been both remixed and developed from scratch in Dolby Atmos. They also had a number of demo rooms displaying examples of home-based Atmos-compatible systems, including systems built from smart speakers.

Add this to moves by Netflix and Amazon to increase the amount of content being created in Dolby Atmos and you start to see a momentum growing. Our own Alan Sallabank says that “Dolby Atmos has become the default delivery format now”. For example, he is about to work on a project intended primarily to be viewed on mobile devices, but which is being mixed in HE Atmos. This demonstrates a strong plus for Dolby Atmos: the local hardware will render out a version of the original mix suitable for the playback system. So we can now use the same base content, and the consumer’s equipment will output a version suitable for that setup - in this case, producing a binaural mix from the Dolby Atmos mix for use with earbuds on smartphones and tablets.

TIDAL Announce Dolby Atmos Support On Their Platform

In December, streaming platform TIDAL announced support for Dolby Atmos on their TIDAL HiFi platform. This means you can now listen to music in Dolby Atmos on your Dolby Atmos-compatible Android smartphone or tablet. Dolby says…

“Dolby Atmos Music allows you to feel more clarity, detail, and depth in every song. Go beyond just hearing music – with Dolby, you’re put inside the song in a whole new way and can experience the artist’s vision without compromise.”

Pro Tools 2019.10 And The New 130-Channel Dolby Audio Bridge

Of course, AES was also when Avid announced Pro Tools 2019.10 with the Dolby Audio Bridge. Pro Tools 2019.10 dramatically improved Dolby Atmos ‘in the box’ mixing workflows and the delivery of multiple mixes in a single file. With full Core Audio support of the Dolby Audio Bridge, users can now send 130 channels from Pro Tools (up from 32) to the Dolby Atmos Renderer, simplifying Dolby Atmos ‘in the box’ mixing and playback workflows with Pro Tools | HDX and other Core Audio devices. The Dolby Audio Bridge is a major improvement over the 32 channels that were all that was possible before, and it also replaces the previous send-and-return plug-in workflow.

Pro Tools HDX users can now send to both the HDX busses and outputs while simultaneously sending the Core Audio paths into either the Dolby Atmos Production Suite or Mastering Suite, because the Dolby Audio Bridge can be selected and used as a playback device and then routed on to the HDX paths in the Dolby Atmos software. This has been designed to drastically reduce session routing complexity, and it also helps to eliminate the tedious management of delay compensation. It also means that users can set up a session created with the Dolby Atmos Production Suite in Pro Tools Ultimate in exactly the same way as their session would be set up using the Cinema Renderer.
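Because the Bridge appears to macOS as an ordinary Core Audio device, any Core Audio client can find and target it. As a minimal sketch - assuming the python-sounddevice library, and assuming the device advertises itself under this name - you could locate it like this:

```python
import sounddevice as sd

# Look for the Dolby Audio Bridge among the system's Core Audio devices.
for idx, dev in enumerate(sd.query_devices()):
    if "Dolby Audio Bridge" in dev["name"]:
        print(idx, dev["name"], dev["max_output_channels"])
        sd.default.device = idx   # route this client's output into the Bridge
```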

New Dolby Atmos Netflix Guidance

Following the release of Pro Tools 2019.10 at the end of October, Netflix very quickly announced an update to their guidance on Dolby Atmos workflows including publishing their own step-by-step guide on setting up and using the new Dolby Audio Bridge feature added by Avid in Pro Tools 2019.10.

In addition, Netflix has stated that it is not necessary to have Dolby Atmos certification to work on Netflix titles. Scott Kramer (Manager, Sound Technology, Creative Technologies & Infrastructure at Netflix) also let us know that they have published an article with step-by-step instructions on using the new Dolby Audio Bridge together with the Dolby Atmos Production Suite to create Dolby Atmos content.

Not Just About Dolby Atmos

At the AES Convention in Dublin in April, Mike learnt about three case studies of object-based audio at recent live events, presented by engineers from the EBU, Fraunhofer and Dolby who had experimented with delivering object-based audio to the end user.

  1. The Roland Garros French Open tennis tournament used NGA object-based audio to feed 7 versions simultaneously.

  2. Eurovision Song Contest, where NGA was tested to offer multiple languages and musical mix versions to the consumer.

  3. European Athletics Championship, where a single NGA production mixed channel-based, scene-based and object-based sources to feed 3 different codec technologies.

It was very interesting to see and hear the demonstrations and the experiences gained from these 3 experiments, which all showed how MPEG-H and AC-4 can deliver both immersive sound and separate audio objects to provide commentary and audio description in multiple languages.

In the article Object Based Audio Can Do So Much More Than Just Dolby Atmos? We Explore, from March 2019, we looked at the work of Lauren Ward, a postgraduate audio engineering researcher at Salford University with a passion for broadcast accessibility. Lauren’s research has been looking at a methodology whereby the different audio objects in a piece of content are scored by how important each object is to the narrative. If an object is essential to the story, like the dialogue or a door opening, it is scored as essential. Other sounds, like ambiences and music, that add to the narrative but could be removed without you losing the story, are scored progressively lower.

Then there is a single control that you can adjust from a full, normal mix through to an essentials-only mix for the very hard of hearing. I have had a chance to try this out on a visit to Salford University and found it very simple and intuitive, and the scoring of the objects would be very easy to do during the production process.
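A minimal sketch of that single-control idea - with invented importance scores and an invented fade law, purely to illustrate the mechanism - might make each object's gain a function of its narrative-importance score and the one slider:

```python
import numpy as np

objects = {                 # importance: 1.0 = essential, lower = atmosphere
    "dialogue": 1.0,
    "door_open": 0.9,
    "music": 0.5,
    "crowd_ambience": 0.3,
}

def object_gain(importance, slider):
    """slider = 1.0 gives the full mix; slider = 0.0 keeps only essentials."""
    threshold = 1.0 - slider    # raising the threshold fades out more objects
    knee = 0.2                  # soft knee so objects fade rather than snap out
    return float(np.clip((importance - threshold) / knee + 1.0, 0.0, 1.0))

# Partway along the control, the less important objects are already ducked.
print({name: round(object_gain(imp, 0.4), 2) for name, imp in objects.items()})
```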

In August 2019 we posted an update on Lauren’s work as part of our article Have The Loudness Normalisation Specs Trashed Dialog Intelligibility? - We Investigate. Lauren’s research had moved on to a public-beta experiment here in the UK. This experiment took a recent episode of the BBC One TV medical drama ‘Casualty’ and presented a version of it on the BBC website that included a slider in addition to the volume control. Keeping this additional slider on the right-hand side retained the standard audio mix; moving the slider to the left progressively reduced background noise, including music, making the dialogue crisper. There is a short clip in our article which demonstrates how effective this is - and it all uses object-based audio.

Another feature introduced in Pro Tools 2019.10 was multi-stem bounce to a single file, which lets you export multiple stems into a single WAV file. Select BWF (.WAV) as the file type and Interleaved as the file format, and when using the Multiple Bounce Source stems option a new Delivery Format selector will pop up. The resulting file will have each stem in WavEXT channel order, with an additional iXML stem/channel definition embedded in the file. When Pro Tools imports one of these files it will stay as a single interleaved file as long as the channel count is below 32; otherwise it will be split into mono files. In all cases, the stems will be represented in the clip list as their own autonomous multi-channel clips, just as if they were each their own file.
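The interleaving itself is simple to picture. Here is a minimal sketch - not Avid's implementation, and without the iXML chunk that labels each channel - of stacking stems into one interleaved multichannel WAV using the soundfile library; the stem files are hypothetical and assumed to be mono.

```python
import numpy as np
import soundfile as sf

dialog, sr = sf.read("dialog_stem.wav")
music, _ = sf.read("music_stem.wav")
effects, _ = sf.read("effects_stem.wav")

# One frame per row, one stem per column -> a single interleaved file.
n = min(len(dialog), len(music), len(effects))
interleaved = np.column_stack([dialog[:n], music[:n], effects[:n]])
sf.write("all_stems.wav", interleaved, sr, subtype="PCM_24")
# Pro Tools additionally embeds an iXML stem/channel definition,
# which this sketch does not write.
```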


4.5 Out Of 5 Is Not Bad

We absolutely nailed four of the predictions, with the only one we didn’t completely get right being smart instruments - but it looks as if we were just too early with that one, and 2020 will see the growth of smart instruments.

Watch out for our 5 predictions for 2020 very early in the new year.
