
Are Apple SoCs Suitable For Audio Production?

We were alerted to a recent claim made in several social media groups that the new Apple Silicon SoCs (Systems on a Chip) are unsuitable for use in audio production. Some of our community were concerned by this claim, so we asked an industry expert to tell us whether it is true or false.

The Claim

The claim made is:

“SoC (System on Chip) setups are unsuitable for Pro Audio applications.

All DAWs need to absolutely know what resources they have on tap. They need to have consistent and reliable access to -

- PCIe bus

- CPU resources

- System RAM

- GPU RAM + processing

- Disk access bandwidth

Constrict any of these or make them dynamic, and there's a whole world of hurt.

With a SoC, all these resources are dynamic. Which causes huge issues. GPU RAM is shared and CPU processing is dynamic across "fast" and "high efficiency" cores. It keeps heat generation down. If you made any of the "fast" cores "exclusive" to give a DAW reliable access to them, the SoC would quickly overheat, resulting in the "throttling" that a lot of M1 and laptop users experience.

It's why Pro Tools has the option to switch off Intel Turbo Boost. It's also why on my Windows system, my overclocking is fixed, not dynamic.

The M1 is much better suited to NLE's, as believe it or not, there's actually far less real time processing going on in an NLE than a DAW.”

We asked Mark Wherry, Director of Music Technology at Remote Control Productions, to respond to the claim. Mark is one of the most respected technology experts in the audio industry and has been a contributing writer on music technology for Sound On Sound for over 20 years. He designed and built the bespoke PC-based sampler system used by Remote Control Productions, the home of Hans Zimmer and many other top composers. Over to Mark:

The Facts

“There are a few problems with this perspective because two points are essentially raised and then confusingly conflated. One could write a book in order to fully explain the scope of the assertions here, so I’ll try my best to highlight the most important concepts.

To address the first point regarding Systems on a Chip: it is simply not true that SoCs are, by definition and design, required to use what are referred to here as “dynamic” resources. A System on a Chip is exactly that: the integration of several system components (such as the CPU, memory and I/O controllers, and so on) in a single chip. Whether those components scale their usage based on other factors isn’t a prerequisite; rather, it depends upon the application.

For example, take the first SoC to use an ARM core: the ARM250 from 1992, created for Acorn’s desktop computers. It consisted of four components: the CPU and three controllers for memory, I/O, and video. The CPU ran at a fixed clock rate of 12MHz and, although this was 30 years ago, the important point is that it was an SoC that didn’t adhere to the criteria mentioned in the original comment.

Combining cores that cater for different types of workload (such as performance and efficiency) wasn’t really a consideration until ARM formalised an approach with the introduction of the company’s big.LITTLE technology. This allowed clusters of performance and efficiency cores to be structured in different hierarchies, and has since evolved into what ARM refer to as DynamIQ. In terms of allocation, Apple’s ARM license allows its engineers to implement their own performance controller for assigning workloads to different cores, so that scheduling is tailored to the specific needs of macOS (or iOS, iPadOS, and so on).

When an audio application needs to work out how best to utilise a system’s performance, the performance and efficiency cores can be identified separately and included in the thread pool for real-time audio processing – or not. Depending on how an audio engine’s scheduler is written, using efficiency cores for audio processing doesn’t necessarily reduce the capabilities of the overall system. Alternatively, an application can simply allow macOS to use the performance cores for audio processing, whilst leaving the efficiency cores for other tasks such as managing the user interface.
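To make the core-identification idea concrete, here is a minimal sketch (not taken from any particular DAW) of how a macOS application might count the performance and efficiency cores and hint that a worker thread is latency-sensitive. The sysctl names assume Apple silicon on a recent macOS, and the quality-of-service call is only a scheduling hint rather than the real-time configuration a shipping audio engine would use.

```cpp
// Sketch: query core counts per performance level on Apple silicon and hint
// thread placement via QoS. Assumes the hw.nperflevels / hw.perflevelN sysctls
// exist (Apple silicon, recent macOS); this is not a complete audio engine.
#include <sys/sysctl.h>
#include <pthread.h>
#include <pthread/qos.h>
#include <cstdint>
#include <cstdio>

static uint32_t sysctl_u32(const char* name) {
    uint32_t value = 0;
    size_t size = sizeof(value);
    if (sysctlbyname(name, &value, &size, nullptr, 0) != 0) return 0;
    return value;
}

int main() {
    // On Apple silicon, perflevel0 is the performance cluster and
    // perflevel1 the efficiency cluster (a single level on older Intel Macs).
    const uint32_t levels  = sysctl_u32("hw.nperflevels");
    const uint32_t p_cores = sysctl_u32("hw.perflevel0.logicalcpu");
    const uint32_t e_cores = (levels > 1) ? sysctl_u32("hw.perflevel1.logicalcpu") : 0;
    std::printf("performance cores: %u, efficiency cores: %u\n", p_cores, e_cores);

    // A DAW might size its real-time worker pool from the performance-core
    // count and mark those threads as latency-sensitive. macOS then tends to
    // place them on performance cores itself, since explicit core pinning
    // (thread affinity) is not available.
    pthread_set_qos_class_self_np(QOS_CLASS_USER_INTERACTIVE, 0);
    return 0;
}
```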

I’m not sure where this overarching rule comes from, the one stating that making cores exclusive gives reliable access or that SoCs would overheat. For one thing, macOS (like many UNIX kernel derivatives) doesn’t let you set thread affinity, which is what would allow real-time threads to be fixed to specific cores. However, in most modern operating systems you wouldn’t have to do this, because the system is smart enough to manage it for you – especially for processing that occurs roughly once every 6ms, for example, assuming a buffer size of 256 samples at 44.1kHz. And specifically in macOS 11 and later, Apple provides a mechanism by which an application can inform the OS about its audio processing threads, to optimise for exactly these circumstances.
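As an illustration of the timing involved, the sketch below works out the callback period from an assumed buffer size and sample rate and then declares it to the scheduler using the long-standing Mach time-constraint policy. This is not the newer macOS 11 audio-workgroup mechanism referred to above, just a simpler way of showing how a thread can describe its real-time cadence to the OS; the buffer size, sample rate, and computation budget are illustrative values.

```cpp
// Sketch: telling the macOS scheduler about a real-time audio thread's cadence
// using the Mach time-constraint policy. Values are illustrative.
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <mach/thread_policy.h>
#include <cstdint>
#include <cstdio>

int main() {
    const double sample_rate = 44100.0;
    const double buffer_size = 256.0;
    const double period_seconds = buffer_size / sample_rate;   // ~5.8 ms per callback
    std::printf("callback period: %.2f ms\n", period_seconds * 1000.0);

    // Convert seconds into Mach absolute-time units.
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    const double ns_per_tick =
        static_cast<double>(timebase.numer) / static_cast<double>(timebase.denom);
    const uint32_t period_ticks =
        static_cast<uint32_t>(period_seconds * 1e9 / ns_per_tick);

    // Ask for (say) half of each period as guaranteed computation time.
    thread_time_constraint_policy_data_t policy;
    policy.period      = period_ticks;
    policy.computation = period_ticks / 2;
    policy.constraint  = period_ticks;
    policy.preemptible = TRUE;

    const kern_return_t kr =
        thread_policy_set(mach_thread_self(), THREAD_TIME_CONSTRAINT_POLICY,
                          reinterpret_cast<thread_policy_t>(&policy),
                          THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    std::printf("thread_policy_set: %s\n", kr == KERN_SUCCESS ? "ok" : "failed");
    return 0;
}
```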

The second point touches on the subject of deterministic performance. A purely HDX-based Pro Tools system, which carries out all signal processing on dedicated DSP chips, is an example of deterministic performance. If I load four instances of a compressor plug-in whose algorithm uses 5% of a DSP chip’s resources, Pro Tools will (more or less) require 20% of one chip to process those compressors. Every time I load this session, Pro Tools will use 20% of one chip for those four compressors; and, because the allocation is essentially fixed, I know those plug-ins will always be able to function.

Therefore, if the full session requires, say, 90% of the available DSP resources, it will require 90% every time and, in an ideal world, will always play back without any issues.
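A trivial sketch of the arithmetic behind that determinism, using the illustrative figures from the text (a hypothetical compressor costing 5% of one DSP chip), might look like this:

```cpp
// Sketch: why fixed DSP allocation is deterministic. Plug-in costs are the
// illustrative figures from the text; a real HDX allocator is far more involved.
#include <cstdio>
#include <vector>

int main() {
    const double chip_capacity = 100.0;            // one DSP chip, in percent
    std::vector<double> plugin_costs(4, 5.0);      // four compressors at 5% each

    double used = 0.0;
    for (double cost : plugin_costs) used += cost; // fixed cost, identical on every load

    std::printf("DSP used: %.0f%% of one chip\n", used);            // always 20%
    std::printf("fits: %s\n", used <= chip_capacity ? "yes" : "no");
    return 0;
}
```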

On the other hand, a system that incorporates native signal processing (which is carried out on the resources provided by your computer, without dedicated DSP chips) is inherently non-deterministic, no matter whether we’re talking about an SoC or a general-purpose CPU. As an example, if you create a Project that plays back right at the edge of falling over, it might play back fine at that moment. However, when you reload the Project, the system might not be able to achieve the same performance, especially if you’re running other applications, such as Dolby’s Renderer.

Measuring native processing load is rather tricky, as evidenced by trying to correlate the usage meters provided by a music or audio application with the system metrics displayed in macOS’ Activity Monitor or Windows’ Task Manager. The recent Cubase 12 release, for example, includes significant improvements to its system monitoring so that it can more accurately represent what’s happening when playing back a Project.
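As a rough illustration of why a DAW’s meter and Activity Monitor can disagree, the sketch below measures what many audio applications actually report: the fraction of each callback’s time budget spent processing, rather than whole-system CPU usage. The buffer size, sample rate, and simulated workload are made-up values, not taken from any particular product.

```cpp
// Sketch: a DAW-style "audio load" meter measures how much of each real-time
// callback's time budget was consumed, not overall CPU usage. Values are illustrative.
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    const double sample_rate = 48000.0;
    const int buffer_size = 128;
    const double budget_seconds = buffer_size / sample_rate;   // time available per callback

    using clock = std::chrono::steady_clock;
    const auto start = clock::now();
    // Stand-in for the real signal processing done inside one callback.
    std::this_thread::sleep_for(std::chrono::microseconds(900));
    const double elapsed =
        std::chrono::duration<double>(clock::now() - start).count();

    // "Audio load" as a DAW would report it: fraction of the budget consumed.
    std::printf("audio load: %.1f%% of a %.2f ms budget\n",
                100.0 * elapsed / budget_seconds, budget_seconds * 1000.0);
    return 0;
}
```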

In concluding, it’s worth mentioning a couple of things. Firstly, the idea of integrating different types of processing in a single chip is a direction the entire computer industry is heading in, not just Apple. Both AMD and Intel have announced similar heterogeneous architectures, with integrated CPU cores and even FPGA functionality. In fact, it’s worth noting that AMD recently closed its acquisition of Xilinx (the company founded by the inventors of the FPGA), and Intel bought Altera (another FPGA developer) many years ago.

Interestingly, the use of standalone FPGAs (often to replace dedicated DSPs) has been prevalent in audio hardware for decades. Two examples would be RME’s audio hardware, which uses Xilinx FPGAs to implement routing, mixing, and basic effects, and Apogee’s recent audio interfaces, which employ FPGAs from Altera to carry out similar duties while running far more complicated effects.

I think the claim about the M1 being better suited to non-linear editors, such as Final Cut Pro, Premiere Pro, or Media Composer, is baseless. For one thing, there’s a great deal of real-time processing required so an editor can immediately watch an idea without having to wait for offline rendering, as was necessary in the past. And it’s worth remembering that Avid’s Media Composer originally required accelerator cards for video processing in the same way Pro Tools needed similar cards for audio processing. Besides, there’s a reason Apple puts so many GPU cores into its professional machines for customers like those using non-linear editors. It’s doubtful those cores are sitting around with a cup of tea and a cigarette watching Countdown!”

Conclusion

So there you have it: SoCs are suitable for audio and, as Mark states, the direction of travel for the entire industry is toward SoCs. You can invest in this technology without any concerns about its suitability for complex audio tasks. File the original claim under false.
