I was struck by these two bits from JOM's "M.2 and PCIE" thread:
Originally Posted By: Simon - PG Music
Originally Posted By: Mike Halloran
Fortunately, if doing audio, you don’t need a very fast computer.
Indeed. I regularly use a 10-year-old laptop that has a 2nd-gen i5, 16 GB of RAM, and a 500 GB SATA SSD. No problems whatsoever.

... another thread about whether Core i9s are worth the extra money, and my own relatively modest machines (the most powerful is an i5-7400 with 8 GB of RAM and conventional hard drives).

And I wondered which optimisations really matter for audio work.

There are lots of layers to the question, so the answers are probably many and varied, depending on what software is in use and what's being done: off-line production, say, versus live performance.

My thinking goes like this:

Any application using samples should, where feasible, be loading those samples into RAM, so a generous amount of RAM seems pretty much de rigueur. CPU power is almost certainly not an issue. I/O throughput from bulk storage might be of interest, though.
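To put a rough number on that "I/O throughput" point, here's a minimal sketch that times how fast a file can be pulled from bulk storage into RAM. The 32 MB dummy file is an arbitrary stand-in for a sample library; real sample streaming is more complicated, but the measurement idea is the same.

```python
# Rough sketch: measure how fast sample data moves from disk into RAM.
# The file name and 32 MB size are arbitrary choices for illustration.
import os
import tempfile
import time

def measure_load_mb_per_s(path: str) -> float:
    """Read a file fully into RAM and return the throughput in MB/s."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        data = f.read()          # the whole "sample" lands in RAM here
    elapsed = time.perf_counter() - start
    return (len(data) / (1024 * 1024)) / elapsed

# Create a dummy 32 MB "sample library" and time loading it.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(32 * 1024 * 1024))
    path = tmp.name

rate = measure_load_mb_per_s(path)
os.remove(path)
print(f"loaded at {rate:.0f} MB/s")
```

On a SATA SSD versus spinning rust the difference shows up here, but once the samples are resident in RAM the storage device drops out of the picture.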

Any application using modelling to create the sounds will have a modest RAM footprint, but will want a generous amount of CPU.

Applications that do extensive signal processing, ditto.

As a general rule, more CPU cores should be beneficial, though applications must use them sensibly to see any gain. I/O to the CPUs and to the real world may still be the major bottleneck on real throughput.

In these days of mixed core types, e.g., performance-oriented and efficiency-oriented, which type(s) are the most valuable for audio?

Is overclocking/turbo-mode/whatever really of value for audio, or does it really only increase the power consumption, heat and fan noise?

I suspect the main contribution of SSDs, of whatever type, will be reduced load time into RAM; they probably have little, if any, beneficial impact on actual audio processing.

Latency has numerous factors:
- time to digitise and de-digitise;
- time to transfer data to/from the I/O device(s);
- the number of devices demanding data be moved, including non-audio devices like SSDs, Ethernet, graphics, and the like;
- time to gain access to a CPU;
- processing time within the CPU;
- DMA channels/lanes/streams/whatever-you-call-them.
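One factor that is pure arithmetic, and independent of CPU speed, is the buffer itself: a buffer of N frames at sample rate R takes N/R seconds to fill before anything can be processed. A quick back-of-envelope calculation:

```python
# Buffer latency: one buffer of `frames` frames at `sample_rate` Hz takes
# frames / sample_rate seconds to fill (and the same again to play out).
def buffer_latency_ms(frames: int, sample_rate: int) -> float:
    return 1000.0 * frames / sample_rate

for frames in (64, 128, 256, 512):
    print(f"{frames:4d} frames @ 48 kHz -> "
          f"{buffer_latency_ms(frames, 48000):.2f} ms")
```

So a 512-frame buffer at 48 kHz already contributes over 10 ms one-way before any of the other factors above are counted, which is why low-latency work pushes buffer sizes down and then needs the CPU to keep up with the smaller, more frequent deadlines.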

My own main understanding of those things comes much more from work with embedded ARM processors than with x86-derived processors. I imagine that the latter have, for some time, all come with hardware floating-point engines. On an embedded ARM, I would carefully select which core to use for which task and lock the code and data to that core. I imagine the options are rather more limited with PCs if only because the number and type of cores are a moving feast.
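For what it's worth, Linux does expose that embedded-style core pinning to ordinary processes. A minimal sketch, assuming Linux (os.sched_setaffinity is Linux-only, so this degrades gracefully elsewhere; core 0 is an arbitrary choice):

```python
# Sketch of pinning the current process to one core, as one might on an
# embedded ARM board. Core 0 is an arbitrary choice for illustration.
import os

def pin_to_core(core: int) -> bool:
    """Try to restrict this process to a single core; report success."""
    if hasattr(os, "sched_setaffinity"):     # Linux only
        os.sched_setaffinity(0, {core})      # 0 = the current process
        return True
    return False   # macOS/Windows: no such call in the stdlib

pinned = pin_to_core(0)
print("pinned to core 0" if pinned else "affinity control not available here")
```

On a mixed P-core/E-core machine the OS scheduler normally decides which type a thread lands on, so whether hand-pinning helps audio at all is exactly the open question above.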


Jazz relative beginner, starting at a much older age than was helpful.
AVL:MXE Linux; Windows 11
BIAB2025 Audiophile, a bunch of other software.
Kawai MP6, Ui24R, Focusrite Saffire Pro40 and Scarletts
.