My take on this is a little less optimistic, though I'm pretty sure it will improve.

The reason I'm less optimistic is that what the software is doing is identifying all the little nuances of the sound and attributing those nuances to a particular sound source, e.g., a bass guitar. How well it isolates depends on how well it makes that attribution. It can attribute by frequency, phase, timing and articulation, but if any other source supplies nuances that fit the bass closely enough, they will likely be wrongly attributed. A solid electric bass is a fairly predictable sound, so relatively easy, but even then, slaps and other percussive effects may sometimes be attributed to percussion rather than to the bass.

I presently hear quite noticeable 'fails' between vocals, strings and synths, and whilst I anticipate that will improve, I do wonder how much it realistically can improve, particularly when the audio processing perhaps interferes, e.g., by ducking, chorus and so on.
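To make the attribution point concrete, here's a toy numpy sketch (purely illustrative; real stem-separation tools use trained neural models, not a simple frequency mask). A steady "bass" tone and a broadband percussive "slap" are mixed, then split with a naive rule that attributes everything below 200 Hz to the bass stem. The slap's low-frequency content necessarily leaks into the bass estimate, which is exactly the misattribution problem:

```python
# Toy illustration only -- not the actual algorithm any product uses.
# Two synthetic sources are mixed, then "separated" with a naive binary
# frequency mask. Because the percussive hit is broadband, part of its
# energy is misattributed to the bass stem.
import numpy as np

sr = 8000
t = np.arange(sr) / sr  # one second of samples

bass = 0.8 * np.sin(2 * np.pi * 80 * t)             # low, steady tone
slap = 0.5 * np.random.randn(sr) * np.exp(-t * 40)  # broadband percussive burst
mix = bass + slap

spec = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / sr)

# Naive attribution rule: everything below 200 Hz belongs to "bass".
mask = freqs < 200
bass_est = np.fft.irfft(np.where(mask, spec, 0), len(mix))
rest_est = np.fft.irfft(np.where(mask, 0, spec), len(mix))

# Measure how much of the slap's energy ended up in the bass stem.
slap_low = np.fft.irfft(np.where(mask, np.fft.rfft(slap), 0), len(mix))
print(f"percussive energy misattributed to bass: {np.sum(slap_low**2):.3f}")
```

Real separators use far richer cues (timing, phase, learned timbre), but when two sources genuinely share those cues, some version of this leakage remains.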


Jazz relative beginner, starting at a much older age than was helpful.
AVL:MXE Linux; Windows 11
BIAB2025 Audiophile, a bunch of other software.
Kawai MP6, Ui24R, Focusrite Saffire Pro40 and Scarletts