Originally Posted by Simon - PG Music
Use any DAW. Load the original bass on one track and the separated bass on another, making sure they're both aligned. Using a plugin or tool of some sort, invert the polarity of ONE track. Then play back both tracks together and you'll hear the difference. If you then render the output you can then see the difference.
Well, I have an answer and my engineering instincts once again did not fail me.

As suggested by Simon, I inverted the polarity and could clearly hear a muffled but audible difference when the non-inverted and inverted tracks were played at the same time.
Definitions:
"Bass Track A" is the separated bass from a popular song from the 70s. The source audio is an MP3 file.
"Bass Track B" is "Bass Track A" with significant white noise added.
"Bass Track C" is the bass separated from Bass Track B (which contains no audible noise).
"Bass Track D" is the inversion of Bass Track C (and when played sounds just like Bass Track C)

Test Results:
1. The first test I ran was to ensure that the inversion took place properly, so I played C and D at the same time and got no audio out. This makes sense: they cancel each other out.
2. Then I played A and D, and the result was an audible, distorted, and muffled bassline that I could easily identify as the bassline of this song. I interpret what I hear when these tracks are played together as the error (albeit a small error) produced by the separation algorithm.
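The two tests above map directly onto a null test you can sketch in a few lines of numpy. Everything here is synthetic and assumed for illustration: a 55 Hz sine stands in for the bass, and a quiet 220 Hz tone stands in for the separation error.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
bass_a = 0.5 * np.sin(2 * np.pi * 55 * t)              # "original" bass
bass_c = bass_a + 0.01 * np.sin(2 * np.pi * 220 * t)   # "separated" bass + tiny artifact
bass_d = -bass_c                                       # polarity-inverted copy

# Test 1: C + D nulls to exact silence, confirming the inversion worked
null = bass_c + bass_d                                 # all zeros

# Test 2: A + D leaves only the separation error audible
residual = bass_a + bass_d
residual_db = 20 * np.log10(np.max(np.abs(residual)))  # roughly -40 dBFS here
```

The residual is exactly what you hear in the DAW when the original and the inverted separation play together: everything common to both tracks cancels, and only the algorithm's error survives.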

Conclusion
1. Based on this one "simple" test, there is in fact distortion and there are artifacts generated by the Studio One stem-separation process. This makes sense because an AI-trained algorithm is generalizing how to extract bass from a finite number of training examples. We can't expect it to be mathematically perfect.
2. Bass Track C contains distortion and/or artifacts even though someone with good hearing can't hear them.

But from a practical standpoint the separations are so good, especially when mixed with other tracks, that I (and I'm sure others) can't tell the difference.
The other point worth mentioning is that this was somewhat of a best-case (low-stress) test, because white noise is sonically very different from bass guitar and therefore should be relatively easy to separate from the bass. Songs with bass guitar, low-end keyboards, etc. would be more difficult for the algorithm, but even in those cases it does a good job to my ears.

Here is an interesting spin-off question: at what point is the error (observed and described above) deemed unacceptable by the developers of the algorithm? I'd think this would be an important question for anyone releasing a product with stem-separation capability. PreSonus obviously got it right.

Simon, thanks for mentioning inversion; I wouldn't have known Studio One could do this without your idea.


https://soundcloud.com/user-646279677
BiaB 2025 Windows
For me there’s no better place in the band than to have one leg in the harmony world and the other in the percussive. Thank you Paul Tutmarc and Leo Fender.