Originally Posted by Matt Finley
I’m intrigued by AudioTrack’s question.

Forgetting the exact song for a moment, at what point will software try to use AI to make a guess at what the bass was playing, when the stem separator isn’t sure?
I'm not sure how, or even whether, AI could achieve this. Presumably, though, it could analyse the other melodic components of the track, including the surrounding song structure, and from that work out what the bass would plausibly be playing alongside the stems it had already separated, suggesting corrections wherever the separator's confidence is low. In essence, could it smarten up and add value to the stem separation process?

(Disclaimer: I have no idea if this is really feasible, I was just throwing ideas around, but I think the concept has some merit.)
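For what it's worth, here's a toy Python sketch of the kind of logic I'm imagining. Everything in it is invented for illustration (the note data, the chord table, the 0.6 confidence threshold, the `patch_bass` function) and is not how any real separator works: where the separator's per-frame confidence is low, the "AI" falls back on harmonic context, here crudely reduced to snapping to the chord root.

```python
# Toy sketch: patch low-confidence bass-pitch frames using chord context.
# All data, names, and the 0.6 threshold are made up for illustration.

CHORD_TONES = {              # pitch classes for a few chords (C = 0)
    "C":  {0, 4, 7},
    "Am": {9, 0, 4},
    "F":  {5, 9, 0},
    "G":  {7, 11, 2},
}

def patch_bass(frames, threshold=0.6):
    """frames: list of (midi_note, confidence, chord_name).
    Where confidence is low, or the note clashes with the chord,
    snap to the chord's root (a crude stand-in for a real model)."""
    ROOT = {"C": 36, "Am": 33, "F": 29, "G": 31}  # low root notes
    out = []
    for note, conf, chord in frames:
        if conf >= threshold and note % 12 in CHORD_TONES[chord]:
            out.append(note)          # trust the separator
        else:
            out.append(ROOT[chord])   # fall back to harmonic context
    return out

frames = [
    (36, 0.95, "C"),   # clear C bass note: keep it
    (38, 0.30, "Am"),  # muddy frame: replace with the A root
    (29, 0.90, "F"),   # clear F: keep it
    (31, 0.55, "G"),   # below threshold: snap to the G root
]
print(patch_bass(frames))  # [36, 33, 29, 31]
```

A real system would obviously use something far more musical than "snap to the root" (bassline patterns, voice leading, the rest of the arrangement), but the shape of the idea is the same: confidence gates whether you trust the separated audio or the surrounding context.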


BIAB & RB2025 Win.(Audiophile), Sonar Platinum, Cakewalk by Bandlab, Izotope Prod.Bundle, Roland RD-1000, Synthogy Ivory, Kontakt, Focusrite 18i20, KetronSD2, NS40M Monitors, Pioneer Active Monitors, AKG K271 Studio H'phones