The way I'm thinking about using AI is that it might be able to work much more like how a human thinks and acts.

A skilled human can more easily hear through the surrounding noise and clutter and determine that the bass note is obviously (for example) an Eb. Software- or hardware-based separation, on the other hand, will often be clouded by all of those adjacent articulations and intonations, including the undesirable sonic variances present in lower-resolution recordings.
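To make the contrast concrete, here's a minimal sketch of the kind of signal-level pitch estimation conventional software relies on: a simple autocorrelation detector. Everything here (the `autocorr_pitch` function, the synthetic noisy Eb1 test tone) is hypothetical and for illustration only; real separation tools use far more sophisticated methods, but they all start from the raw waveform rather than from musical judgment.

```python
import numpy as np

def autocorr_pitch(signal, sr, fmin=30.0, fmax=200.0):
    """Estimate the fundamental frequency by finding the
    autocorrelation peak within a plausible bass range."""
    sig = signal - signal.mean()
    # Autocorrelation for non-negative lags only
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lag_min = int(sr / fmax)   # shortest plausible period
    lag_max = int(sr / fmin)   # longest plausible period
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sr / lag

# Synthetic test: an Eb1 bass note (~38.9 Hz) buried in heavy noise,
# standing in for a cluttered low-resolution recording
sr = 8000
t = np.arange(0, 1.0, 1 / sr)
eb1 = 38.89
clean = np.sin(2 * np.pi * eb1 * t)
noisy = clean + 0.8 * np.random.default_rng(0).standard_normal(len(t))

print(f"estimated pitch: {autocorr_pitch(noisy, sr):.1f} Hz")
```

On this clean synthetic case the estimate lands close to Eb1, but with real material (overlapping instruments, glissandi, vibrato, lo-fi artifacts) the autocorrelation peak gets smeared or pulled to the wrong lag, which is exactly the "clouding" described above.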

But bring AI into the picture, and you've introduced a more 'human-like' intelligence into interpreting exactly what that note is, the same way that a skilled musician's ear can achieve a better result than some software separation applications might.

If a skilled musician can determine that the note is definitely an Eb, then I'm convinced that AI could deliver the same result or better.

Just my thoughts on other ways to achieve a successful result. Perhaps premature at this stage, but give it time... it will happen.


BIAB & RB2025 Win.(Audiophile), Sonar Platinum, Cakewalk by Bandlab, Izotope Prod.Bundle, Roland RD-1000, Synthogy Ivory, Kontakt, Focusrite 18i20, KetronSD2, NS40M Monitors, Pioneer Active Monitors, AKG K271 Studio H'phones