Originally Posted by Gordon Scott
How far it could go with other sound sources is another question. If one extracts the voice, translates that to MIDI . . .
Hmmm, now there's some unique thinking. My thought on this from the beginning was to keep everything in audio format, like how SongMaster does it. But if the separated stem were to be pushed into MIDI, now you have a much cleaner, note-by-note representation of the bass . . . added complexity in doing it but a much cleaner output.
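The core of that audio-to-MIDI step is just mapping each pitch the tracker detects to its nearest MIDI note number. A minimal sketch (the frequencies below are made up for illustration; a real pitch tracker on the separated stem would supply them):

```python
import math

def freq_to_midi(freq_hz):
    """Map a detected fundamental frequency to the nearest MIDI note number.

    MIDI note 69 is A4 = 440 Hz; each semitone is a factor of 2**(1/12).
    """
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# A hypothetical detected bass line, roughly E1, A1, B1, E2:
detected = [41.2, 55.0, 61.7, 82.4]
notes = [freq_to_midi(f) for f in detected]
print(notes)  # [28, 33, 35, 40]
```

The hard part, of course, isn't this mapping but getting clean pitch and onset estimates out of a messy stem in the first place.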

Lately I've been thinking about a more hybrid approach. Rather than have an AI do everything from beginning to end, why not have a human in the loop? As I mentioned, SongMaster produced a bass stem with a gaping hole in the middle of the song. What if there were a checkbox: Are there gaps in the music? If "no", the tool would know to prevent any gaps from occurring in the stem. If "yes", you'd be prompted to specify where on the timeline the gaps occur. This approach could be extended to other areas of ambiguity, guiding the tool as it does its job.
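Nothing like this exists in SongMaster as far as I know, but a sketch of what the "yes, there are gaps" answer could drive under the hood: take the user's marked intervals and force the stem to silence there, leaving everything else untouched.

```python
def apply_user_gaps(stem, sample_rate, gaps):
    """Silence user-declared gap regions in a separated stem.

    stem: list of audio samples; gaps: list of (start_sec, end_sec) pairs
    the user marked on the timeline. Samples outside a declared gap are
    left untouched, so the model can't drop notes where the user says
    the instrument is playing, or invent notes where they say it isn't.
    """
    out = list(stem)
    for start, end in gaps:
        lo = int(start * sample_rate)
        hi = min(int(end * sample_rate), len(out))
        for i in range(lo, hi):
            out[i] = 0.0
    return out

# Toy example: a 1-second "stem" at 8 samples/sec with one declared gap
# from 0.25 s to 0.5 s (numbers are illustrative only).
stem = [1.0] * 8
print(apply_user_gaps(stem, 8, [(0.25, 0.5)]))
# [1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
```

The "no gaps" answer would be the inverse constraint: flag any stretch of silence in the output as an error to re-run rather than something to ship.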

I wonder if such human guidance was built into the protein-structure code.


https://soundcloud.com/user-646279677
BiaB 2025 Windows
For me there’s no better place in the band than to have one leg in the harmony world and the other in the percussive. Thank you Paul Tutmarc and Leo Fender.