So far, FWIW, the consensus seems to be that the technology simply isn't there yet.
No doubt, doing this right is a herculean challenge. Imagine a song with a heavy kick drum, low-end piano and keyboard, maybe some tuba or trombone, plus bass guitar, all hovering around the same frequency band. It really is asking a lot for an AI to find and isolate the bass guitar "needle" in a giant, time-varying low-end "haystack".
That said, I believe it's just a matter of time before it's done, and done well. The basis of my optimism is how LLMs have taken the computing world by storm; truly amazing, especially in medical applications.
I'm guessing that this will require a minimum two-step process. Step #1 isolates (as best it can) the bass instruments, in part by their frequency content, to produce a "pre-stem". Step #2 (the hard part) takes that pre-stem and applies musical knowledge to remove everything that isn't bass guitar. What remains would be the desired bass stem.
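Just to make Step #1 concrete, here's a minimal sketch of the crude frequency-based pass, assuming a simple low-pass filter with an arbitrary 250 Hz cutoff (the cutoff and file names are placeholders of mine, and a real separator would use something far smarter than this):

```python
# Minimal sketch of Step #1 only: carving out a low-frequency "pre-stem"
# with a plain low-pass filter. The 250 Hz cutoff and file names are
# illustrative assumptions, not part of any real product.
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

def make_pre_stem(in_path: str, out_path: str, cutoff_hz: float = 250.0) -> None:
    """Keep only content below cutoff_hz, where most bass energy lives."""
    audio, sr = sf.read(in_path)                 # audio shape: (frames,) or (frames, channels)
    sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
    pre_stem = sosfiltfilt(sos, audio, axis=0)   # zero-phase low-pass along the time axis
    sf.write(out_path, pre_stem, sr)

# Example usage:
# make_pre_stem("full_mix.wav", "pre_stem.wav")
```

Of course, everything below the cutoff (kick, low piano, tuba) survives this pass, which is exactly why Step #2 is the hard part.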
I believe David Cuny was the first to say that PGMusic would be well-positioned to do this kind of work, because BiaB could be used to create training data sets for such an AI program. Each data set would pair "the problem" (a song containing a bunch of overlapping low-frequency instruments) with "the solution" (the crystal-clear bass track). But you couldn't do this manually; you'd need a program to generate these sets automatically, by the thousands if not millions, across a range of instrument lists, genres, keys and tempos.
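For illustration only, the generation loop might look something like the sketch below. The render_arrangement() helper is a hypothetical stand-in for whatever automation hook would actually drive BiaB (or any arranger); I'm not assuming any particular API exists, and the style/key/tempo lists are placeholders.

```python
# Sketch of the data-generation loop: one (mix, bass stem) pair per
# combination of style, key and tempo. render_arrangement() is hypothetical.
import itertools
from pathlib import Path

STYLES = ["jazz_swing", "country_shuffle", "funk"]   # illustrative placeholders
KEYS = ["C", "F", "Bb", "Eb"]
TEMPOS = [80, 100, 120, 140]

def render_arrangement(style: str, key: str, tempo: int, out_dir: Path) -> None:
    """Hypothetical: ask the arranger to write mix.wav ("the problem")
    and bass.wav ("the solution") into out_dir."""
    raise NotImplementedError("depends on the arranger's automation interface")

def build_dataset(root: Path) -> None:
    for style, key, tempo in itertools.product(STYLES, KEYS, TEMPOS):
        out_dir = root / f"{style}_{key}_{tempo}bpm"
        out_dir.mkdir(parents=True, exist_ok=True)
        render_arrangement(style, key, tempo, out_dir)

# Example usage:
# build_dataset(Path("bass_stem_training_data"))
```

Scale those lists up and the combinations multiply quickly, which is the whole point: the arranger does the tedious work of producing matched problem/solution pairs.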
Thanks for mentioning SpectraLayers; I've added it to the list above.