Sadly, in the end, I think the artists will lose; here's my logic.

Just because a music-generating AI was not trained on an artist's songs does not mean the AI is incapable of producing music very close to that artist's . . . and "close" means close enough to be commercially successful in the marketplace. The technology may not be fully developed yet, but it will be.

This is because even the developers of today's large AI models do not and cannot fully understand them. There are simply too many interactions between the layers and connections for a human to grasp. In part, this is why AI models can "hallucinate". This attribute of hallucination (or unpredictability) is being exploited in fields such as drug discovery, where researchers look for unknown, unexpected and useful interactions between molecules in order to derive the chemistry of a novel drug compound that solves a particular problem.

Let's say you train an AI on data restricted to the works of The Eagles, Tom Petty, Chicago, CSNY, SuperTramp and The Beatles. I claim it is more than just theoretically possible for this AI to produce songs very close in sound and style to those of the Doobie Brothers.

So, imho, the copyright laws will need to be restrictive enough to say "no matter what training data was used, the output of the AI cannot sound like copyrighted work". I don't know if that is even possible. Who decides the definition of "sounds like"?


https://soundcloud.com/user-646279677
BiaB 2025 Windows
For me there’s no better place in the band than to have one leg in the harmony world and the other in the percussive. Thank you Paul Tutmarc and Leo Fender.