Originally Posted By: Bass Thumper
The next step may be artificial systems that can actually think, ponder, wonder and apply (or choose to not apply) moral and ethical values to a situation.

We're no doubt already there in the code that drives autonomous vehicles.

For centuries, ethicists have earned their stipends by fantasizing about railroad tracks that divide: one way there's a baby on the tracks, the other way there's an old man, so what do you do, and what are you responsible for having done it? Now we have Teslas, and you KNOW (although I'm sure the code is hyper-confidential) that within that programming there must be oodles of moral reasoning, like whether to (A) hit the pedestrian and the driver lives, or (B) swerve into a brick wall and the driver dies.

Sure, you can say the car isn't applying moral reasoning; that reasoning was applied by the developers who coded it in. But then, you might say the same thing about most people and what they learn as children. Either way, it's Christine who's making up her mind where to direct the unavoidable damage.
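Just to make that concrete, here's a purely hypothetical sketch of the kind of rule a developer might bake in. I have no idea what's actually in Tesla's code (as I said, it's confidential), and every name here is made up; the point is only that the "moral reasoning" is written by a human long before the car ever meets the situation.

```python
# Hypothetical illustration only; not anyone's actual autonomous-vehicle logic.

def choose_maneuver(pedestrian_ahead: bool, wall_to_the_side: bool) -> str:
    """Pick a maneuver when a crash is unavoidable."""
    if pedestrian_ahead and wall_to_the_side:
        # The moral choice lives right here, decided by the developer,
        # not by the car in the moment.
        return "swerve_into_wall"   # option (B): the driver takes the hit
    return "brake_straight"          # option (A): stay the course and brake
```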

Likewise, I have an application that generates music using cellular automata. I take what it gives me and mess with it until I like it, and then I call it music. Whose music? Well, I'll take full credit because I can, but honestly, it's more like a collaboration. And it would be a strain to call it a collaboration with the developers. It's a collaboration with the application.
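For anyone curious what that looks like under the hood, here's a minimal sketch of the general idea, not my actual application: an elementary cellular automaton (Rule 30 in this example) evolves a row of cells, and each generation gets collapsed to a note in a scale. The rule numbers, scale, and mapping are all my stand-ins for illustration.

```python
# Sketch of cellular-automaton music generation (illustrative, not the real app).

RULE = 30
WIDTH = 16
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, C major

def step(cells):
    """Advance the automaton one generation (edges wrap around)."""
    out = []
    for i in range(len(cells)):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (center << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

def generate_melody(generations=32):
    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1                  # single live cell as the seed
    melody = []
    for _ in range(generations):
        # Collapse each row to a scale degree, here by counting live cells.
        melody.append(SCALE[sum(cells) % len(SCALE)])
        cells = step(cells)
    return melody

print(generate_melody())
```

The "mess with it until I like it" part is everything that happens after a run like this spits out its note list.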

I guess some people would choke on that and push the credit out in various directions so it only applies to humans, but that just seems doctrinaire.

Bottom line: I think uncreative people can drive creative people to create creative software, and then we get the situation of uncreative people driving creative software, with no creative people left in the immediate picture. It can get weird.