Much has been discussed on this forum and elsewhere regarding AI. Some researchers are beginning to think about how we might determine whether a large AI system is actually conscious, even though no universally accepted scientific definition of consciousness yet exists. A pre-print paper on this subject crossed my desk the other day. It's quite academically dense, and I only read certain sections of it.
https://arxiv.org/abs/2308.08708

My personal view (and I know it doesn't mean much) is that such AI systems should not be allowed to be developed. But, just as with biological weaponry and germ warfare, that probably won't happen. My thinking is that self-aware, conscious, and sentient beings indicate life; possibly artificial life, but life nonetheless. And one thing we know about life is that it fights strongly to thrive and reproduce.
As a human race, do we really want to put ourselves in the position of negotiating with such a conscious being (one that might be embedded in our infrastructure around the world) over how much it is allowed to reproduce and what its rights might be?
Stay tuned; this is going to get quite contentious.