Care to give me your definition of "intelligence"?
Sure - "awareness" is a key element. There are many useful behaviors that don't necessarily demonstrate intelligence.
For example, my cat Nigel will perch on the edge of the litter box and poop onto the mat
outside the box. He'll then paw some of the kitty litter into a pile, and leave the box.
He's clearly got
some awareness of what's happening, but it's obviously more instinctual than intelligent.
Similarly, neural networks are trained to be able to solve
specific problems in
specific ways. Vision processing reduces the problem space down to pixels, while text reduces the problem space to words. Each requires a specific architecture, and is custom designed to work in the problem domain.
This is to some degree modeled after what designers of neural networks have found in nature. For example, there's a
lot of processing that takes place in the eye before the information is passed onto the brain. And it's not simply a one-way trip - there's a lot of interaction between the layers.
But despite all this useful activity taking place - by
my definition - the eye is not "aware".
Programs like ChatGPT can be trained to manipulate information, but they have no "awareness" of what that information is.
People used to think that if a computer could play chess, it would then be artificially intelligent. There's an embedded assumption here: something must have intelligence to play chess.
But it turns out that you can create a computer program capable of playing chess
without there being any artificial intelligence involved.
Or consider Arthur C. Clarke's Third Law:
"Any sufficiently advanced technology is indistinguishable from magic."
"Sufficiently advanced technology" is not the same as magic, it's just "indistinguishable" from magic.
Similarly, you can have a program capable of passing the Turing test, but that doesn't mean it's artificially "intelligent", merely that
it can pass the Turing test. I have a fairly solid understanding of how neural networks work. They are neither intelligent nor creative - their works are only derivative.
That's not to say that these systems aren't useful. But consider if you'd written a program to discover all the prime numbers between 1 and
n. It's tedious work, but fairly easy to implement. The resulting system would be a useful tool for performing a task, but it's got no
understanding of what it's doing. It's merely running an algorithm.
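To make the point concrete, here's a minimal sketch of such a program - one possible implementation, using the Sieve of Eratosthenes in Python:

```python
def primes_up_to(n):
    """Return all primes between 1 and n.

    The program "understands" nothing about primality - it just
    crosses off multiples according to a fixed mechanical rule.
    """
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Every multiple of a known prime cannot itself be prime.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The output is correct, and the tool is useful - but at no point does anything in the program grasp what a "prime" is.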
Similarly, AI tools are useful in performing much more complex work, but although they are much more sophisticated, they are still no more "intelligent" than a program that lists prime numbers.
I'll grant that my definition of "intelligence" is limited to
my personal experience of intelligence. I'll also grant that I don't
really know how my mind actually works, or if these mental models are just illusions.
On the other hand, simply because tools are useful and allow me to do things I couldn't do without them doesn't mean I'm willing to consider them partners in the work. The idea of granting a program co-ownership of a Nobel Peace Prize seems to me a fundamental misunderstanding of the current state of AI, granting it attributes that don't yet exist.
Intelligence is a hard nut to crack, and I don't think we're anywhere near a good understanding yet. Knowing what the current AI programs
can't do helps us understand what we don't know about intelligence.
Hopefully that helped answer the question.