Originally Posted by Janice & Bud
Would you consider this an example of somewhat older technology being relabeled AI by the business world? No problem with that, but it is interesting. I have photo editing apps that are amazing at replacing clouds, removing power lines, etc., and they’ve been around for years. But now the updates rename them AI. I’m not suggesting there’s a specific point at which certain algorithms become AI, or even that it matters. Mostly nomenclature. So I’ll take it to the extreme. My 19-year-old mother in 1942 left her little rural GA town for Washington DC, where she worked on an IBM punch card line. AI? 😀😀
Although I have built and trained a handful of artificial neural networks, I am certainly no expert in AI.

I wouldn’t give much credence to what marketeers have to say about AI, or to claims that their product is the best, until those claims are verified. Snake oil and bogus descriptions have existed since goods were first bought and sold.

I do agree that the line between true AI and something else is getting blurred, which is why I don’t get too hung up on the vocabulary. For me, either the tool is useful or it isn’t.

The text-completion/text-prediction tools common in online forums (such as this one) are, I’d say, fairly considered AI. These tools have been trained on many examples of properly written text and can suggest a likely next word based on the statistics of what they’ve been trained on. In my experience they do a good job.
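Even a toy version captures the idea. Here’s a minimal sketch in Mathematica (the corpus string and the predictNext function name are just made up for illustration):

[code]
(* toy bigram model: suggest the word that most often follows a given word *)
corpus = "the cat sat on the mat and the cat slept on the rug";
words = TextWords[ToLowerCase[corpus]];
pairs = Partition[words, 2, 1];  (* consecutive word pairs *)
predictNext[w_String] := Commonest[Cases[pairs, {w, next_} :> next]]

predictNext["the"]  (* -> {"cat"}, the most frequent follower of "the" *)
[/code]

Real tools use vastly larger training sets and far more context than a single preceding word, but the statistical flavor is the same.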

I think as the future unfolds we will see that, like many other things in life, a spectrum of AI capability will emerge across various tools and marketplaces. Roughly speaking, the larger the model, the more capable the tool.

An observation I find quite interesting is what happens when an LLM gets it wrong, and in certain domains I’m finding that this happens frequently.

One recent example (that won’t be of much interest to many here): I was struggling to fix a problem in some Mathematica code I had written. I was using a command called “Grid” to produce a grid of six x-y probability plots, but the x-axis labelling of some of the plots was jumbled together and unreadable. So I presented the problem to Copilot, and while it understood the problem, time and time again it offered ideas that were ineffective or just flat-out wrong.
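To give a rough idea, the setup looked something like this (a simplified stand-in, not my actual code):

[code]
(* six probability plots arranged with Grid; each panel keeps its full *)
(* default size, so the x-axis tick labels end up crowded and unreadable *)
plots = Table[Plot[PDF[NormalDistribution[0, s], x], {x, -6, 6}], {s, 1, 6}];
Grid[Partition[plots, 3]]
[/code]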

So with some offline experimentation I discovered that GraphicsGrid, not Grid, was the command with the flexibility I needed to fix the problem. I took that knowledge back to Copilot, and although it was “happy” that I had solved the problem, it also made clear that it could not update its knowledge to include details of GraphicsGrid. Clearly its training data did not include sufficient content regarding GraphicsGrid.
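For anyone curious, the fix boiled down to something like this (again simplified; the option values are just illustrative):

[code]
(* GraphicsGrid treats the panels as one combined graphic, so the overall *)
(* ImageSize and the Spacings between panels can be tuned until the labels fit *)
plots = Table[Plot[PDF[NormalDistribution[0, s], x], {x, -6, 6}], {s, 1, 6}];
GraphicsGrid[Partition[plots, 3], ImageSize -> 700, Spacings -> {20, 20}]
[/code]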

To be sure, this kind of knowledge gap is being observed around the world many times each day, in various domains, by those who ask these tools penetrating questions. And this is to be expected, as these LLMs are still at an early stage of development.

And I guess I should be happy that only its human handlers are able to incorporate new info into its training, which prevents bogus info from being injected by the public. But more importantly, allowing the thing to learn from positive and negative reinforcement in real time may be a step towards sentience.


https://soundcloud.com/user-646279677
BiaB 2025 Windows
For me there’s no better place in the band than to have one leg in the harmony world and the other in the percussive. Thank you Paul Tutmarc and Leo Fender.