Thanks for sharing. This was well written and brought back many memories. For years, my colleagues and I would receive the paper version of EE Times sent directly to our offices. Machine Design was another free industry journal we’d read from cover to cover. We learned much from those periodicals.

This episode was a stark reminder of something every engineer learns early: reliability requires reasoning. When we stop understanding the systems we build, we give away the very skill that makes us engineers.
How true.

There is also a subtler issue at play: AI’s tendency to tell us what we want to hear. Because most LLMs are trained to maximize user satisfaction, they naturally agree with us. Psychologists call it the “Yeasayer Effect”: the machine becomes a mirror that reflects our confidence instead of our curiosity. It feels validating, but it is dangerous.
I have personally observed this time and time again. The mainstream LLMs are specifically programmed to pick up on and amplify the leanings of the questioner, to the point of almost bending over backwards to please them. But I was unaware that psychologists had actually given it a name . . . good for them! Because of this, I've learned to frame my queries as neutral questions, and I find that what I get back is much more useful.
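For example, instead of asking something leading like "Isn't X clearly the better approach?", I'll ask "What are the trade-offs between X and Y?" and let it lay out both sides before I reveal my own leaning.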

I advise beginners that every prompt is a hypothesis, and every output is an experiment. If they learn to verify results, document their reasoning and share what they discover, they will emerge not just as coders, but as critical thinkers whose skillsets are supercharged with these tools.
Wisdom for sure.

I'm glad to see that, generally speaking, the days of someone on this forum saying "I do see a lot of artificial, but sadly, no intelligence" are gone.
Make no mistake, AI is intelligent and getting more so every week.


https://soundcloud.com/user-646279677
BiaB 2025 Windows
For me there’s no better place in the band than to have one leg in the harmony world and the other in the percussive. Thank you Paul Tutmarc and Leo Fender.