
highplainsdem

(62,491 posts)
7. Yes, if you cherry-pick responses, you can convince yourself genAI models are actually thinking. Especially
Tue Apr 14, 2026, 08:22 AM

if you really, really enjoy chatting with them.

This effect on humans is called the ELIZA effect, named for a chatbot developed 60 years ago:

https://en.wikipedia.org/wiki/ELIZA_effect

I don't have to spend time coding to know that many developers and software engineers consider genAI models stupid much of the time, and maddeningly inconsistent. I've seen plenty of messages from those coders online, especially on Hacker News. And that goes for every genAI model, including Claude.

I've seen entirely too many examples of chatbots offering one wrong answer after another, very confidently, and apologizing for each error and then offering another wrong answer, just as confidently.

I've also read too many stories about people being led into all sorts of mistakes after they decided the mindless software they were chatting with was really intelligent.

Which is why I've continued posting news stories and studies about how flawed and harmful this tech is.

Because the AI companies stole the world's intellectual property for training data, their algorithms can generate almost instant responses that copy the structure of language (and code) well enough to seem persuasive.

But the warnings AI companies always include - that their AI models make mistakes and users should always check results - aren't there because those companies are worrying too much, or failing to realize their bots are actually thinking (as you say you know from your personal experience). The warnings are there because those companies know damn well that their bots aren't truly intelligent, that they can hallucinate at any time and in any way, and the companies don't want to be held liable.

People often fall for the hype the AI companies peddle whenever those companies aren't busy setting up legal firewalls to shield themselves from responsibility for their products' flaws. People also fall for the bots' sycophancy.
