I studied Artificial Intelligence for four years, and I am not touching LLM AIs with a ten-foot pole.
It's not really about the insane electricity demands or the water usage, tho those are good reasons. It's not even, if I'm honest, about the disastrous effect on the sum of all human art and knowledge.
It's because a) I've studied enough AI to know it's a trick, a sort of linguistic illusion, and b) I've studied enough everything else to understand that I'm not immune to such illusions.
@Tattie I often use this when trying to explain LLMs to people
@michaelgemar @cjust @ianturton @Tattie You forgot the word "annoying" when describing the "podcast hosts"...
Anyway, the idea that an LLM thinks, or that it imitates human thinking, mostly comes from marketing. For example, AI companies have chosen to present LLM output in a chat-like interface because that increases adoption and trust, whether or not that trust is warranted. There's a lot of literature on this, for example:
Portraying Large Language Models as Machines, Tools, or Companions Affects What Mental Capacities Humans Attribute to Them
https://dl.acm.org/doi/abs/10.1145/3706599.3719710
The effects of human-like social cues on social responses towards text-based conversational agents—a meta-analysis
https://www.nature.com/articles/s41599-025-05618-w
The benefits and dangers of anthropomorphic conversational agents
https://www.pnas.org/doi/10.1073/pnas.2415898122
"When users cannot tell the difference between human interlocutors and AI systems, threats emerge of deception, manipulation, and disinformation at scale."