I wrote a longer-form piece over at post.news about the problems with using the verb "hallucinate" to describe AI chatbots that make things up.
Here's the link for those who want to read it formatted there.
https://post.news/article/2Lr1Pj6ITLA0LGxLi2CQIHUI1WB
I'll serialize it here as well, below.
@ct_bergstrom
Perhaps Hanlon's razor should be applied here.
As you mentioned, "hallucinate" is an established AI term. Engineers inexperienced in public communications use the jargon when speaking to reporters and the reporters parrot it back in their headlines without understanding the context. This is not unusual - another example that comes to mind is "heritability" in biology.
So, at least for now, I'll give them the benefit of the doubt.
@ct_bergstrom FWIW, since I'm in a contentious mood this afternoon, I disagree that hallucinations are necessarily pathological. A couple of counterexamples: sleep-onset/awakening hallucinations, or children who are convinced that they have an imaginary friend.
That doesn't mean that I'm happy about applying "hallucinations" to LLMs since it incorrectly implies that there's a consciousness in there that's misperceiving the external state of the world. And "bullshitting" also implies intent, if not to deceive, then to enhance status or impress.
Another term is required. Blathering is the best I can do but I'm hoping that someone else can do better.
@twitskeptic I like blathering. I've addressed the hallucinations-as-non-pathological point elsewhere, but I should have noted in the story that the primary reading of the term in our society is as pathology.