A useful reframing of what is commonly referred to as "hallucination" in #LLM #LargeLanguageModels: "Shameless Guesses, Not Hallucinations" from Astral Codex Ten https://www.astralcodexten.com/p/shameless-guesses-not-hallucinations
I think the "shamelessness" is part of the issue I have with current systems. Maybe they should be tuned to have more "shame" (in practice, a greater willingness to say "I don't know"), similar to how most are tuned to refuse to say offensive things.