#Hallucination is defined as "an #experience involving the apparent #perception of something not present". Consequently, it requires a #sensory apparatus.
Because an #LLM cannot experience its surroundings except through user-provided prompts, an erroneous statement it generates should be called a (computational) #mistake, not a hallucination.
It cannot be a #lie because there is no #intention involved in the generation of the statement.
How do you define #existence?
Do the thoughts we obviously have in our heads exist even if no one else can attest to them?