Random #AI observation: I think the commonly used phrases "struggles with facts" or "hallucinates" are poor descriptions of LLM behaviour.
They both feed the hype by anthropomorphizing LLMs without justification.
I think it's far more accurate to describe LLMs as "non-reality-based", since there is no concept of truth or fact in their construction.