... the wild and probably bogus details aside, though, I've never bought into the idea that hallucinating or BSing is an unsolvable, intrinsic flaw of LLMs. It may take not much more than operationalizing the process we humans use to construct an internally consistent world model: explore a range of consequences that follow from our beliefs, spot inconsistencies, and update the world model accordingly. That looks like something that could be attempted within well-trodden paradigms like RL or GANs, or with something not much more complex, so my bet is that we'll have largely worked it out within 4-5 years.
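
To make that loop a bit more concrete, here's a minimal toy sketch (all names hypothetical, model calls stubbed out) of what "explore consequences, spot inconsistencies, turn that into a training signal" might look like: sample a few consequences of a claim, count how many pairs contradict each other, and use the non-contradiction rate as a scalar an RL setup could treat as a reward.

```python
import itertools
import random

# Hypothetical stand-ins: in a real system these would be calls to an LLM
# (sample k statements it believes follow from `claim`) and to an NLI-style
# contradiction classifier. Here they are canned/toy so the script runs as-is.
def generate_consequences(claim: str, k: int = 4) -> list[str]:
    """Sample k statements the model treats as consequences of `claim`."""
    canned = {
        "The Eiffel Tower is in Rome": [
            "The Eiffel Tower is an Italian landmark",
            "You can see the Eiffel Tower from the Colosseum",
            "The Eiffel Tower was built for the 1889 Paris World's Fair",
            "Tourists visiting Rome often photograph the Eiffel Tower",
        ],
    }
    return random.sample(canned.get(claim, [claim] * k), k)

def contradicts(a: str, b: str) -> bool:
    """Toy contradiction check; a real one would be an NLI model."""
    text = a + " " + b
    return "Paris" in text and ("Rome" in text or "Italian" in text)

def consistency_reward(claim: str, k: int = 4) -> float:
    """Fraction of consequence pairs that do NOT contradict each other.
    This scalar could serve as (part of) an RL reward: the policy gets
    penalized when its own downstream beliefs clash with one another."""
    consequences = generate_consequences(claim, k)
    pairs = list(itertools.combinations(consequences, 2))
    consistent = sum(1 for a, b in pairs if not contradicts(a, b))
    return consistent / len(pairs)

if __name__ == "__main__":
    r = consistency_reward("The Eiffel Tower is in Rome")
    print(f"consistency reward: {r:.2f}")  # a low reward flags an inconsistent belief
```

In practice the stubs would be real LLM sampling plus a contradiction classifier, and the consistency score would be one reward term among several, but the shape of the loop is the point.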