Until I meet an LLM that is capable of responding "I don't know" when asked about something missing from its training data, instead of hallucinating, I won't reach for the word "intelligence" - it has certain foundational implications that are missing here. This isn't to say it can't be made into a useful tool, just that it looks so much like intelligence that *calling* it intelligence seems intentionally misleading.
I dip a toe back in every time a new model comes along - just basic factual questions - and have yet to see an "I don't know".
@Biggles GPT-4 is pretty good at saying when it doesn't know something these days - hallucinations can still slip through, but they are a lot less common: https://chat.openai.com/share/f75295f2-3252-4841-a532-dcd6b62418b6