Question for the Fediverse hive mind. Is there any evidence that #AI models' hallucination rates are getting ANY better over time?
I'm wondering if a 5 to 15% hallucination rate may just be the nature of the beast with LLMs and an unsolvable problem.
@tchambers They absolutely are getting better, but also yes, it is an unsolvable problem... mainly because they aren't designed to be fact machines, so we need to stop treating them like they should be.
@JustinH @tchambers Fwiw, they absolutely could be trained to respond that way, but "I don't know" doesn't fit the specs of the people buying it. That's not a restriction of the algorithm; it's a restriction of the business.
@JustinH @tchambers But more importantly, answering "I don't know" first requires you to know whether you know, and these things don't know that, or anything else. They're just pattern matchers. Acting like they know *any* facts (even facts about what they know) is giving them more credit than they're due.