@tchambers They absolutely are getting better, but also yes, it is an unsolvable problem... mainly because they aren't designed to be fact machines, so we need to stop treating them like they should be.
@JustinH @tchambers Fwiw, they absolutely could be trained to respond that way, but "I don't know" doesn't fit the specs of the people buying them. That's not a restriction of the algorithm; it's a restriction of the business.
@JustinH @tchambers But more importantly, answering "I don't know" first requires you to know if you know, and these things don't know that, or anything else. They're simple pattern matchers. Trying to act like they know *any* facts (even facts about what they know) is giving them more credit than they're due.
@LouisIngenthron @tchambers Even the most intelligent human can honestly reply to a query with "I don't know," which is something an LLM can't (honestly) do.
It's not that anyone is expecting "fact machines"; they're expecting "honest machines," which is impossible.