#ChatGPT just answered a burning question of mine that two human pharmacists could not answer. Yippee!
https://chat.openai.com/share/08da5a60-93ea-456d-be95-467da614ff78
You are right. I know LLMs don’t optimize for truth. It was a curiosity of mine; nothing consequential.
I did confirm this with a pharmacist acquaintance of mine.
About humans knowing what they don’t know: it’s funny you say that, because at least one of the pharmacists I asked went through all sorts of contortions, trying to explain to me the concept of volume, the idea of density, what a mg is, etc — before admitting she didn’t know what that mysterious “IU” meant 😆
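For what it’s worth, the reason “IU” is confusing is that an International Unit is defined per substance by biological activity, so there is no single IU-to-mass conversion. A minimal sketch, using the standard published factors for a few vitamins (the substance names and function here are illustrative, not from any particular API):

```python
# An IU (International Unit) measures biological activity, not mass,
# so each substance has its own IU-to-milligram conversion factor.
# These factors are the standard ones; vitamin E assumes the natural
# d-alpha-tocopherol form (the synthetic form uses a different factor).
IU_TO_MG = {
    "vitamin D3 (cholecalciferol)": 0.000025,  # 1 IU = 0.025 µg
    "vitamin A (retinol)": 0.0003,             # 1 IU = 0.3 µg
    "vitamin E (d-alpha-tocopherol)": 0.67,    # 1 IU = 0.67 mg
}

def iu_to_mg(substance: str, iu: float) -> float:
    """Convert an IU dose to milligrams for a known substance."""
    return iu * IU_TO_MG[substance]

# A common 2000 IU vitamin D3 supplement is only 0.05 mg (50 µg) of actual compound.
print(iu_to_mg("vitamin D3 (cholecalciferol)", 2000))  # → 0.05
```

So asking “how many mg is X IU?” only makes sense once you know which substance the label refers to — which is probably why the question tripped people up.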
LOL. Good point: knowing your limits doesn't necessarily mean admitting them. Something else for the AI developers to work on. - Jaime
@tripu
I'm pleased that you got an answer, but I would point out that the pharmacists have a sense of what they do and don't know, and will volunteer that fact. ChatGPT has no such sense, and probably no robust routine for admitting its limits. Correct me if I'm wrong. ;)
ChatGPT is not designed to produce correct answers, it's designed to fool you into thinking it's thinking.
Confirm its "answer" at the Mayo Clinic or WebMD sites, or on Wikipedia.
Best regards,
Jaime