If you use, expect to use, or have an opinion about using AI, you definitely need to read this. Jaw-dropping stuff.

amandaguinzburg.substack.com/p

@alfiekohn That's very interesting, but to me it just confirms that when you use a piece of technology you need to be aware of its limitations. While there is a technical issue here (LLMs hallucinating), there are two further issues: 1) people trusting LLMs' responses without fact-checking and reviewing them (something we do with humans!) and 2) people treating LLMs as sentient beings.

Indeed, when prompted correctly, you can see that ChatGPT immediately acknowledges not being able to retrieve the full text.
If LLMs are here to stay we'd better start educating people on how to use them properly.

@nicolaromano @alfiekohn ChatGPT does not acknowledge anything. It states the conditions that need to be fulfilled to access the full text, but it does not say whether it can meet those conditions. Even if it did so, you wouldn’t know if its reply was factually accurate or another hallucination.

All LLM output is hallucination, only some hallucinations coincide with reality. Interacting with an LLM is like having a lucid dream.

@ArtHarg @alfiekohn Yes, that is exactly my point. The answer depends on the prompt. If you don't ask it to check accessibility, then it likely won't say anything about it. And you're right: even if you do, it might say something wrong; you cannot trust it. That is why you need to actually check that the answer is factually correct, and that's why in many cases using a chat LLM won't actually save you time. There are use cases for these systems, but they should IMO be used as a starting point; they're nowhere near good enough to produce reliable, usable output in a robust manner.

I disagree that it's all hallucinations (not by the definition of hallucination, but that's semantics); most of the output of an LLM is factually correct. The problem is: how much incorrect output can be tolerated without harm? Also, there's plenty of human-generated BS out there, yet we use the Internet because there is a good deal of good human-generated content on it. We shouldn't ban LLMs; we should use them appropriately. If you want to put a nail in the wall, use a hammer; if you want to crack an egg, do not use a hammer.

@nicolaromano @ArtHarg @alfiekohn

So you're proposing we just wait until all humans stop making basic assumptions in communication and then AI will be safe and ethical.

Qoto Mastodon
