@jsrailton
It really burns me that people are hyping and using ChatGPT and the like without understanding that "hallucinations" are not bugs: these systems only work by hallucinating. They are models of "what should the next word be to make something that looks correct" rather than "what is correct", and no amount of patching is going to make a hallucination machine switch from "looks" to "is".

And those BEHIND the tools either know or should know this, but simply choose to ignore it.
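
A minimal sketch of what "predict the next word" means in practice, assuming the Hugging Face transformers library and the small GPT-2 model (the hosted chatbots are far larger but built on the same principle): the decoding loop only asks which token is most plausible given the text so far; nothing in it checks whether the resulting claim is true.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "The capital of Australia is"
ids = tokenizer.encode(text, return_tensors="pt")

for _ in range(5):
    with torch.no_grad():
        logits = model(ids).logits          # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()        # pick the most *plausible* continuation,
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # not the most *true* one

print(tokenizer.decode(ids[0]))
```

Whether the completion happens to be factual depends entirely on what patterns dominated the training data, not on any notion of correctness inside the loop.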

@ncweaver @jsrailton ...frankly, if people just read an article or two about how these models work, it's quite clear. For some applications (like drafting an email or a letter) that behaviour is even necessary. Without wanting to belittle the risks...

@ErikJonker @ncweaver @jsrailton How they're being sold is important. They're being sold as magic truth machines.


@fuzzysteve @ErikJonker @ncweaver @jsrailton Where do you get those "Truth machine" ads? Everywhere I go I see "careful, this thing isn't factually correct" stickers.

@dpwiz @ErikJonker @ncweaver @jsrailton "This will confidently lie to you" isn't shown anywhere with Bing, for example. It's not about what people say; it's about what they imply when they rhapsodize about these tools.
