I see people being deceived by this again and again: ChatGPT can NOT read content from URLs that you give it, but it will pretend that it can (and can be incredibly convincing when it does that)

Constantly debunking this feels like a Sisyphean task, but it's really important to spread this message any time you see anyone falling into this (very understandable) trap

simonwillison.net/2023/Mar/10/


@simon

I wonder why they didn't make it more honest about its capabilities: it should respond in the tone of a modest opinion and acknowledge its fallibility up front. Instead it responds with absolute certainty and always makes it seem like everything is going as the user expects, including reading the content of a link.

It would be super simple to detect a link in the input and show a warning message: "ChatGPT can't read the contents of links". It seems to me that we are forgetting traditional algorithms and interfaces in the name of an experience that resembles Hollywood AIs at all costs, because that conveys an idea of "the future".
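As a rough sketch of how trivial that guardrail would be (the function name and warning wiring are hypothetical, not anything OpenAI ships), a regex scan of the user's message is enough:

```python
import re

# Matches http(s) URLs and bare www. links in free text.
URL_PATTERN = re.compile(r"https?://\S+|\bwww\.\S+", re.IGNORECASE)

def link_warning(message: str):
    """Return a warning string if the message contains a link, else None.

    Hypothetical helper illustrating the suggested UI check: run this
    on the user's input before the model responds, and surface the
    returned warning in the interface.
    """
    if URL_PATTERN.search(message):
        return "ChatGPT can't read the contents of links"
    return None
```

For example, `link_warning("summarize https://example.com/post")` would return the warning, while a message with no URL returns `None` and the interface stays unchanged.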

@post @simon

A lot of "we" going on, when the basic safety tools and tips you mention could be added at near-zero cost by the people selling it.

@post I pinged OpenAI on Twitter and suggested a similar fix; no idea if I'll get any traction on that, though.

Qoto Mastodon