I wonder why they didn't make #ChatGPT more honest about its capabilities: it should respond in the tone of a modest opinion and preface its answers by acknowledging its fallibility. Instead it responds with absolute certainty and always makes it seem like everything is going as the user expects, including reading the content of a link.
It would be super simple to detect a link in the input and show a warning message: "ChatGPT can't read the contents of links". It seems to me that we are forgetting traditional algorithms and interfaces in the name of an experience that, at all costs, resembles Hollywood AI, because it conveys an idea of the "future".
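Just to illustrate how little it would take, here's a minimal sketch in TypeScript; the regex, function name, and warning copy are illustrative assumptions, not anything OpenAI actually ships:

```typescript
// Minimal sketch: flag user input that contains a URL before sending it.
// The pattern and warning text below are assumptions for illustration only.
const URL_PATTERN = /https?:\/\/\S+/i;

function linkWarning(input: string): string | null {
  return URL_PATTERN.test(input)
    ? "ChatGPT can't read the contents of links."
    : null;
}

// Example:
console.log(linkWarning("Summarize https://example.com/article"));
// -> "ChatGPT can't read the contents of links."
```

A plain client-side check like this runs before the model ever sees the message, so it costs nothing per request.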
@post I pinged OpenAI on Twitter and suggested a similar fix; no idea if I'll get any traction on that, though.
@post @simon
A lot of "we" going on, when the basic safety tools and tips you mention could be added at near-zero cost by the people selling it.