"Why ChatGPT should be considered a malevolent AI – and be destroyed"
Great headline :) As it often does, #ChatGPT flat-out made up incorrect stuff when asked about the writer; in this case, that he was dead.
And what's this BS about "frameworks"? The story says that "According to Jon Neiditz, a lawyer with an interest in #AI ethics, ChatGPT was trained under the following frameworks:" and then lists a bunch of things that one would like an #LLM to be (like fair and ethical and "privacy by design" and so on), but which this one in fact isn't.
What does "under a framework" mean, anyway? By this evidence, it means nothing at all.
https://www.theregister.com/2023/03/02/chatgpt_considered_harmful/
@tonic
We actually use computers and AI to inquire about specific people all the time! He gives some very good examples in the article of where errors like this might be much more serious than just personal annoyance.
I think it's extremely significant that large language models are very bad at giving correct answers to a very wide variety of questions, including questions about the background of individual people.
@tonic
Well, right. :) It's bad at a lot of things that people might tend to try to use it for. That's sort of the point!
@ceoln that's your use case maybe, but using large language models for this (not only because it's not a commensurate tool, but also because it doesn't work very well) is banal and ridiculous to me. I won't be joining the "mad at a tool" crowd over this, I guess 😉