They've done a bunch of new stuff in #ChatGPT, though, compared to #GPT3. It's apparently better at not just flat-out making stuff up. (Note that I work for Google, but not in any area related to this stuff.)
I love GPT3 and similar models as a source of crazy fact-free stories; not sure how I feel about people getting used to consulting some offspring of them as a source of truth.
Oh, absolutely! It still makes things up, for sure. It just doesn't seem to do it nearly as often or as enthusiastically as GPT3; and the blog page you link to there goes some way toward explaining why.
I'm afraid that, paradoxically, it not making stuff up as often will trick people into thinking it doesn't do it at all, and therefore into trusting it more than they should.
@mijustin @ceoln@qoto.org And yet they present it as something you might use like a search engine (in their examples). Also, yeah, I know. I read it.
@ceoln @emilymbender OpenAI's own ChatGPT page explicitly says:
"ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth"
https://openai.com/blog/chatgpt/