We see that #ChatGPT can be astonishingly plausible without necessarily being accurate, factual, or “truthful.” It generates misinformation as persuasively and enthusiastically as correct information, making it difficult to distinguish truth from lies.
Is this a choice by #OpenAI, or is it inevitable with the large-language-model approach?
It’s all too appropriate for our time, as mendacity and bullshit-artistry poison our political discourse—demagogues thriving in a fact-free realm.
@ct_bergstrom @JamesGleick @emilymbender One subtlety that's lost in ChatGPT discussions is that it really is an LLM that's *then* fine-tuned with human feedback through reinforcement learning. This paper from OpenAI explains the process. https://arxiv.org/pdf/2203.02155.pdf
So, yes, ChatGPT does generate random bullshit, but not quite as much as a raw, un-fine-tuned LLM would be expected to. Not sure if @emilymbender has addressed this.
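To make that fine-tuning step concrete, here's a toy sketch of the KL-penalized reward as I read it from the paper. This is my own illustration, not OpenAI's code; the function name and the numbers are made up.

```python
# Toy sketch of the RLHF reward shaping described in the paper linked above.
# Assumption: this is my reading of the KL-penalized reward, not OpenAI's code;
# the reward-model score and log-probs below are made-up numbers.

def shaped_reward(rm_score, logp_rl, logp_sft, beta=0.02):
    """Reward-model score minus a KL-style penalty that keeps the fine-tuned
    (RL) policy close to the supervised (SFT) starting point."""
    return rm_score - beta * (logp_rl - logp_sft)

# Dummy example: the reward model likes the completion (score 1.3), but its
# token log-probs have drifted 13 nats from the SFT model, so that drift is taxed.
print(shaped_reward(rm_score=1.3, logp_rl=-42.0, logp_sft=-55.0))  # -> 1.04
```

The point of the penalty is to let the reward model steer the LLM toward answers humans rated highly without letting it wander far from the base model's distribution, which is part of why ChatGPT bullshits less than a raw LLM would.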
@ct_bergstrom @JamesGleick @emilymbender Summary of the RL process, from the paper.
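For reference, the combined objective the paper optimizes with PPO is roughly the following (my transcription; beta is the KL-penalty coefficient and gamma controls the pretraining-data mix):

```latex
\[
\mathrm{objective}(\phi) =
\mathbb{E}_{(x,y)\sim D_{\pi_\phi^{\mathrm{RL}}}}
\Big[\, r_\theta(x,y) \;-\; \beta \log\!\big(\pi_\phi^{\mathrm{RL}}(y\mid x)\,/\,\pi^{\mathrm{SFT}}(y\mid x)\big) \Big]
\;+\; \gamma\, \mathbb{E}_{x\sim D_{\mathrm{pretrain}}}\Big[ \log \pi_\phi^{\mathrm{RL}}(x) \Big]
\]
```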