We see that #ChatGPT can be astonishingly plausible without necessarily being accurate, factual, or “truthful.” It generates misinformation as persuasively and enthusiastically as correct information, making it difficult to distinguish truth from lies.

Is this a choice by #OpenAI, or is it inevitable with the large-language-model approach?

It’s all too appropriate for our time, as mendacity and bullshit-artistry poison our political discourse—demagogues thriving in a fact-free realm.

@JamesGleick I believe that it's inherent in the LLM approach. You probably already follow @emilymbender, who has written compellingly about the limits of LLMs.

Maybe there is scope for using LLMs as a natural language layer atop actual knowledge models, but LLMs on their own are cursed to be little more than random bullshit generators.

(Never mind that they've been trained on text from the internet, itself a form of random bullshit generator.)

@ct_bergstrom @JamesGleick @emilymbender One subtlety that's lost in ChatGPT discussions is that it really is an LLM that's *then* fine-tuned with human feedback through reinforcement learning. This paper from OpenAI explains the process. arxiv.org/pdf/2203.02155.pdf

So, yes, ChatGPT does generate random bullshit, but not quite as much as a strict LLM would be expected to generate. Not sure if @emilymbender has addressed this.
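For readers unfamiliar with the paper linked above, here is a minimal structural sketch of the three-stage pipeline it describes (supervised fine-tuning, reward modeling, then reinforcement learning against the reward model). This is not OpenAI's code; all names and function bodies are hypothetical placeholders that only show how the stages chain together.

```python
# Hypothetical sketch of the InstructGPT-style training pipeline
# (arxiv.org/pdf/2203.02155.pdf). Placeholder stubs only; no real API is used.

def supervised_fine_tune(base_lm, demonstrations):
    """Stage 1: fit the pretrained LM to human-written example responses."""
    ...  # gradient descent on (prompt, human demonstration) pairs
    return base_lm

def train_reward_model(fine_tuned_lm, preference_rankings):
    """Stage 2: learn a scalar reward from human rankings of model outputs."""
    ...  # fit a model that scores outputs humans preferred more highly
    return fine_tuned_lm

def reinforce_with_ppo(fine_tuned_lm, reward_model, prompts):
    """Stage 3: optimize the LM's outputs against the learned reward (PPO)."""
    ...  # sample responses, score with reward_model, update the policy
    return fine_tuned_lm

def build_chat_model(base_lm, demonstrations, preference_rankings, prompts):
    """Chain the three stages: the result is still an LLM at its core,
    just nudged toward outputs that human raters preferred."""
    sft_lm = supervised_fine_tune(base_lm, demonstrations)
    reward_model = train_reward_model(sft_lm, preference_rankings)
    return reinforce_with_ppo(sft_lm, reward_model, prompts)
```

The point the sketch makes is the one in the post above: the human-feedback stages reshape which outputs the model prefers to emit, but they sit on top of the same next-token predictor, so they reduce rather than eliminate the confident-nonsense failure mode.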
