We see that #ChatGPT can be astonishingly plausible without necessarily being accurate, factual, or “truthful.” It generates misinformation as persuasively and enthusiastically as correct information, making it difficult to distinguish truth from lies.
Is this a choice by #OpenAI, or is it inevitable with the large-language-model approach?
It’s all too appropriate for our time, as mendacity and bullshit-artistry poison our political discourse—demagogues thriving in a fact-free realm.
@JamesGleick I believe that it's inherent in the LLM approach. You probably already follow @emilymbender, who has written compellingly about the limits of LLMs.
Maybe there is scope for using LLMs as a natural language layer atop actual knowledge models, but LLMs on their own are cursed to be little more than random bullshit generators.
(Never mind that they've been trained on text from the internet, itself a form of random bullshit generator.)