Meta. OpenAI. Google.

Your AI chatbot is not *hallucinating*.

It's bullshitting.

It's bullshitting, because that's what you designed it to do. You designed it to generate seemingly authoritative text "with a blatant disregard for truth and logical coherence," i.e., to bullshit.

@ct_bergstrom Disagree. They're designed to mimic what a human would write. If they end up bullshitting, it's because the models aren't good enough, not because that's what they're designed to do.

@moultano Humans have an underlying knowledge model. They have beliefs about the world, and choose whether to represent those beliefs accurately or inaccurately using language.

LLMs do not have an underlying knowledge model; they have no concept of what is true or false in the world. They just string together words they don't "understand" in ways that are likely to seem credible.
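(A minimal toy sketch of the loop being described here, not any production LLM: each next word is chosen purely by how plausible it looks after the previous one, with no representation of whether the result is true.)

```python
# Toy "next-word" sampler: pick each word by observed plausibility only.
import random
from collections import defaultdict

corpus = (
    "the model writes text that sounds right . "
    "the model has no idea whether the text is right . "
    "the text only has to sound right ."
).split()

# Count which words follow which (a toy bigram model).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n_tokens=12):
    out = [start]
    for _ in range(n_tokens):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        # Sample proportionally to frequency: "credible", not "true".
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
```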

It's not a matter of making better LLMs; it'll take a fundamentally different type of model.


@ct_bergstrom @moultano I'm starting to have doubts about the idea that LLMs are "stochastic parrots" that can't generalize after watching a short talk from Francois Charton of Meta at NeurIPS 2022.

TL;DR - he trained a small LLM to learn how to diagonalize matrices using only triplets of the similarity transform. No hallucinations were observed.

The talk was "Leveraging Maths to Understand Transformers" neurips.cc/virtual/2022/worksh

@ct_bergstrom @moultano Ugh, not small LLM - small transformer based language model.
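(In case it helps picture the setup, here is a hypothetical sketch of the kind of supervised triplets such an experiment could be trained on; the exact encoding from Charton's talk is not described in this thread. Each example pairs a matrix A with the factors of its similarity transform A = P D P^T, which a small transformer-based model could learn to predict from A.)

```python
import numpy as np

def make_triplet(n=5, rng=np.random.default_rng(0)):
    # Symmetric A guarantees real eigenvalues and an orthogonal eigenbasis.
    m = rng.normal(size=(n, n))
    a = (m + m.T) / 2
    eigvals, p = np.linalg.eigh(a)   # A = P diag(eigvals) P^T
    d = np.diag(eigvals)
    return a, p, d

a, p, d = make_triplet()
# Sanity check: the triplet really is a similarity transform of A.
assert np.allclose(p @ d @ p.T, a)

# A transformer would see an encoding of A as input and be trained to emit
# tokens encoding P and D; "no hallucinations" would mean the predicted
# factors reconstruct A to within tolerance.
```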
