Whatever improvements ChatGPT o1-preview may have to offer, OpenAI has introduced a whole new layer of bullshit with the "status messages" that make up nonsensical tasks the model pretends to be doing.

This matters—a lot—because this particular bullshit is not a limitation of the system.

This is an active choice by OpenAI to mislead people about what the system is doing. It’s Wizard of Oz shit.

The fact that they use their own bullshit generator to do it is icing on the irony cake.

(The stats question is from a forthcoming book by Gary Smith at Pomona.)

To be clear: I'm not saying this is just a fake "loading" screen. Each phrase corresponds in some sense to something the model is doing, and we can get more information about that (see below).

My problem is that these phrases are not accurate representations of what the system is doing.

Look at the language. Over and over again, it imputes a sort of cognitive agency that LLMs simply don't have.

"I’m piecing together"

"I’m noting"

"I’m thinking through"

"I noticed that"

"Hmm, I’m thinking about"

LLMs don't note, think, or notice.

There's an even bigger problem here, which is that at best these are summaries of output stages that are chained together in the model, NOT descriptions of the processes that generated those output stages.

OpenAI should be well aware that LLMs are not able to accurately report *why* they did something. They are only able to make up post-hoc rationalizations based on the contents of their context window, including their own output.

So these are at best guesses, not true descriptions of motivations or processes.


@ct_bergstrom But even "post-hoc rationalizations" supposes reasoning, when really, they're producing more plausible-sounding text when asked to explain their plausible-sounding text.
