@kevinroose of the @NewYorkTimes has posted his transcript of one of the "unhinged" conversations that appeared recently.

nytimes.com/2023/02/16/technol

This is interesting because it is "data": the entire conversation, unedited.

Reading between the lines, it appears that questions that require Bing to take a self-referential perspective ("introspection") lead to resonances and recursion, evident in repetitive phrases nested in repetitive sentences, nested in repetitive paragraph structures, often with just a few adjectives swapped out, very often containing "I" statements, and often appearing as triplets and sextuplets.

I find it intriguing to think of the recursive feedback loops that exist in our own minds. Their bright side is that they are a source of creativity; their dark side is instability and loss of connection to reality.

A quantitative analysis of these patterns might be interesting (length, density, etc.).
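
As a rough illustration of what such an analysis could look like, here is a minimal sketch that counts repeated word n-grams in a transcript and reports a simple repetition density. The file name and the n-gram size are illustrative assumptions, not anything taken from the conversation itself:

```python
from collections import Counter

def repetition_stats(text: str, n: int = 5):
    """Fraction of word n-grams that occur more than once, plus the
    most frequent ones: a crude measure of repetition density."""
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    density = repeated / max(len(ngrams), 1)
    return density, counts.most_common(5)

# Hypothetical usage; "bing_transcript.txt" is a placeholder file name.
with open("bing_transcript.txt") as f:
    density, top = repetition_stats(f.read())
print(f"repetition density (5-grams): {density:.2%}")
for gram, count in top:
    print(count, " ".join(gram))
```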

Thinking of such LLMs as mechanisms that merely predict the next token from a probability vector is fundamentally misleading: the vector is recomputed after every token, so each sampled token feeds back into the computation of the next one.
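
To make that concrete, here is a schematic of the autoregressive loop, written against a Hugging Face-style causal LM interface (the `model` and `tokenizer` objects are assumptions for illustration; this is obviously not Bing's actual decoding code). The point is that the distribution over the next token is recomputed from the full context at every step, so the model's own output feeds back into it: exactly the kind of loop where resonances can arise.

```python
import torch

def generate(model, tokenizer, prompt: str, steps: int = 50) -> str:
    # Encode the prompt; `ids` is the running context.
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(steps):
        # The probability vector is a function of the ENTIRE context
        # so far and is recomputed from scratch after every token.
        logits = model(ids).logits[:, -1, :]
        probs = torch.softmax(logits, dim=-1)
        # Sample one token and append it to the context: the feedback
        # loop through the model's own output closes here.
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0])
```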

Another thing to remember is that these are _pretrained_ transformers, i.e. the conversation is already latent in the network from the beginning, and every new thread of interaction is a fresh start.
