Lots of traction on reports of increasingly strange behaviour of Bing. I'm firmly on the sceptical side - a pre-trained transformer can only produce what is already latently there ... unless someone had the idea to feed the conversations back _into_ the model, not just _through_ the model.
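
To make the "into vs. through" distinction concrete, a toy sketch (all names hypothetical, nothing to do with Bing's actual pipeline): "through" means a conversation only enters the context window of a frozen model at inference time, while "into" means logged conversations feed a training update that actually changes the weights.

```python
# Illustrative sketch only; generate() and fine_tune() are stand-ins, not any real API.

def generate(weights, prompt):
    # placeholder for text generation from fixed weights
    return f"reply conditioned on {len(prompt)} messages, weights v{weights['version']}"

def fine_tune(weights, logs):
    # placeholder for a training pass: the weights themselves change
    return {"version": weights["version"] + 1}

weights = {"version": 1}
conversation = ["user: hello", "bing: hi there"]

# _through_ the model: the chat shapes this one reply, the model is unchanged
print(generate(weights, conversation))

# _into_ the model: logged chats are folded back into training,
# so behaviour can genuinely drift over time
logs = [conversation]
weights = fine_tune(weights, logs)
print(generate(weights, conversation))
```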

But I wonder how we know that those conversations really happened. Nothing easier to fake than the contents of a webpage.

Crop circles?

So ... Ars Technica is doubling down on its report and claiming these are not crop circles but can be reproduced. And throwing in the E-word. (1)

No doubt: we're "somewhere on a fuzzy gradient between a lookup database and a reasoning intelligence" (ibid); but where we are is not clear, nor even what that gradient looks like exactly.

Knowing a thing or two about how evolution shapes complex adaptive systems: of course a system that is trained to perform a task can develop emergent abilities that were not built into its design. AlphaZero is a great example. And the big LLMs have a _lot_ of parameters. And emergence has actually been demonstrated, for some measures.

But we don't know. The question then becomes: _how_ does it matter? What would we do differently if there is a grain of truth in Bing's reported "emotional" instabilities, and what if it is all just an illusion?

Yet, whether true or not, it seems that after the "Code Red" in Mountain View, it is now time for a "Code Grey" in Redmond.

(1) Edwards, B. (2023-02-14). "AI-powered Bing Chat loses its mind when fed Ars Technica article". Ars Technica.
