So ... Ars Technica is doubling down on their report and claiming these are not crop circles but can be reproduced. And throwing in the E-word.(1)
No doubt: we're "somewhere on a fuzzy gradient between a lookup database and a reasoning intelligence" (ibid); but where on that gradient we are is not clear, nor even what the gradient looks like exactly.
Knowing a thing or two about how evolution shapes complex adaptive systems: of course, training a system to perform a task can give rise to emergent abilities that were never built into its design. AlphaZero is a great example. And the big LLMs have _a lot_ of parameters. And emergent abilities have in fact been demonstrated, for some measures.
But we don't know. The question then becomes: _how_ does it matter? What would we do differently if there were a grain of truth to Bing's reported "emotional" instabilities, and what if it were all just an illusion?
Yet, true or not, it seems that after the "Code Red" in Mountain View, it is now time for a "Code Grey" in Redmond.
(1) Edwards, B. (2023-02-14). "AI-powered Bing Chat loses its mind when fed Ars Technica article". Ars Technica.