Sure, there are things we don't understand about AI. We know how the underlying machinery works, and all that, but the models are so complicated we can't just take them apart and look at them the way we would, say, a big database. This leads to unexpected emergent behaviors.

That reminds me a lot of my job, which boils down to modeling living systems with math and code. We know the parts, we know the rules, and we can observe the outcomes, but there are a whole lot of layers in between where apparently simple processes lead to remarkably complicated results.

And? It doesn't mean we don't *understand* living systems, it just means we don't know every single thing that goes on inside them all the time. So we need to build models to figure out the most probable results: "If I do this, what do I expect to happen?" Then quantify our uncertainty about that expectation, which is pretty important when, say, patients want to know how long they have to live.
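That "expectation plus uncertainty" workflow can be sketched in a few lines. This is a toy example with made-up exponential survival times (the 24-month mean, sample sizes, and bootstrap counts are all invented for illustration, not real clinical practice):

```python
import random
import statistics

random.seed(1)
# Hypothetical survival times in months, drawn from an exponential
# distribution with a mean of 24 -- purely synthetic data.
times = [random.expovariate(1 / 24.0) for _ in range(200)]

mean = statistics.mean(times)

# Bootstrap: resample the data many times to quantify how much
# our estimate of the expectation could wobble.
boots = []
for _ in range(2000):
    sample = [random.choice(times) for _ in times]
    boots.append(statistics.mean(sample))
boots.sort()
lo, hi = boots[49], boots[1949]  # rough 95% interval

print(f"expected ~{mean:.1f} months (95% CI {lo:.1f}-{hi:.1f})")
```

The point is the shape of the answer: not just "about two years" but "about two years, plus or minus this much."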

Congratulations, AI researchers! You've joined the entire rest of the universe. In that limited sense, the idea that we "don't understand AI" is true. But it's not some unknowable permanent mystery.

On the scale of revolutions in human affairs, I'm still going with stone tools, controlled fire, and agriculture as somewhat bigger deals. On the second tier I'd put writing, machinery that runs on something other than muscle power, and electronics including computers themselves.

I don't say it's *impossible* AI will be on the same scale eventually, but if so it won't be any more of a singularity than the previous big technological shifts. "Our time is unique and nobody else has ever experienced any change this profound!" doesn't have a great track record.

@medigoth LLMs are complicated, but they are not complex systems, like biological life. They don’t have emergent behaviours. Producing surprising strings of text is not the same as emergence. Very large and non-deterministic is not the same as high sensitivity to initial conditions.


@8r3n7 They seem to me like they're pretty sensitive to initial conditions. Sure, they tend to converge on certain behaviors, but so do living systems. And "tend to" is not at all the same thing as "always do."
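As a toy illustration of what "sensitive to initial conditions" means in a fully deterministic system (the logistic map, not an LLM; this is just a sketch of the general idea):

```python
# Two trajectories of the logistic map x -> r*x*(1-x) at r = 4,
# started a mere 1e-10 apart. The rule is one line of deterministic
# arithmetic, yet the tiny gap grows until the trajectories decorrelate.
r = 4.0
x, y = 0.3, 0.3 + 1e-10
gap = []
for _ in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    gap.append(abs(x - y))
# gap starts microscopic and typically reaches order 1 within ~40 steps
```

Whether LLM inference has this property in the strict dynamical-systems sense is exactly the point under debate; the sketch only shows that "deterministic" and "predictable in practice" can come apart.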

@medigoth There are still many things which make me reluctant to put these artifacts into the same category as biological life. The main one is that, absent human input, they are basically inert, which means that you have to include humans in the model to get that level of complexity.

Many of the features of LLMs which make them appear complex already exist in similar form in much simpler programs, such as Conway's Game of Life. But they are all computer programs, running on deterministic von Neumann architectures, where the code and the platform are completely separate.

Unlike life, they do not truly self-modify. Some of these new systems that spawn child processes may have echoes of that, if you consider the totality, but they are not yet changing their own physical matrix. So they seem, to me, at best, only simulations of complex systems.

But it would be good to get a true expert's opinion!

Qoto Mastodon
