Sure, there are things we don't understand about #LLMs. We know how the underlying #code works, and #tokenization, and all that, but the models are so complicated we can't just take them apart and look at them the way we would, say, a big database. That opacity is what leads to unexpected emergent behaviors.
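The parts we do understand really are that simple. Here's a toy greedy longest-match tokenizer in Python (the vocabulary and text are invented for illustration, and real tokenizers like BPE are fancier): everything in it is fully inspectable. What the billions of weights downstream do with those token IDs is the part we can't just read off.

```python
# Toy greedy longest-match tokenizer: this layer of an LLM is simple
# and fully understood. Vocabulary and text are invented for illustration.
VOCAB = {"un": 0, "expect": 1, "ed": 2, "behavior": 3, "s": 4, " ": 5}

def tokenize(text: str) -> list[int]:
    """At each position, consume the longest vocabulary entry that matches."""
    ids, i = [], 0
    while i < len(text):
        match = max(
            (tok for tok in VOCAB if text.startswith(tok, i)),
            key=len,
            default=None,
        )
        if match is None:
            raise ValueError(f"no vocabulary entry matches at position {i}")
        ids.append(VOCAB[match])
        i += len(match)
    return ids

print(tokenize("unexpected behaviors"))  # -> [0, 1, 2, 5, 3, 4]
```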
That reminds me a lot of my job, which boils down to modeling living systems with #math and code. We know the #physics, we know the #chemistry, and we can observe the #biology, but there are a whole lot of layers in between where apparently simple processes lead to remarkably complicated results.
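The textbook illustration of that, if you want something concrete: the logistic map, a one-line population model that goes chaotic as you turn up its growth rate. (A generic sketch, not one of my actual work models.)

```python
# The logistic map: a one-line "population model". Same equation,
# wildly different behavior depending on the growth rate r.
def logistic_trajectory(r: float, x0: float = 0.5, steps: int = 12) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print(logistic_trajectory(2.5))  # settles toward a fixed point near 0.6
print(logistic_trajectory(3.9))  # never settles: deterministic chaos
```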
And? It doesn't mean we don't *understand* living systems; it just means we don't know every single thing that goes on inside them all the time. So we have to #experiment to figure out the most probable results: "If I do this, what do I expect to happen?" Then we quantify our #uncertainty about that expectation, which is pretty important when, say, #cancer patients want to know how long they have to live.
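A minimal sketch of what "quantify our uncertainty" can look like in practice, assuming nothing beyond the Python standard library; the survival times here are simulated toy numbers, not real patient data:

```python
import random
import statistics

# Bootstrap sketch: resample the data with replacement many times and
# see how much the statistic we care about (median survival) wobbles.
# The "data" below are simulated toy numbers, not real patients.
random.seed(0)
survival_months = [random.expovariate(1 / 24) for _ in range(60)]

def bootstrap_median_ci(data, n_boot=10_000, alpha=0.05):
    """Return a rough (1 - alpha) confidence interval for the median."""
    medians = sorted(
        statistics.median(random.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    return medians[int(n_boot * alpha / 2)], medians[int(n_boot * (1 - alpha / 2))]

low, high = bootstrap_median_ci(survival_months)
print(f"median survival: {statistics.median(survival_months):.1f} months "
      f"(95% CI {low:.1f} to {high:.1f} months)")
```

Same expected answer, but reported with an honest range around it instead of false precision.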
Congratulations, #computers! You've joined the entire rest of the universe. In that limited sense, the idea that we "don't understand AI" is true. But it's not some unknowable permanent mystery.
On the scale of revolutions in human affairs, I'm still going with stone #tools, controlled #fire, and #agriculture as somewhat bigger deals. On the second tier I'd put #writing, #machinery that runs on something other than #muscle power, and #electronics including computers themselves.
I'm not saying it's *impossible* that AI will eventually land on that scale, but if it does, it won't be any more of a #singularity than the previous big technological shifts were. "Our time is unique and nobody else has ever experienced any change this profound!" doesn't have a great track record.