One thing to remember about #ml (and, by extension, #ai) is that it is, at the end of the day, a technique for complex function approximation. No more, no less. Think back to the Stone–Weierstrass theorem from your mathematical analysis course, just on a different scale.
It is hard to imagine writing down an analytical definition for the "human speech" function, but, amazingly, we can computationally arrive at something that behaves very similarly, and we call our latest take on it "Large Language Models". The impressive thing about this is how unimpressive it really is for what it does.
When looking through that lens, it feels kind of silly to ascribe real intelligence to such models, since they are merely an imitation of the original phenomenon. But it does provoke some reflection on what the existence of such an approximation tells us about the original.
I think it also indicates the limitations of the current generation of AI techniques: they can achieve great (perhaps arbitrarily great) accuracy when interpolating, that is, when working within the parts of the information space well represented in the training dataset.
However, it's much harder to make assertions about extrapolation accuracy, that is, about ideas and knowledge the model hasn't seen before, never mind ideas completely novel to humanity. To me this hints at why AI seems pretty bad at creativity: not because it cannot produce novel output, but because its extrapolation is rather unlikely to match what humans consider creative.
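To make that interpolation/extrapolation gap concrete, here's a toy sketch in the spirit of Weierstrass, with a polynomial fit standing in for a model (the exact numbers will vary with the seed, but the shape of the result won't):

```python
# Fit a polynomial to sin(x) on [0, 2*pi], then ask it about points it
# has never seen. An illustrative toy, not an LLM.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 2 * np.pi, 200)       # the "training distribution"
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=9)   # the "model": a polynomial fit

x_in = np.linspace(0, 2 * np.pi, 50)           # inside the training range
x_out = np.linspace(3 * np.pi, 4 * np.pi, 50)  # far outside it

err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))
err_out = np.max(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))
print(f"interpolation error:  {err_in}")   # tiny
print(f"extrapolation error: {err_out}")   # enormous: nothing constrains
                                           # the fit outside the data
```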
Does this make #AI useless for any art, or novel research, or other forms of innovation? Not at all, I don't think. For one, all innovation consists of 1% actually new ideas and 99% hard and boring implementation/testing/experimental work, and any help with that 99% could still be massive. And even within the 1%, the random flailing of AI models can inspire humans toward actually useful ideas :)
All of that is to say: AI is just a better brush, and it's silly to pretend it doesn't exist.
@me I don't buy this.
SWT appears to only claim that an LLM *can* do interpolation. But even if I'm wrong here and interpolation is the only thing LLMs do, it doesn't matter, as they are capable of systematically using learned patterns to perform in-context learning and then produce solutions for unseen tasks. And this is a hallmark of intelligence.
Yes, novelty is hard. No, LLMs aren't just replicating old distributions.
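To pin down what I mean by in-context learning, a minimal sketch (the `llm` call is a hypothetical stand-in for any capable model, not a real API):

```python
# The reversal pattern below is never baked into the weights; it is
# induced from examples supplied at inference time.
prompt = """\
Input: cat  -> Output: tac
Input: drow -> Output: word
Input: loop -> Output:"""

# answer = llm(prompt)  # hypothetical call; a capable model completes
#                       # "pool" with no weight update whatsoever -
#                       # the task is picked up on the fly
```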
@dpwiz@qoto.org nothing you've said seems to contradict what I've said, no? :)
The really interesting question (and the one I am not smart enough to formally answer) is in what space it does its interpolation. My layman understanding is that all the recent advancements are thanks to the fact that the new architectures are able to coax the math into learning in a higher-level space than just the raw examples seen. So yeah, it does apply learned patterns to examples that fit them.
Problems begin when there is no known pattern that fits the task, which is exactly what innovation and creativity usually deal with :)
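As a toy illustration of what interpolating in a learned, higher-level space can look like (the vectors below are made up for the example; real embeddings are learned from data):

```python
# With vector embeddings, arithmetic in the learned space can land on a
# meaningful third concept - something impossible in raw-token space.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(v):
    """Word whose embedding has the highest cosine similarity to v."""
    return max(emb, key=lambda w: v @ emb[w] / (np.linalg.norm(v) * np.linalg.norm(emb[w])))

# The classic analogy, expressed as arithmetic in the learned space:
print(nearest(emb["king"] - emb["man"] + emb["woman"]))  # -> "queen" (by construction here)
```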
@me There is one, thanks for focusing on it in the reply ((=
My claim is that the model training induces meta-learning...
> That was the goal all along - even before LLMs were a thing. OpenAI and DeepMind were on the hunt for making a thing that can learn on the go and adapt. And it looks like we've got this by now.
... and that makes the exact content of its pre-training corpus irrelevant. As long as it can pick up knowledge and skills on the go it is intelligent. And the notion of "interpolation" (even in an insanely high-dimensional space) is irrelevant.
Can we please collectively shut up about stochastic parrots, just regurgitating the data, following the training distribution, interpolation, etc etc?
@me > as long as those tasks are within the scope of what we, humans, normally do
This is what I'm trying to contest.
> Where I don't expect AI to succeed, at least not in its current form, is creating new knowledge ... Simply because there is no pattern to apply here, it would be "the first ever" kind of thing.
But it... already did. New chips, new drugs, new algorithms... One can try to dismiss that as mere brute-forcing, but I find that distasteful, as the odds of finding those by chance are astronomically slim.
> (a list of things that a model can't do)
That would not age well
What's really missing from your model (haha) is that the models don't work simply by unfolding the prompt ad infinitum. They're in a feedback loop with reality. What they miss in executive function we complement (for now) with the environment. And from what I've seen, the agents are getting closer to actually running as `while True: model.run(world)`. Just as you don't solve math with your cerebellum, the agents don't do "mere interpolation".
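Something like this, with hypothetical `model` and `world` objects standing in for a real agent stack (gym-style reset/step methods assumed for illustration):

```python
# A minimal sketch of the feedback loop: the model acts, reality
# responds, and the response feeds the next step.
def run_agent(model, world, max_steps=100):
    observation = world.reset()
    for _ in range(max_steps):
        action = model.decide(observation)       # one "unfolding" step
        observation, done = world.step(action)   # reality pushes back
        if done:                                 # the loop, not the unfold,
            break                                # is where the leverage is
```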
@me Polluting the feeds is what we're here for 🥂
That, and the thinking ofc.
> What's really missing from your model (haha) is that the models don't work simply by unfolding the prompt ad infinitum. They're in a feedback loop with reality.
Hard to argue with that. I am aware that agents are a thing, but, quite honestly, I don't understand them well enough to have a useful opinion. From first principles it does seem like having a feedback loop with the universe is a very useful advantage we, humans, rely on in our quest for knowledge, so it makes sense that granting it to AI agents would produce something noteworthy. But that's about all I can say for now.
Well, that and that online learning seems like an underexplored technique in relation to LLMs.
@me The feedback loop is important as it is the thing that makes multi-pass iterative improvement possible. An LLM-like model is a closed system and sure, I'll grant that it will bounce around the middle of its probability landscape.
But giving it at least a scratchpad *allows* it to leverage the more powerful and abstract higher-level patterns it learned. And *this* has no limits on novelty, just like being Turing-complete elevates a system from the level of a thermostat to all the complexity you can eat.
Of course "allows" does not guarantee it would be used effectively. But at least it liberates the system from the tyranny of the "mere interpolation".
@dpwiz@qoto.org
> This is what I'm trying to contest.
Noted. Even if we don't agree in the end, a discussion is a learning tool.
> But it... already did. New chips, new drugs, new algorithms... One can try to dismiss that as mere brute-forcing, but I find that distasteful, as the odds of finding those by chance are astronomically slim.
To my knowledge, in all those examples humans were heavily involved. It's not like someone fed Knuth's books into an LLM and it suddenly started raining brand-new algorithms. Even DeepMind's AlphaZero and friends don't just do magic by themselves; rather, humans put a lot of effort into creating an environment that makes solving a particular task possible. I wouldn't call it brute force, more like guided computation.
Machine learning has been a useful tool in all sorts of scientific fields for decades, so it only makes sense for the tool to get sharper over time.
> That would not age well :blobcoffee:
I mean... If I were making predictions, it wouldn't, but I am simply describing what I see today ;)
I feel like you are under the mistaken impression that I am trying to put AI into one neat pigeonhole, once and for all, thereby defining its boundaries forever. Which I'm not. I wouldn't dare extrapolate my already limited knowledge in this area into the unpredictable future (see what I did there?).
What I am really trying to do is make sense of what the different flavors of AI really are today, before I even bother making any predictions. I am exploring different mental models, thinking out loud, and this thread reflects one of them. Judging by your reaction, it's not a great one, or at the very least a controversial one. But that's fine, I'll just keep on thinking and polluting your feed with the results :P