One thing to remember about #ml (and, by extension, #ai) is that it is, at the end of the day, a technique for complex function approximation. No more, no less. Think back to the Stone–Weierstrass theorem from your mathematical analysis course; it's the same idea, just on a different scale.
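
A toy illustration of what I mean, in Python (my sketch; the function and the polynomial degree are arbitrary picks, just enough to show the approximation at work):

```python
# Toy Stone–Weierstrass demo: any continuous function on a closed
# interval can be approximated arbitrarily well by polynomials.
import numpy as np

xs = np.linspace(0.0, np.pi, 200)        # a compact interval
target = np.sin(xs)                      # the "unknown" function we sample

coeffs = np.polyfit(xs, target, deg=9)   # fit a degree-9 polynomial
approx = np.polyval(coeffs, xs)

print("max error on [0, pi]:", np.abs(approx - target).max())  # tiny
```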

It is hard to imagine writing down an analytical definition for the "human speech" function, but, amazingly, we can computationally arrive at something that behaves very similarly, and we call our latest take on it "Large Language Models". The impressive thing about this is how unimpressive it really is for what it does.

Looking through that lens, it feels kind of silly to ascribe real intelligence to such models, since they're merely an imitation of the original phenomenon. But it does provoke some reflection on what the existence of such an approximation tells us about the original.

I think it also indicates the limitations of the current generation of AI techniques: they can achieve great (perhaps arbitrarily great) accuracy when interpolating, that is, when working within the part of the information space that is well represented in the training dataset.

However, it's much harder to make assertions about extrapolation accuracy on ideas and knowledge the model has not seen before, never mind ideas completely novel to humanity. To me this is a hint as to why AI is actually pretty bad at creativity: it's not that the models lack some creative faculty, it's that their extrapolation is rather unlikely to match what humans consider creative.
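
To make the interpolation/extrapolation split concrete, here's the same kind of toy fit pushed outside its training range (a stand-in for the claim, not an experiment on any actual model):

```python
# Toy sketch: a fit that is excellent inside its training interval
# and falls apart outside it.
import numpy as np

train_xs = np.linspace(0.0, np.pi, 200)
coeffs = np.polyfit(train_xs, np.sin(train_xs), deg=9)

inside = np.linspace(0.5, 2.5, 50)            # interpolation: within [0, pi]
outside = np.linspace(np.pi, 2 * np.pi, 50)   # extrapolation: beyond it

def max_err(xs):
    return np.abs(np.polyval(coeffs, xs) - np.sin(xs)).max()

print("interpolation max error:", max_err(inside))    # tiny
print("extrapolation max error:", max_err(outside))   # blows up
```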

Does this make #AI useless for any art, or novel research, or other forms of innovation? Not at all, I don't think. For one, all innovation consists of 1% actually new ideas and 99% hard, boring implementation/testing/experimental work, and any help with those 99% could still be massive. And even within that 1%, the random flailing of AI models can inspire humans toward actually useful ideas :)

All of that is to say: AI is just a better brush, and it's silly to pretend it doesn't exist.

@me I don't buy this.

SWT appears to only claim that an LLM *can* do interpolation. But even if I'm wrong here, and interpolation is the only thing an LLM does, it doesn't matter: they are capable of systematically using learned patterns to perform in-context learning and then produce solutions for unseen tasks. And this is a hallmark of intelligence.
Yes, novelty is hard. No, LLMs aren't just replicating old distributions.

@dpwiz@qoto.org nothing you've said seems to contradict what I've said, no? :)

The really interesting question (and the one I am not smart enough to formally answer) is in what space it does its interpolation. My layman understanding is that all the recent advancements come from the fact that the new architectures are able to coax the math into learning in a higher-level space than just the examples seen. So yeah, it does apply learned patterns to examples that fit them.

Problems begin when there is no known pattern that fits the task, which is exactly what innovation and creativity usually deal with :)

@me There is one, thanks for focusing on it in the reply ((=

My claim is that the model training induces meta-learning...

> That was the goal all along - even before LLMs were a thing. OpenAI and DeepMind were on the hunt for making a thing that can learn on the go and adapt. And it looks like we've got this by now.

... and that makes the exact content of its pre-training corpus irrelevant. As long as it can pick up knowledge and skills on the go it is intelligent. And the notion of "interpolation" (even in an insanely high-dimensional space) is irrelevant.

Can we please collectively shut up about stochastic parrots, just regurgitating the data, following the training distribution, interpolation, etc etc?

@dpwiz@qoto.org I think we are talking past each other a bit.

Any machine learning model is, by construction, an approximation of some other function. This isn't a moral judgement, condemnation, or dismissal of what it can achieve. In fact, it's pretty darn amazing what it can achieve without us humans even being able to properly define the function it is learning (what is "intelligence"?).

I even agree that, from what I personally observe, it does seem to construct some sort of knowledge about the world from the data it gets to train on. That's kind of cool on its own, but it also tells us something about what "knowledge" actually is, in a way we can dissect and study. Before LLMs, it was not at all obvious that "knowledge" could have a mathematical representation compact enough for us to play with.

So when I talk about interpolation, I am talking about an LLM's ability to apply that knowledge to a variety of tasks, as long as those tasks are within the scope of what we, humans, normally do. Which, again, is not a dismissal. Sadly, very few people in the world get to do new things, and even those who do spend only a small fraction of their time doing it. Most of the rest of their time is dedicated to the boring chores that are necessary to do the fun bits.

Where I don't expect AI to succeed, at least not in its current form, is creating new knowledge (which is different from extracting existing knowledge). LLM is not going to deliver us cold fusion. LLM is not going to terraform Mars for us. LLM won't merge relativity and quantum physics into a single, unified theory of everything. LLM is not going to solve world hunger. It won't even find a way to make Republicans best buddies with Democrats. Simply because there is no pattern to apply here, it would be "the first ever" kind of thing. This is what I, personally, define as creativity and innovation. This is the area where extrapolation is required.

It's not a given that humans will succeed in any of these particular tasks either; that risk of failure, again, is essential to innovation. But even if LLMs can't do any of those things, they could help us get there faster by dealing with more of the chores. So I'll take it any day.

And if one day we are able to train a model that somehow infers the principles of the universe at such a deep level that we can query it and get answers to all of our questions... That would be cool, although I feel like we'd need a computer the size of the universe to run it :)

@me > as long as those tasks are within the scope of what we, humans, normally do

This is what I'm trying to contest.

> Where I don't expect AI to succeed, at least not in its current form, is creating new knowledge ... Simply because there is no pattern to apply here, it would be "the first ever" kind of thing.

But it... already did. New chips, new drugs, new algorithms... One can try to dismiss that as mere brute-forcing, but I find that distasteful, as the odds of stumbling onto those by brute force are astronomically low.

> (a list of things that a model can't do)

That would not age well :blobcoffee:

What's really missing from your model (haha) is that the models don't work simply by unfolding the prompt ad infinitum. They're in a feedback loop with reality. What they lack in executive function we complement (for now) with the environment. And from what I've seen, the agents are getting closer to actually running as `while True: model.run(world)`. Just as you don't solve math with your cerebellum, the agents don't do "mere interpolation".
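
Spelled out a bit, the loop I mean looks roughly like this (a toy shape; `Model` and `World` are hypothetical stand-ins, not any framework's real API):

```python
# Toy agent loop: the model acts, the environment pushes back,
# and the feedback steers the next step.
class World:
    def __init__(self):
        self.steps = 0
    def observe(self):
        return "initial state"
    def step(self, action):
        self.steps += 1
        return f"state after {action}"     # reality pushes back
    def done(self):
        return self.steps >= 3             # toy stopping condition

class Model:
    def decide(self, observation):
        return f"act({observation})"       # stand-in for inference

def run_agent(model, world):
    obs = world.observe()
    while not world.done():                # the `while True:` with a brake
        action = model.decide(obs)         # apply learned patterns
        obs = world.step(action)           # feedback from the environment
    return obs

print(run_agent(Model(), World()))
```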

@dpwiz@qoto.org

> This is what I'm trying to contest.

Noted. Even if we don't agree in the end, a discussion is a learning tool.

> But it... already did. New chips, new drugs, new algorithms... One can try to dismiss that as mere brute-forcing, but I find that distasteful, as the odds of stumbling onto those by brute force are astronomically low.

To my knowledge, humans were heavily involved in all those examples. It's not like someone fed Knuth's books into an LLM and brand new algorithms suddenly started raining down. Even DeepMind's AlphaZero and friends don't do magic by themselves; rather, the humans put a lot of effort into creating an environment that makes solving a particular task possible. I wouldn't call it brute force, more like guided computation.

Machine learning has been a useful tool in all sorts of scientific fields for decades, so it only makes sense for the tool to get sharper over time.

> That would not age well :blobcoffee:

I mean... If I were making predictions, it wouldn't, but I am simply describing what I see today ;)

I feel like you are under a mistaken impression that I am trying to put AI into one neat pigeonhole, once and for all, thereby defining its boundaries forever. Which I'm not. I wouldn't dare extrapolate my already limited knowledge in this area into an unpredictable future (see what I did there?).

What I am really trying to do is make sense of what the different flavors of AI actually are today, before I even bother making predictions. I am exploring different mental models, thinking out loud, and this thread reflects one of them. Judging by your reaction, it's not a great one, or at least it's a controversial one. But that's fine, I'll just keep on thinking and polluting your feed with the results :P

@me Polluting the feeds is what we're here for 🥂
That, and the thinking ofc.

@dpwiz@qoto.org

> What's really missing from your model (haha) is that the models don't work simply by unfolding the prompt ad infinitum. They're in a feedback loop with reality.

Hard to argue with that. I am aware that agents are a thing, but, quite honestly, I don't understand them well enough to have a useful opinion. From first principles it does seem like having a feedback loop from the universe is a very useful advantage we humans rely on in our quest for knowledge, so it makes sense that granting it to AI agents would produce something noteworthy. But that's about all I can say for now.

Well, that and that online learning seems like an underexplored technique in relation to LLMs.
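
To sketch what I mean by online learning (a toy linear model, nothing LLM-specific; just the update-as-data-arrives shape):

```python
# Toy online learning: one gradient step per example as data streams in,
# instead of training once on a frozen corpus. Illustrative only.
import numpy as np

def stream(n=1000, seed=0):                # stand-in for live data
    rng = np.random.default_rng(seed)
    true_w = np.array([3.0, -1.0])         # hidden target function
    for _ in range(n):
        x = rng.normal(size=2)
        yield x, x @ true_w

w = np.zeros(2)                            # the model starts knowing nothing
lr = 0.1
for x, y in stream():
    err = w @ x - y
    w -= lr * err * x                      # one SGD step per example

print(w)                                   # ~[3, -1], learned on the go
```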

@me The feedback loop is important, as it is the thing that makes multi-pass iterative improvement possible. An LLM-like model is a closed system, and sure, I'll grant that it will bounce around the middle of its probability landscape.
But giving it at least a scratchpad *allows* it to leverage the more powerful and abstract higher-level patterns it has learned. And *this* has no limits on novelty, just as being Turing-complete elevates a system from the level of a thermostat to all the complexity you can eat.

Of course "allows" does not guarantee it would be used effectively. But at least it liberates the system from the tyranny of the "mere interpolation".

@me > what is "intelligence"?

Intelligence is the ability to 1) learn new skills and 2) pick a fitting skill from your repertoire to solve a task.

Rocks don't have this. Thermostats don't have this. Cats have a little. Humans have this. AIs are starting to have it. ASIs would have it in spades.

@dpwiz@qoto.org but, at the end of the day, I'm just a random guy on the internet without any particular qualifications to talk about AI, besides the fact that I've been hanging around people who do have such qualifications and picked some stuff up along the way.

So, ignoring my opinions as uneducated ones is perfectly legitimate.
