
@me > what is "intelligence"?

Intelligence is the ability to 1) learn new skills and 2) pick a fitting skill from your repertoire to solve a task.

Rocks don't have this. Thermostats don't have this. Cats have a little. Humans have this. AIs are starting to have it. ASIs would have it in spades.

@me > as long as those tasks are within the scope of what we, humans, normally do

This is what I'm trying to contest.

> Where I don't expect AI to succeed, at least not in its current form, is creating new knowledge ... Simply because there is no pattern to apply here, it would be "the first ever" kind of thing.

But it... already did. New chips, new drugs, new algorithms... One can try to dismiss that as mere brute-forcing, but I find that distasteful, as the odds against stumbling on those by brute force alone are astronomical.

> (a list of things that a model can't do)

That would not age well :blobcoffee:

What's really missing from your model (haha) is that the models don't work by simply unfolding a prompt ad infinitum. They're in a feedback loop with reality. What they lack in executive function we complement (for now) with the environment. And from what I've seen, agents are getting closer to actually running as `while True: model.run(world)`. Just as you don't solve math with your cerebellum, agents don't do "mere interpolation".
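A minimal sketch of what that feedback loop looks like, with toy `Model`/`World` stand-ins (all names here are hypothetical; a real agent would swap in an LLM call and actual tools):

```python
class World:
    """Toy environment: the task completes after a few corrective steps."""
    def __init__(self):
        self.steps = 0

    def observe(self):
        return f"state after {self.steps} steps"

    def apply(self, action):
        self.steps += 1
        return self.steps >= 3  # feedback from reality: done or not


class Model:
    """Stand-in for an LLM: maps an observation to an action."""
    def act(self, observation):
        return f"act on ({observation})"


def run(model, world, max_steps=10):
    # The loop, not a single forward pass, is what closes the
    # circuit with reality: act, observe the result, act again.
    for _ in range(max_steps):
        action = model.act(world.observe())
        if world.apply(action):
            break
    return world.steps


print(run(Model(), World()))  # -> 3
```

The point of the sketch: the "executive function" lives in the loop and the environment's feedback, not inside the model's weights.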

@me There is one, thanks for focusing on it in the reply ((=

My claim is that the model training induces meta-learning...

> That was the goal all along - even before LLMs were a thing. OpenAI and DeepMind were on the hunt for making a thing that can learn on the go and adapt. And looks like we've got this by now.

... and that makes the exact content of its pre-training corpus irrelevant. As long as it can pick up knowledge and skills on the go it is intelligent. And the notion of "interpolation" (even in an insanely high-dimensional space) is irrelevant.

Can we please collectively shut up about stochastic parrots, just regurgitating the data, following the training distribution, interpolation, etc etc?

@me I don't buy this.

SWT appears to only claim that an LLM *can* do interpolation. But even if I'm wrong here and interpolation is the only thing an LLM does, it doesn't matter: they are capable of systematically using learned patterns to perform in-context learning and then produce solutions for unseen tasks. And that is a hallmark of intelligence.
Yes, novelty is hard. No, LLMs aren't just replicating old distributions.
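A toy analogy for that "in-context learning" claim (not an actual LLM, and the linear task is a hypothetical stand-in): the mapping is inferred from examples supplied at inference time, not baked into any weights, and then applied to an input never seen before.

```python
def solve_from_context(examples, query):
    """Infer y = a*x + b from two in-context examples, then
    apply the inferred pattern to an unseen query."""
    (x1, y1), (x2, y2) = examples
    a = (y2 - y1) / (x2 - x1)
    b = y1 - a * x1
    return a * query + b


# The "context" carries the task; nothing about this particular
# mapping was fixed ahead of time.
print(solve_from_context([(1, 3), (2, 5)], 10))  # -> 21.0
```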

@reidrac Get 120 Hz first - much more pleasant than 4K@60.

(Also, Gigabyte m32u has the whole package for a fair price.)

@ericflo npm for system-level apps is cringe though.

In theory, OpenHands could show the way by integrating with native PMs. But given their resource constraints I'd rather "first install docker" and let them focus on their product.

OTOH, OpenAI and the rest should dedicate resources to make their stuff available through the OS-native packaging.

On the other "other hand" (genai, yay!), perhaps their agents are most useful for writing JS, so npm is fine.
Why not pypi though? /s

@simonmic My dear lord, the priorities... They have that insane bug with "taking its time to search for ignore files" and yet the top of their work list is "add some nice WASM feature" 🤯

@Blerkotron I've certainly seen "realms of unmaintainable code" long before LLMs. The "guys" were the managers and the AIs were the juniors/contractors.

The problem is with the board. The AI slopification is just another turn of the "we can do everything in ten minutes just by delegating to a sweatshop".

And perhaps with the industry, which makes running such boards profitable.

@ramin_hal9001 @das_g > The outside world is a projection, you put it there. It is not happening out there, it is happening inside your head. It is, in fact, a dream, exactly like when you fall asleep. We need to see, we need to perceive, we need to dream actively, because this is the only way we can take this huge universe and put it inside a very tiny head. We fold it, make an image, and then project it out.

@wilfredh "You are writing legacy code *right now*."

> fish: Unsupported use of '='. In fish, please use 'set ....

You know what I used.
You know what that means.
Then why the hell are you pestering me with suggestions to use something else instead of just fucking doing it?

@Sicoaxial @misterdave Nah, Haskell is fine as a first language too. Even better that way if you can bear it.

- We ought to understand the world through erotic and reason.
- Did you mean logic and reason?
- No.

Qoto Mastodon
