@bascule Unfortunately the numbers don't add up and our collective goose is about to get pretty damn cooked. I hope we'll survive and avoid collapse, but the situation looks "too little, too late" dire.

@6d03 ... in the darkness >>= them
In the Land of Mordor where the types lie.

@someodd It looks... reasonable? You can shuffle a few calls here and there, but it's just a few lines of code.

@evacide Call your Republican representative too (if you happen to have one).

@omgubuntu Respect for not abandoning the issue even after all this time.
Some repos have auto-close policies and it's so fucking annoying! :AngeryCat:

:drake_dislike: switching to "main" for the sake of political correctness and "sending a clear message"
:drake_dislike: insisting on "master" out of spite toward political correctness and language policing
:drake_like: adopting "trunk" from SVN to avoid arguing with both camps, honor an innovative¹ VCS of the time, and have consistent terminology (branches in a real tree grow from the trunk)

¹ back then most software projects used CVS with file locking, while SVN offered merging and a generally much saner interface resembling Git's.

From the cult classic "2024":

> The Resistance has always been at war with the Establishment.

@david_chisnall I talk about what I've seen first-hand. Early models were indeed shit to the point of being useless.
The original GPT was barely coherent. GPT-2 required a prompt chock-full of examples and a boatload of crutches to keep its shit together. GPT-3 (and LLAMA-2) still require a non-trivial amount of guidance, but they start getting somewhere (the 41% of correct summaries measured there is no joke! You can guesstimate how many orders of magnitude that is above random guessing. Spoiler: a fucking lot, given the combinatorial dimensionality of language.)
I don't have the numbers for 4th-gen models, but I checked the publications before replying, and some report a problem benchmarking summaries: the score difference w.r.t. humans is difficult to raise above the noise.
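
To make "a fucking lot" concrete, here's a back-of-the-envelope sketch; the vocabulary size and summary length are my own illustrative assumptions, not numbers from the benchmark:

```python
import math

# Illustrative assumptions (mine, not from any benchmark):
vocab_size = 50_000   # tokens in a typical LLM vocabulary
summary_len = 50      # tokens in a short summary

# Probability of hitting one specific summary by uniform random sampling:
#   p = (1 / vocab_size) ** summary_len
log10_p = -summary_len * math.log10(vocab_size)
print(f"log10(p) ≈ {log10_p:.0f}")  # ≈ -235: ~235 orders of magnitude below certainty
```

Even granting a random baseline vastly better than uniform token sampling, 41% correct sits an absurd distance above chance.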

@david_chisnall Eh.. For the same reason it does *anything at all*?
I think the silliest of them all, the handwritten-digit recognition models, already learn the "nuance" and inter-relationships between pixel values while discarding noise and unimportant variations.
What exactly is the "context" problem you think is intractable, if not that?
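
A minimal sketch of that point (my choice of dataset and model, purely illustrative): even a plain linear classifier over raw pixels picks up enough of the inter-pixel structure of digits to classify them, while per-pixel noise washes out. This uses scikit-learn's small bundled 8x8 digits set as a toy stand-in for MNIST.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Small bundled 8x8 digits set, a toy stand-in for MNIST.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A plain linear model over raw pixel values, no feature engineering.
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")  # ~0.96 on this toy set
```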

@alan Try a better model. Claude can give pretty good *explanations*, not just "greater than five" like Gemini (and the open-weights crap) do.

@david_chisnall I'm surprised by your claim. Of course summarization is one of the training disciplines, so the models should be good at it. And the growing demand should drive more resources into it, pushing the state of the art even further.

And the linked post is misleading. Testing (and even reporting) LLAMA-2-level models in mid-2024 as "most promising" is... meh. The title should be extended with "... in mice".

@tante Nah, it's just a regurgitation of ancient (by now) talking points. Nothing new in there.

@dsp @gregeganSF I actually agree with Ted's (i.e. F. Chollet's) definition: intelligence is the ability to master new tasks and to pick the appropriate mastery for the task at hand.

This swiftly denies intelligence to calculators and thermostats, grants it to humans, and even expands it to "collective intelligence" and, yes, "artificial intelligence".

@Shamar Of course it is hallucinated. This is how a brain works. I just took time to prune particularly outrageous candidates for the next word 😅

@gregeganSF Wow. The text is so profoundly myopic I'm surprised it came from an author who writes about alien minds. Then again, maybe I shouldn't be: he may have a knee-jerk reaction to encroachment on his turf, language.

The opening shot is a masterpiece: "Whatever the art is, AIs sure can't do it".
And I love the illustration too.

But this assertion is misplaced.
It all follows from the omission of what kind of AIs we're talking about. The headline reads as the broad claim "whatever the AI is, it can't do art (whatever that is)".
It then gets narrowed down to "the current generation of commercial/public LLMs attached to a chat interface".

The core point is that such a system can't make choices that would be "its own", and that it just autocompletes the user's input from the internet corpus.

That's a load-bearing "just", but I'll let it slide.

Amusingly, Ted goes on a side quest about training efficiency, which is irrelevant to the central claim; but it stands out how the author of "alien mind" stories fails to recognize that the thing under inspection is different from animal brains.

Anyway, the claim gets backed by the assumption that there's no light inside, thus no one to make the choices that constitute art (a package of choices made, by the essay's own definition).

And this is where he trips over that silent narrowing of AI.
Sure, public chat-like models aren't agentic. If anything, they are specifically steered away from being anything like that and into the autocomplete realm, since that's where the commercial interest lies. And, as he correctly points out, there's demand for "no effort, only demands", and the corps are happy to oblige.

In a way, he wanders into a wallpaper market in search of a poetry club. Yes, the corpos are selling the idea of creativity to users, which is arguably dishonest. But such is the marketing culture of the day. Nevertheless, the inverse of a bad take doesn't make good support for his claims.

At least he does recognize the emerging sub-genre of "let's sift through boatloads of generated slop and maybe try to nudge it somewhere interesting". And with the growing interest, tool support will come, that's for sure.

Sorry, I'm meandering again, gotta write more essays (=

In the end the claim narrows down to "it would take more than a few years to build a truly autonomous system that makes salient choices to produce quality art pieces".
Now that's a grounded prediction for which evidence can be collected. And I would love to see an essay/paper that does just that.
But that is very, very far from what the headline says.
Instead of delving into (sorry) the topic, Ted produced some textual slop by rehashing already stale claims. Ironic.
