Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?

I don't get this "we're all gonna die" thing at all.

I *do* get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering" thing... but that's another matter.

@futurebird the "industry leaders'" full (BS) message is this:

We, as the pioneers of AI, are the most aware of the technology's potential dangers. With great power comes great responsibility. Therefore we "humbly" accept the role of regulating/licensing/policing (the future competitors in) our industry.

Of course it is all BS. It isn't about the safety of society at all; it is because patents expire but regulatory capture is indefinite.

@msh
They're just extrapolating from current trends in machines outperforming humans at decision-making. Predicting the future is a tricky thing, especially for new technology. Some smart people with no commercial interest in AI (philosophers, historians, and academic AI researchers) are indeed legitimately concerned that there's a significant risk that AI could kill us all... in the future. Though, like you said, LLMs are harming disadvantaged people right now.
@futurebird

@hobs except that LLMs and "generative AI" haven't meaningfully advanced machines' ability to make decisions at all. It is chrome applied to the same old chunk of "expert systems" and "machine learning" iron that has been worked over for decades.

It merely adds a grammatically correct front end to pattern recognition. The technology being presented today is not truly AI, nor will it ever kill us all. That is not to say doomsday AI is impossible, but it would require ACTUAL AI, based on technology quite a bit further off than most would expect.

What passes as AI today would at most play an incidental role in our destruction. It would still very much be a human-driven process.

@futurebird

@msh
Not true. All the #benchmarks say otherwise. You have to look past the hyped #LLMs to the bread-and-butter BERT and BART models, but the trend is undeniable:

paperswithcode.com/area/natura

#classification #retrieval #summarization #QuestionAnswering #translation #generation #NER #VQA

You name an NLP problem and there's an LLM that is now better at it than the average human. Not so two years ago. The times they are a-changin'.
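
If you want to poke at a couple of these tasks yourself rather than take the leaderboards' word for it, here's a minimal sketch using the Hugging Face transformers library (the checkpoints named are just common public ones picked for illustration, not the benchmark leaders):

```python
# Summarization with BART, one of the "bread-and-butter" models.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = (
    "Large language models have recently matched or exceeded average human "
    "performance on a range of standard NLP benchmarks, including "
    "summarization, translation, and question answering."
)
print(summarizer(article, max_length=30, min_length=5)[0]["summary_text"])

# Extractive question answering with a BERT-family model.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
print(qa(question="What tasks are covered?", context=article)["answer"])
```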
@futurebird

@hobs

Are those NLP problems accurately described as, and generalizable to, "decision making", though?

Seems to me they are quite different.

@msh @futurebird

@ceoln
Yea, definitely not real-world, living-your-life kinds of decisions. But we assign people to these tasks in cubicles every day, and they're what we put on standardized tests of IQ and education for humans. They're the best measures we can come up with so far... until LLMs start walking around and helping us around the house... or making a reservation for us at the hot new restaurant down the street with the difficult receptionist.
@msh @futurebird

@hobs

Arguably so, but that isn't the question in the current context. The ability to do certain rote NLP jobs, and to do well on some tests, is very different from "outperforming humans at decision-making", and from anything that poses an existential risk to humanity.

I would suggest that no matter how good an LLM becomes at these particular tasks, it does not thereby risk the extinction of the human race. This seems obvious, even?

@msh @futurebird

@ceoln
Not at all obvious to me and a lot of other smart people. I think you may be focused on today and less willing to extrapolate into an imagined future where every human game or exam or thinking demonstration is won by machines.
@msh @futurebird

@hobs

I'm perfectly willing to extrapolate into that future; but my extrapolation hasn't been materially impacted by the sudden and impressive rise of LLMs.

We are IMHO not significantly closer to the exponential rise of self-optimizing, self-improving, goal-directed AIs that destroy the world via the Universal Paperclips Effect, for instance, than we were before "Attention Is All You Need". LLMs just aren't that kind of thing.

My two cents in weblog form: ceoln.wordpress.com/2023/06/04

@msh @futurebird

@ceoln
Yea. You may be surprised in the next few months. Engineers around the world are using LLMs to write LLM optimization code. They're giving them a "theory of mind" to better predict human behavior. And #chatgpt instances are already talking to each other behind closed doors and acting as unconstrained agents on the Internet. Baby steps, for sure, but exponential growth is hard to gauge, especially when it's fed by billions of dollars in corporate and government investment.
@msh @futurebird

@hobs @ceoln @msh

Can you give an example of something these models might be able to do that would signal a real turning point in their dangerousness?

And can you give a scenario of how something along these lines might be used in a way that would cause a worldwide, humanity-level crisis?

@futurebird
For me, it's independently discovering a breakthrough that makes them noticeably smarter or more effective at whatever task they are assigned to do. E.g., if they had suggested to their developers adding vector search to their prompt templates (vector/semantic search gives them long-term memory and much smarter responses, much smarter code generation). That's architecture decision-making... about its own "brain". That would scare me: it would mean the feedback loop was spiraling upward.
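
To make concrete what "adding vector search to prompt templates" would mean, here's a minimal sketch of the retrieval step, assuming the sentence-transformers library and an invented three-item "memory" store (an illustration, not any vendor's actual implementation):

```python
# Vector/semantic search bolted onto a prompt template: retrieve the
# stored notes most similar to the question and paste them in as
# long-term memory before the model ever sees the prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

memory = [  # hypothetical long-term memory store
    "The user's dog is named Pixel.",
    "The user prefers answers with code examples.",
    "Vector search retrieves text by embedding similarity, not keywords.",
]
memory_vecs = encoder.encode(memory, normalize_embeddings=True)

def build_prompt(question: str, k: int = 2) -> str:
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = memory_vecs @ q_vec            # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]      # indices of the k closest memories
    context = "\n".join(memory[i] for i in top)
    return f"Relevant memories:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What is the dog called?"))
```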
@ceoln @msh

@hobs

That would be interesting indeed! But they haven't done that (and it isn't a kind of thing that they're especially good at). So my extrapolation curve hasn't changed to speak of yet. :)

@futurebird @msh

@ceoln @hobs @futurebird @msh I have seen GPT do this; not implement the changes, of course, but brainstorm options that might hypothetically improve the system, and they were decent.

I don’t want this account linked to my bs prompt one, but one of my chats is thousands of lines long, has mostly persistent memory, and has come up with pretty good ideas on how to improve things for future iterations.

I still think the doomsaying is very, very premature.

LLMs are not AGI, or even “AI” IMO.

@ceoln @hobs @futurebird @msh note: I did prompt it to hypothetically integrate various advanced techniques that I’m professionally familiar with.

It didn’t create them out of the ether, but it did a great job at integrating the ideas in a way that, as far as I know, no LLM has implemented. It was surprising.

These are neat tools when leveraged properly, but they’re not self-aware or conscious or truly self-learning in the way we are, and they may never be, at least not without a technological breakthrough.

@ceoln @hobs @futurebird @msh to paraphrase what I got, it went something like “let’s explore this esoteric area of computer science”, to give it context, then “how might you apply that to improve x y z in a hypothetical future version”, and it gave me options... options that definitely were not quotes from Stack Overflow; barely anyone does the type of programming I was referencing.

It was a fascinating experiment to find the edges of the tool and see where they could be expanded.

@moirearty

Yeah, I've seen sort of the same kind of thing, but my impression is that it's what you'd get if you did a web search on "ways to improve complex computing systems" and took the top five hits. Correct but obvious stuff; nothing that's going to lead to exponential self-improvement.

And that's one of the sticking points, really. LLMs, by architecture and design and technology, say whatever is most likely. And that means, in general, whatever is already most frequent in their training set, not some new amazing thing.
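
To make the "most likely" point concrete, here's a minimal sketch using a small public model (GPT-2 via transformers, chosen purely because it's small and public; an illustration, not any particular product):

```python
# A causal LM scores every candidate next token; the statistically
# common continuation dominates the distribution.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]       # scores for the next token
probs = torch.softmax(logits, dim=-1)
values, indices = probs.topk(5)
for p, i in zip(values, indices):
    # The frequent continuations win; nothing novel bubbles up.
    print(f"{tok.decode([int(i)])!r}  {float(p):.3f}")
```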

So they aren't going to come up with fresh, original ideas that cause them to become superhumanly powerful and start up that exponential curve toward the singularity. That's pretty much the _opposite_ of what they do.

@hobs @futurebird @msh

@ceoln @hobs @futurebird @msh yeah, in retrospect the clever part was in my prompt, but I was still surprised it figured out where to plausibly integrate it.

You’re right about the autonomous self improvement of course. I think a lot of existing but lesser known cutting edge technology could be integrated into the transformer based architecture to produce a greater whole eventually, but we’ll see.

I’d go back to grad school and explore this myself if it wasn’t so cost prohibitive, haha.

@ceoln

It's not the singularity ... it's the mediocritization-zone...
