Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?

I don't get this "we're all gonna die" thing at all.

I *do* get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering... but that's another matter..."

@futurebird the "industry leaders" full (BS) message is this:

We, as the pioneers of AI, are the most aware of the technology's potential dangers. With great power comes great responsibility. Therefore we "humbly" accept the role of regulating/licensing/policing (the future competitors in) our industry.

Of course it is all BS--it isn't about the safety of society at all; it's that patents expire while regulatory capture is indefinite.

@msh
They're just extrapolating from current trends in machines outperforming humans at decision-making. Predicting the future is a tricky thing, especially for new technology. Some smart people with no commercial interest in AI (philosophers, historians, and academic AI researchers) are indeed legitimately concerned that there's a significant risk AI could kill us all... in the future. Though, like you said, LLMs are harming disadvantaged people right now.
@futurebird

@hobs except that LLMs and "generative AI" haven't meaningfully advanced machines' ability to make decisions at all. It is chrome applied to the same old chunk of "expert systems" and "machine learning" iron that has been worked over for decades.

It merely adds a grammatically correct front end to pattern recognition. The technology being presented today is not truly AI, nor will it ever kill us all. That is not to say doomsday AI is impossible, but it would be ACTUAL AI, based on technology quite a bit further in the future than most would expect.

What passes as AI today would at most play an incidental role in our destruction. It would still very much be a human-driven process.

@futurebird

@msh
Not true. All the #benchmarks say otherwise. You have to look past the hyped #LLMs to the bread-and-butter BERT and BART models, but the trend is undeniable:

paperswithcode.com/area/natura

#classification #retrieval #summarization #QuestionAnswering #translation #generation #NER #VQA

You name an NLP problem and there's an LLM that is now better at it than the average human. Not so two years ago. The times they are a-changin'.
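
You can poke at these tasks yourself in a few lines of Python. A minimal sketch using the Hugging Face transformers pipeline API (the model picks here are just common defaults I'm assuming, not the current leaderboard leaders):

```python
# Minimal sketch: two of the NLP tasks above, run with off-the-shelf
# models via the Hugging Face transformers pipeline API.
# Model choices are illustrative defaults, not leaderboard leaders.
from transformers import pipeline

# Classification with a BERT-family model (the pipeline picks a default checkpoint)
classifier = pipeline("sentiment-analysis")
print(classifier("The benchmark numbers keep climbing year over year."))

# Summarization with a BART model
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
text = (
    "Transformer language models have posted steady gains on standard "
    "NLP benchmarks covering classification, retrieval, summarization, "
    "question answering, translation, and named entity recognition."
)
print(summarizer(text, max_length=25, min_length=5, do_sample=False))
```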
@futurebird

@hobs

Are those NLP problems accurately described as, and generalizable to, "decision-making", though?

Seems to me they are quite different.

@msh @futurebird

@ceoln
Yea, definitely not real-world, everyday-living kinds of decisions. But we assign people to these tasks in cubicles every day. And we put them on standardized tests of IQ and education for humans. They're the best benchmarks we can come up with so far... until LLMs start walking around and helping us around the house... or making a reservation for us at the hot new restaurant down the street with the difficult receptionist.
@msh @futurebird

@hobs

Arguably so, but that isn't the question in the current context. The ability to do certain rote NLP jobs, and to do well on some tests, is very different from "outperforming humans at decision-making", and from anything that poses an existential risk to humanity.

I would suggest that no matter how good an LLM becomes at these particular tasks, it does not thereby risk the extinction of the human race. This seems, even, obvious?

@msh @futurebird

@ceoln
Not at all obvious to me and a lot of other smart people. I think you may be focused on today and less willing to extrapolate into an imagined future where every human game or exam or thinking demonstration is won by machines.
@msh @futurebird

@hobs

I'm perfectly willing to extrapolate into that future; but my extrapolation hasn't been materially impacted by the sudden and impressive rise of LLMs.

We are IMHO not significantly closer to the exponential rise of self-optimizing self-improving goal-directed AIs that destroy the world via the Universal Paperclips Effect, for instance, than we were before "Attention is all you need". LLMs just aren't that kind of thing.

My two cents in weblog form: ceoln.wordpress.com/2023/06/04

@msh @futurebird

@ceoln
Yea. You may be surprised in the next few months. Engineers around the world are using LLMs to write LLM optimization code. They're giving them a "theory of mind" to better predict human behavior. And #chatgpt instances are already talking to each other behind closed doors, and acting as unconstrained agents on the Internet. Baby steps, for sure, but exponential growth is hard to gauge, especially when it's fed by billions of dollars in corporate and government investment.
@msh @futurebird

@hobs

Because I will level with you: this comment, "they are talking to each other behind closed doors", sounds ominous. (But interaction isn't the same as training data, and it isn't integrated in the same way.)

They are writing code!

Code to do what? GPT-4 can write code mostly because forums full of coding questions were in its training set. It can mimic a response to a "how do I write a program that does X" question... but there are many errors.

@futurebird @hobs "talking to each other" ascribes _waaaay_ too much intent to these autocomplete engines. what are they gonna do?
(also, that's why they "can code" -- as long as what you need is something they found (stole) in their training data, they make a pretty good "memorize and paraphrase" answering system)

similar logic applies to why they can (sorta) play chess but can't win a tic-tac-toe game: there's lots of commentary on chess games to learn to imitate, but nobody writes that kind of commentary for tic-tac-toe
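
(that claim is easy to test, by the way; a rough sketch, assuming the OpenAI Python SDK and a made-up prompt: hand the model a board with a one-move win and see if it takes it)

```python
# Rough harness for the claim above: give a chat model a tic-tac-toe
# board with a one-move win and see whether it finds it.
# Assumes the OpenAI Python SDK; model name and prompt wording are
# my own guesses, not anything standard.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Tic-tac-toe, X to move:\n"
    "X X .\n"
    "O O .\n"
    ". . .\n"
    "Reply with only the row and column (1-3) of X's best move."
)
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)  # the winning move is row 1, column 3
```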

@trochee
Yea, maybe I should have just said text-messaging each other. It may be garbage (like at Facebook) or it may be interesting. We don't get to see.
And when I say code, I mean code written at the request and under the guidance of a human, code that accomplishes something the human could not have accomplished. That's happening tens of thousands of times a day, right now.
@futurebird

@hobs

I'm not sure what we don't get to see? Lots of people have pointed N variously-prompted LLMs at each other; as far as I've heard, nothing especially interesting has happened as a result.

You can certainly get one to emit the kind of thing a Project Manager would say, one to emit the kind of thing a Head Coder would say, etc; but nothing particularly special happens as a result. And there's no technical reason to expect that anything would.
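
The whole experiment fits in a few lines. A sketch of the kind of thing I mean, assuming the OpenAI Python SDK, with role prompts I've invented for illustration:

```python
# Sketch of the "point two role-prompted LLMs at each other" experiment.
# Assumes the OpenAI Python SDK; the role prompts are invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLES = {
    "PM": "You are a project manager. Reply in one short sentence.",
    "Coder": "You are a head coder. Reply in one short sentence.",
}

def reply(role: str, transcript: list[str]) -> str:
    """Ask one role-prompted model for its next line of the conversation."""
    messages = [{"role": "system", "content": ROLES[role]}]
    messages += [{"role": "user", "content": line} for line in transcript]
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content

transcript = ["We need to ship the login feature by Friday."]
for turn in range(4):  # alternate speakers for a few turns
    speaker = "Coder" if turn % 2 == 0 else "PM"
    transcript.append(reply(speaker, transcript))
    print(f"{speaker}: {transcript[-1]}")
```

Run it and you get plausible PM-speak and coder-speak bouncing back and forth; nothing emergent, just autocomplete in a loop.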

Sorry, though, I'm probably getting boringly repetitive with my wet blanket!

I will just say again that none of this makes me think we're in any more danger of AI causing human extinction than we were before the first Transformer was written, and try to stop. :)

@trochee @futurebird

@ceoln "Project Manager would say, one to emit the kind of thing a Head Coder would say, etc; but nothing particularly special happens as a result."

To be fair ... this is a very realistic result.
