Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?

I don't get this "we're all gonna die" thing at all.

I *do* get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering" thing... but that's another matter...

@futurebird the "industry leaders" full (BS) message is this:

We, as the pioneers of AI, are the most aware of the technology's potential dangers. With great power comes great responsibility. Therefore we "humbly" accept the role of regulating/licensing/policing (the future competitors in) our industry.

Of course it is all BS--it isn't about the safety of society at all; it is because patents expire while regulatory capture is indefinite.

@msh
They're just extrapolating from current trends in machines outperforming humans at decisionmaking. Predicting the future is a tricky thing, especially for new technology. Some smart people with no commercial interest in AI (philosophers, historians and academic AI researchers) are indeed legitimately concerned that there's a significant risk that AI could kill us all... in the future. Though, like you said, LLMs are harming disadvantaged people right now.
@futurebird

@hobs except that LLMs and "generative AI" haven't meaningfully advanced machines' ability to make decisions at all. It is chrome applied to the same old chunk of "expert systems" and "machine learning" iron that has been worked over for decades.

It merely adds a grammatically correct front end to pattern recognition. The technology being presented today is not truly AI, nor will it ever kill us all. That is not to say doomsday AI is impossible, but it would be ACTUAL AI, based on technology quite a bit further in the future than most expect.

What passes as AI today would at most play an incidental role in our destruction. It would still very much be a human-driven process.

@futurebird

@msh
Not true. All the #benchmarks say otherwise. You have to look past the hyped #LLMs to the bread-and-butter BERT and BART models, but the trend is undeniable:

paperswithcode.com/area/natura

#classification #retrieval #summarization #QuestionAnswering #translation #generation #NER #VQA

You name an NLP problem and there's an LLM that is now better at it than the average human. Not so 2 yrs ago. The times they are a-changin'.
@futurebird
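
(For concreteness, here is a minimal sketch of what running two of the tasks named above with "bread and butter" models looks like, using the Hugging Face transformers pipeline API. The checkpoints facebook/bart-large-cnn and dslim/bert-base-NER are common public models chosen purely for illustration; the thread does not name any particular checkpoint. Assumes the transformers library and a backend such as PyTorch are installed.)

```python
from transformers import pipeline

# Summarization with a BART checkpoint fine-tuned on CNN/DailyMail.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
article = (
    "Large language models have posted rapid gains on standard NLP "
    "benchmarks in classification, retrieval, summarization, question "
    "answering, translation, generation, NER, and VQA."
)
print(summarizer(article, max_length=30, min_length=8)[0]["summary_text"])

# Named-entity recognition with a BERT checkpoint fine-tuned on CoNLL-2003.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
print(ner("Bob Dylan wrote The Times They Are a-Changin' in 1963."))
```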

@hobs

Are those NLP problems accurately described as, and generalizable to, "decision making", though?

Seems to me they are quite different.

@msh @futurebird

@ceoln
Yea, definitely not real-world-living kinds of decisions. But we assign people to these tasks in cubicles every day, and we put them on standardized tests of IQ and education for humans. They're the best that we can come up with so far... until LLMs start walking around and helping us around the house... or making a reservation for us at the hot new restaurant down the street with the difficult receptionist.
@msh @futurebird

@hobs

Arguably so, but that isn't the question in the current context. The ability to do certain rote NLP jobs, and to do well on some tests, is very different from "outperforming humans at decisionmaking", and from anything that poses an existential risk to humanity.

I would suggest that no matter how good an LLM becomes at these particular tasks, it does not thereby risk the extinction of the human race. This seems almost obvious?

@msh @futurebird

@ceoln
Yea, you have a higher bar for "decisionmaking" than I do. And perhaps than employers do. After all, most employers are rapidly supplanting human decisionmaking with algorithms, including LLMs. My software and my developers are being replaced by my customers as we speak. If we don't put LLMs into our plans we don't win contracts.
@msh @futurebird

@hobs

"After all most employers are rapidly supplanting human decisionmaking with algorithms, including LLMs. My software and my developers are being replaced by my customers as we speak."

Are they really, or are they just claiming to because everyone else is? Sincere question: I don't know the answer, and it seems important.

I know some very specific decision-making (e.g. loan approvals) has been offloaded to AI, probably more than it should have been, but are people doing it more now than before LLMs?

(I'm aware of many extremely vague and enthusiastic reports of this, but they are very short on facts.)

"If we don't put LLMs into our plans we don't win contacts."

Sure, but that's marketing psychology, and again I think irrelevant to the question of extinction risks?

@msh @futurebird

@ceoln
I'm just trying to think of concrete human white-collar tasks that LLMs are now doing more and more of.
@msh @futurebird

@hobs

Yeah, I'd love to hear of any well-attested examples!

But again, I'm not sure "LLMs are replacing humans at some white-collar tasks" is directly relevant to claims of extinction risks? At least I'm not sure how that would work.

@msh @futurebird
