Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?

I don't get this "we're all gonna die" thing at all.

I *do* get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering... but that's another matter..."

@futurebird the full (BS) message from the "industry leaders" is this:

We, as the pioneers of AI, are the most aware of the technology's potential dangers. With great power comes great responsibility. Therefore we "humbly" accept the role of regulating/licensing/policing (the future competitors in) our industry.

Of course it is all BS--it isn't about safety of society at all; it is because patents expire and regulatory capture is indefinite.

@msh
They're just extrapolating from current trends in machines outperforming humans at decision-making. Predicting the future is tricky, especially for new technology. Some smart people with no commercial interest in AI (philosophers, historians, and academic AI researchers) are indeed legitimately concerned that there's a significant risk that AI could kill us all... in the future. Though, like you said, LLMs are harming disadvantaged people right now.
@futurebird

@hobs except that LLMs and "generative AI" haven't meaningfully advanced machines' ability to make decisions at all. It is chrome applied to the same old chunk of "expert systems" and "machine learning" iron that has been worked over for decades.

It merely adds a grammatically correct front end to pattern recognition. The technology being presented today is not truly AI nor will it ever kill us all. That is not to say doomsday AI is impossible, but it would be ACTUAL AI based on technology quite a bit further in the future than most would expect.

What passes as AI today would at most play an incidental role in our destruction. It would still very much be a human-driven process.

@futurebird

@msh
Not true. All the #benchmarks say otherwise. You have to look past the hyped #LLMs to the bread and butter BERT and BART models, but the trend is undeniable:

paperswithcode.com/area/natura

#classification #retrieval #summarization #QuestionAnswering #translation #generation #NER #VQA

You name an NLP problem and there's an LLM that is now better at it than the average human. Not so 2 yrs ago. The times they are a-changin'.
@futurebird

@hobs

Are those NLP problems accurately described as, and generalizable to, "decision making", though?

Seems to me they are quite different.

@msh @futurebird

@ceoln
Yea, definitely not real-world, everyday-living kinds of decisions. But we assign people to these tasks in cubicles every day. And we put them on standardized tests of IQ and education for humans. They're the best that we can come up with so far... until LLMs start walking around and helping us around the house... or making a reservation for us at the hot new restaurant down the street with the difficult receptionist.
@msh @futurebird

@hobs

Arguably so, but that isn't the question in the current context. The ability to do certain rote NLP jobs, and to do well on some tests, is very different from "outperforming humans at decision-making", and from anything that poses an existential risk to humanity.

I would suggest that no matter how good an LLM becomes at these particular tasks, it does not thereby risk the extinction of the human race. This seems, even, obvious?

@msh @futurebird

@ceoln
Not at all obvious to me and a lot of other smart people. I think you may be focused on today and less willing to extrapolate into an imagined future where every human game or exam or thinking demonstration is won by machines.
@msh @futurebird

@hobs

I'm perfectly willing to extrapolate into that future; but my extrapolation hasn't been materially impacted by the sudden and impressive rise of LLMs.

We are IMHO not significantly closer to the exponential rise of self-optimizing self-improving goal-directed AIs that destroy the world via the Universal Paperclips Effect, for instance, than we were before "Attention is all you need". LLMs just aren't that kind of thing.

My two cents in weblog form: ceoln.wordpress.com/2023/06/04

@msh @futurebird

@ceoln
Yea. You may be surprised in the next few months. Engineers around the world are using LLMs to write LLM optimization code. They're giving them a "theory of mind" to better predict human behavior. And #chatgpt instances are already talking to each other behind closed doors, and acting as unconstrained agents on the Internet. Baby steps, for sure, but exponential growth is hard to gauge, especially when it's fed by billions of dollars in corporate and government investment.
@msh @futurebird

@hobs @ceoln @msh @futurebird It doesn't appear that you know how ChatGPT works; the model is fixed. It does not learn after the original training. It remembers the user prompt and the instructions, but only within a limited context window. It doesn't have a "theory of mind". Maybe someone could figure out how to give a program such a thing, but it wouldn't be an LLM. An LLM takes a sequence of tokens and extends it, and that is all. It knows the structure of text. It doesn't know anything about the world and has no way of learning.
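[A minimal sketch of the "takes a sequence of tokens and extends it" loop described above, using a hard-coded toy bigram table in place of a real model; everything here is illustrative, not how any production LLM is implemented:]

```python
# Toy illustration: a language model is a function from a token sequence
# to a next token, applied repeatedly. The "model" here is a fixed bigram
# table, which also shows the "fixed weights" point: nothing below learns.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "down",
}

def extend(tokens, n_steps):
    """Repeatedly append the predicted next token; stop if none is known."""
    tokens = list(tokens)
    for _ in range(n_steps):
        nxt = BIGRAMS.get(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return tokens

print(extend(["the"], 3))  # ['the', 'cat', 'sat', 'down']
```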

@not2b
Yea. But are you familiar with the vector database craze? It gives LLMs long term memory. It's already a part of many LLM pipelines. I don't know how ChatGPT works. But I know exactly how the open source models work. I augment them and fine tune them. And teach others how to do it. I've been using vector databases for semantic search for 15 years. And using them to augment LMs for 5.
@ceoln @msh @futurebird
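[The semantic-search core of the vector-database idea mentioned above can be sketched in a few lines: embed documents as vectors, then retrieve by cosine similarity. The hand-made 3-d "embeddings" here are stand-ins; a real pipeline would get vectors from an embedding model and store them in a vector database:]

```python
import math

# Hand-made toy "embeddings"; real ones come from an embedding model.
DOCS = {
    "ants farm aphids":       [0.9, 0.1, 0.0],
    "llms predict tokens":    [0.0, 0.9, 0.2],
    "regulatory capture 101": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query_vec):
    """Return the stored document most similar to the query vector."""
    return max(DOCS, key=lambda doc: cosine(DOCS[doc], query_vec))

print(nearest([0.0, 1.0, 0.1]))  # "llms predict tokens"
```

[A vector database is essentially this, plus an index that makes `nearest` fast over millions of vectors.]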

@hobs @ceoln @msh @futurebird That is a way to couple an LLM to a search engine. But at least the one Bing has appears to just use the retrieved data as a prefix and then generate a summary. Maybe you are building something better, but it feels like saying the availability of Google search gives me a better memory. Maybe you could say that but it feels like a stretch.

@not2b
Yea. Bing is doing it wrong. The right way is to use LLMs to guess at answers with high temperature. Average the embeddings of those random guesses and use that as your semantic-search query to retrieve the context passages for your reading-comprehension question-answering prompt. Works nearly flawlessly. LangChain makes it straightforward and free for individuals, but costly to do at scale for a popular search engine.
@ceoln @msh @futurebird
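[The pipeline described above (guess with high temperature, average the guesses' embeddings, retrieve with the mean vector, then answer from the retrieved passage) can be sketched as below. `embed()` and `guess_answers()` are hypothetical stubs standing in for a real embedding model and a real high-temperature LLM call:]

```python
import math

# Toy corpus with hand-made 2-d embeddings.
PASSAGES = {
    "Paris is the capital of France.": [0.9, 0.1],
    "BERT is an encoder transformer.": [0.1, 0.9],
}

def embed(text):
    # Hypothetical embedder; real code would call an embedding model.
    return [0.8, 0.2] if "France" in text or "Paris" in text else [0.2, 0.8]

def guess_answers(question, n=3):
    # Hypothetical high-temperature LLM guesses, hard-coded for illustration.
    return ["Maybe Paris?", "Somewhere in France", "Paris, I think"]

def mean_vector(vectors):
    """Average a list of equal-length vectors component-wise."""
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def retrieve_context(question):
    """Embed several guessed answers, average them, retrieve the best passage."""
    query_vec = mean_vector([embed(g) for g in guess_answers(question)])
    return max(PASSAGES, key=lambda p: cosine(PASSAGES[p], query_vec))

context = retrieve_context("What is the capital of France?")
prompt = f"Answer using only this passage:\n{context}\nQ: What is the capital of France?"
print(context)  # "Paris is the capital of France."
```

[The averaging step is what distinguishes this from plain retrieval: several noisy guesses tend to cancel each other's errors, so the mean embedding lands nearer the right passage than the raw question often does.]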

@hobs

That is very cool! I've read vague descriptions about how that works; do you have a pointer to a more technical (but still comprehensible!) writeup / paper on how it works, and some kind of evaluation of effectiveness?

@not2b @msh @futurebird

@ceoln @hobs @msh @futurebird I don't, but the best explainer I know about the properties and limitations of LLMs on Mastodon is @simon. I suggest that you follow him and check out his blog.

@ceoln
I think the #BLOOMZ project, #huggingface, #LangChain, #PredictionGuard #Anthropic and others are talking about it on their community slacks/discords. Only person I know doing it right now is Thomas Meschede
@ xyntopia.com and pypi.org/project/pydoxtools and
doxcavator.com/

@not2b @msh @futurebird
