On the current LLM chatbot mania:
1. The claim that they're glorified Markov chains, just stats engines, is wrong.
2. Some critics are essentially echoing Searle’s Chinese Room argument, which is wrong: en.wikipedia.org/wiki/Chinese_
3. The VCs and BigCos are way out over their skis. Billions will be pissed away.
4. We don’t know yet whether the current obvious problems with the tech are fixable. Maybe, maybe not.
5. Wait-and-see is an extremely rational position at this point in history.

@timbray What are they if not statistical models of the 'best' next word in a string of text?

@edwiebe My understanding is that they are the output of convolutional neural networks. Of which I don't have a deep understanding, but I don't think the mechanisms are primarily statistical.


@timbray @edwiebe Which credible critic of LLMs is saying that they are "glorified Markov chains"? Haven't heard that. And they aren't convolutional neural networks. They're composed of multiple transformer layers, which are most certainly not Markov chains, but they are entirely statistical.
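To make the distinction in that post concrete, here's a toy sketch (my own illustration, not how any real LLM is built): a Markov chain picks the next word from counts conditioned only on the current word, while a transformer LM predicts the next token from a distribution conditioned on the entire context. The co-occurrence scoring below is just a stand-in for that whole-context conditioning; real transformers do it with attention over learned vector representations.

```python
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# --- Bigram Markov chain: P(next | current word) from raw counts ---
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def markov_next(word):
    # Next word depends ONLY on the current word -- the Markov property.
    counts = bigrams[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# --- Whole-context conditioning: a crude stand-in for attention ---
def context_next(context):
    # Every word in the context contributes to the score of each
    # candidate; a transformer computes an analogous context-wide
    # weighting with learned attention, not co-occurrence counts.
    scores = Counter()
    for w in context:
        for cand, n in bigrams[w].items():
            scores[cand] += n
    return scores.most_common(1)[0][0]

print(sorted(bigrams["the"].elements()))   # successors of "the" in the corpus
print(context_next(["the", "cat"]))        # prediction uses the whole context
```

Both mechanisms are "statistical models of the best next word"; the disagreement in the thread is really about whether conditioning on only the previous token (a Markov chain) is a fair description of conditioning on the whole context (a transformer).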

@twitskeptic @timbray @edwiebe It’s not really about “credible”, it’s more “loud” and “influential”. There are quite a few with long reach who have not bothered to find out how LLMs work but are quite happy holding forth about them.

Qoto Mastodon