Recently there has been a lot of excitement around this paper by Piantadosi claiming that Chomsky's approach to language is now refuted by large language models (lingbuzz.net/lingbuzz/007180). I am quite sympathetic to this idea, so I decided to give it a read. But... am I the only one who finds this paper deeply flawed? (1/4)

First, from a conceptual point of view: sometimes LLMs are treated as "models", sometimes they are called "scientific theories", and sometimes they even seem to sit above scientific theories, since they can "search for theories in an unrestricted space". (2/4)

More basically, Piantadosi argues that LLMs can manipulate semantic content. As an illustration, he presents ChatGPT's response when asked to generate new sentences in the style of Chomsky's "colorless green ideas sleep furiously". (3/4)

... but for me this example shows quite the opposite: ChatGPT successfully copies the syntactic structure of the sentence, yet completely misses the point of its nonsense. In ChatGPT's sentences, some parts make sense and others don't... (4/4)

@leovarnet Interesting recent discussion. A couple of things strike me too on a quick read:

1. There is a big claim in there (based on the Baroni 2022 reference) that essentially "fitted model = theory". To me that feels like a way of not just moving the goalposts, but actually redefining them.

2. A lot of the criticism of Chomsky's work rests on weak arguments: lots of "Chomsky says A but we see B, therefore A is undermined." But as far as I can tell, A and B are not necessarily mutually exclusive.

The Chomsky criticism in the paper reminds me a little of the Bayesian/frequentist conflicts that I used to see when I was in academia (statistics).
