Does ChatGPT gablergh?
rys.io/en/165.html

> “Well you can’t say it doesn’t think” — the argument goes — “since it’s so hard to define and delineate! Even ants can be said to think in some sense!”

> This is preposterous. Instead of accepting the premise, we should fire right back: “you don’t get to use the term ‘think’ unless you first define it yourself”. And recognize it for what it is: a thinly veiled attempt to generate hype using badly defined terms for marketing.

#AI

@rysiek It sounds like you are describing an example of en.wikipedia.org/wiki/Motte-and-bailey_fallacy (I mention this because knowing the names for things is sometimes useful, as in that old joke/anecdote about flowers being easier to recognize once one can name them.)


@rysiek Also, I'm somewhat conflicted about what social contracts I'd want around defining things.

On one hand, being explicitly imprecise has value: it is ~always part of figuring out which precise statements are true.

On the other, being imprecise trashes modus ponens (you end up playing the logical-implication equivalent of the game of telephone; see the sketch below).

An obvious contract that seems to satisfy both is to expect everyone to be explicit when they are imprecise. A failure mode of that, though, is that people often don't want to bother being precise, and this contract creates ~no incentive against being imprecise all the time.
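(Sketching the telephone-game point in my own rough notation, with P_1 and P_2 standing for two subtly different senses of an imprecise term like "think": modus ponens is only valid when the antecedent matches exactly.)

\[
\frac{P \to Q \qquad P}{Q}\ \text{(valid)}
\qquad
\frac{P_1 \to Q \qquad P_2}{Q}\ \text{(invalid when } P_1 \neq P_2\text{)}
\]

With each imprecise hand-off the senses drift a little further apart, so a chain of such steps can end somewhere none of the individual premises support.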

@robryk fair points, and thank you for the motte-and-bailey fallacy pointer — not exactly what I was talking about, but it's definitely relevant.

What I object to is AI hypers using undefined terms and then turning that very lack of definition against those who disagree with them.

Let's call my argument "Russell's Thinking Teapot": the fact that one cannot prove that GPT does not think (any more than one can prove there is no china teapot orbiting the Sun between Earth and Mars) does not mean that it actually does.
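(Put symbolically, in my own notation, with P = "GPT thinks": the absence of a proof of the negation is not itself evidence for the claim.)

\[
(\nvdash \neg P) \;\not\Longrightarrow\; P
\]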
