@rysiek It sounds like you are describing an example of https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy (I mention this because knowing names for things is sometimes useful, like in that old joke/anecdote about flowers that are more easily recognized if one can name them.)
@robryk fair points, and thank you for the pointer to the motte-and-bailey fallacy. Not exactly what I was talking about, but it's definitely relevant.
What I object to is AI hypers using undefined terms and then wielding that very lack of definition against those who disagree with them.
Let's call my argument "Russell's Thinking Teapot": the fact that one cannot prove that GPT (or a china teapot orbiting the Sun between Earth and Mars) does not think does not mean it actually does.
@rysiek Also, I'm somewhat conflicted about what social contracts I'd want around defining things.
On one hand, being explicitly imprecise has value. This is ~always part of figuring out what precise statements are true.
On the other, being imprecise trashes modus ponens: you end up playing the logical-implication equivalent of the game of telephone.
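A toy way to see that telephone-game effect (my framing, not anyone's formal claim, assuming each fuzzy step fails on at most an ε fraction of interpretations of its imprecise terms): the end-to-end guarantee degrades linearly with chain length.

```latex
% Toy model (an assumption for illustration): each imprecise
% implication A_i => A_{i+1} fails on at most an eps-fraction of
% interpretations. If every link holds, material implication is
% transitive and the whole chain holds, so by the union bound the
% chain can only fail if some link fails:
\[
  \Pr\bigl[\neg(A_1 \Rightarrow A_{n+1})\bigr]
  \;\le\; \sum_{i=1}^{n} \Pr\bigl[\neg(A_i \Rightarrow A_{i+1})\bigr]
  \;\le\; n\varepsilon .
\]
% One sloppy step costs almost nothing (1 - eps), but a long chain
% of them is only guaranteed to 1 - n*eps, i.e. possibly nothing.
```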
An obvious contract that seems to satisfy both is to expect everyone to be explicit when they are being imprecise. However, a failure mode of that is that people often don't want to bother being precise, and the contract creates ~no incentive not to be imprecise all the time.