Hey y'all, I know you know this, but while you definitely shouldn't use GPTs for legal research, also don't rely on GPTs for RESEARCH, PERIOD.

They are neither giving nor TRYING to give you intersubjectively associated and derived facts; they are not even remixing factual CONCEPTS into new forms.

They are modelling human biases out into digestible bullshit with a statistically-determined high probability of being swallowed.

That is all.

They don't have to be this way, but, at present, the people making them have no incentive to change them. So. Don't lean on them for fact stuff. It's not what they do.


@Wolven Indeed!

They do *sort of* have to be this way, in the sense that we don't have a solution to the factuality problem in large language models. Architecturally, they're trained to predict the next likely token, so they spout stuff that sounds like their training set, with no notion of true or false anywhere in the objective.
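
To make that concrete, here's a minimal, purely illustrative sketch of the objective these models are trained on (the tokens and probabilities are invented for the example). Notice what it rewards: producing *likely* text, with no term for truth anywhere.

```python
import math

def next_token_loss(probs, target):
    """Cross-entropy at one training step: -log P(target | context)."""
    return -math.log(probs[target])

# Toy model distribution over possible next tokens for some context,
# e.g. "The capital of France is ...". Numbers are made up.
probs = {"Paris": 0.6, "Lyon": 0.3, "Atlantis": 0.1}

# The loss depends only on how probable the continuation is under the
# model, never on whether the resulting sentence is true.
print(next_token_loss(probs, "Paris"))     # low loss: probable text
print(next_token_loss(probs, "Atlantis"))  # high loss: improbable text
```

In a real model the distribution comes out of a neural net and the loss is averaged over billions of tokens, but the shape of the objective is the same: plausibility, not factuality.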

So as far as this particular technology is concerned, they do have to be this way; it's just what they do! The problem isn't so much incentive as it is basic know-how. People are working hard on altering or adding to the underlying technology to make these models emit truths where that's important, but it's definitely an unsolved problem.
