
#chatgpt

The faults it has make it a very depressing development for me. It's able to produce very human-sounding prose, yet it is unable to avoid injecting false statements into ~anything it outputs.

First, this is very depressing for people who actually read what they see and remember small tidbits that were mentioned. It makes it way more likely that those tidbits are garbage.

Secondly, and more importantly, this is an asymmetric tool for creating disinformation. It can be used to generate plausible-sounding wrong statements much more easily than it can be used to generate correct statements, and there are people and organisations who wish to do the former. Thus, it will defeat many of the fact-checking heuristics that work today. (This is a fundamental, inescapable problem for fact-checking news, and still a serious problem for fact-checking statements about empirically verifiable knowledge.)

I expect us to get way more hard-to-filter spam and not much in return. I expect the Sybil problem to start appearing in places where the hardness of simulating a human previously kept it at bay.

@rysiek I think this is similar to, yet distinct from, your recent thread on various ML-based generators.


@robryk It may immunize people against believing plausible-sounding claims.


@jhertzli But then how do they verify them? You ~can't verify them all from first principles, so it encourages reliance on authorities. (Maybe I'm overly pessimistic and these plausible-sounding claims will still be self-inconsistent in noticeable ways.)

Alas, I think a larger problem is that verifying them, however you do it, takes time, while generating such harder-to-recognize spam is now easy.

@robryk indeed! Great minds think alike. Sorry if my thread triggered you here, I empathize with the depressive feeling.

@rysiek

> Sorry if my thread triggered you here, I empathize with the depressive feeling.

No worries, you didn't. (I think that in my case, repetitions of something I already know affect me way less than sad updates to my state of knowledge, or situations where I am obligated not to correct incorrect factual information.)

That said, I was overly pessimistic when I said that we will get ~nothing useful out of it. I've seen plausible reports of ChatGPT being used in a classroom context as a first source of pointers when people are stuck trying to understand something. Notably important is that (a) it's only the first source -- if things remain (or become more) confusing, people have a human TA; and (b) it's used only to get pointers at what to look at in sources that are far less likely to spuriously lie.
