ChatGPT --
Its faults make it a very depressing development for me. It's able to produce very human-sounding prose, yet it's unable not to inject false statements into ~anything it outputs.
First, this is very depressing for people who actually read what they see and remember small tidbits that were mentioned: it makes it way more likely that those tidbits are garbage.
Secondly, and more importantly, this is an asymmetric tool for disinformation creation. It can be used to generate plausible-sounding wrong statements much more easily than it could be used to generate correct statements, and there are people/organisations who want to do the former. Thus, this will defeat many of the factchecking heuristics that work today. (This is a fundamental, inescapable problem for factchecking of news, and still a bad problem for factchecking of statements about empirically available knowledge.)
I expect us to get way more hard-to-filter spam and not much in return. I expect the Sybil problem to start appearing in places where the hardness of simulating a human previously kept it at bay.
@rysiek I think this is similar to, yet distinct from, your recent thread on various ML-based content generators.
> Sorry if my thread triggered you here, I empathize with the depressive feeling.
No worries, you didn't. (I think that in my case repetitions of something I already know affect me way less than sad updates to the state of knowledge, and situations where I'm obligated not to correct incorrect factual information.)
That said, I was overly pessimistic when I said we would get ~nothing useful out of it. I've seen plausible reports of ChatGPT being used in a classroom context as a first source of pointers when people are stuck trying to understand something. Notably, (a) it's only the first source -- if things are still/more confusing, people have a human TA; (b) it's used only to get pointers to what to look at in sources that are way less likely to spuriously lie.