Its faults make it a very depressing development for me. It can produce very human-sounding prose, yet it seems unable to avoid injecting false statements into ~anything it outputs.
First, this is very depressing for people who actually read what they see and remember small tidbits that were mentioned in passing: it is now way more likely that those tidbits are garbage.
Secondly, and more importantly, this is an asymmetric tool for creating disinformation. It can generate plausible-sounding wrong statements much more easily than it can generate correct ones, and there are people/organisations who want exactly the former. Thus it will defeat many of the factchecking heuristics that work today. (For factchecking of news this is a fundamental, inescapable problem; for factchecking of statements about empirically available knowledge it is still a bad one.)
I expect us to get way more hard-to-filter spam and not much in return. I also expect the Sybil problem to start appearing in places where the difficulty of simulating a human had previously kept it out.
@jhertzli But then how do they verify them? You ~can't verify them all from first principles, so it encourages reliance on authorities. (Maybe I'm overly pessimistic and these plausible-sounding claims will still be self-inconsistent in noticeable ways; a sketch of that kind of check is below.)
Alas, I think the larger problem is that verifying them, however you do it, takes time, while generating such harder-to-recognize spam is now cheap.
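To make the parenthetical concrete: one cheap check is to run every pair of claims in a batch through an off-the-shelf natural-language-inference model and flag pairs it labels contradictory. This is a minimal sketch, assuming the Hugging Face transformers pipeline API and the roberta-large-mnli model; the example claims, the 0.8 threshold, and the find_contradictions helper are all hypothetical, not anything from this thread.

    from itertools import combinations
    from transformers import pipeline

    # roberta-large-mnli classifies a (premise, hypothesis) pair as
    # CONTRADICTION, NEUTRAL, or ENTAILMENT.
    nli = pipeline("text-classification", model="roberta-large-mnli")

    def find_contradictions(claims, threshold=0.8):
        """Return pairs of claims the NLI model flags as mutually contradictory."""
        flagged = []
        for a, b in combinations(claims, 2):
            # text/text_pair is the pipeline's premise/hypothesis input format.
            result = nli({"text": a, "text_pair": b})[0]
            if result["label"] == "CONTRADICTION" and result["score"] >= threshold:
                flagged.append((a, b, result["score"]))
        return flagged

    # Hypothetical claims; the first two cannot both be true.
    claims = [
        "The bridge opened to traffic in 1932.",
        "The bridge did not open until 1954.",
        "The bridge spans a tidal strait.",
    ]
    for a, b, score in find_contradictions(claims):
        print(f"{score:.2f}: {a!r} contradicts {b!r}")

This only catches internal inconsistency, of course; a batch of mutually consistent falsehoods sails straight through, which is rather the point above.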