"He has identified three levels of human signals that we’ve lost in adopting AI into our communication. The first level is that of basic humanity signals, cues that speak to our authenticity as a human being like moments of vulnerability or personal rituals, which say to others, “This is me, I’m human.” The second level consists of attention and effort signals that prove “I cared enough to write this myself.” And the third level is ability signals which show our sense of humor, our competence, and our real selves to others. It’s the difference between texting someone, “I’m sorry you’re upset” versus “Hey sorry I freaked at dinner, I probably shouldn’t have skipped therapy this week.” One sounds flat; the other sounds human."
https://www.theverge.com/openai/686748/chatgpt-linguistic-impact-common-word-usage
While I'm all for criticizing LLMs / AI in every way possible, I am a bit worried about the potential collateral damage of arguments like this.
Specifically, it seems to me that another way of stating this article's thesis is that ChatGPT doesn't sound like a *neurotypical* human.
Guess who else doesn't sound like a neurotypical human and really could do without people questioning their agency and validity as real human beings.
@jmcclure which of the three kinds of signals do you think a neurodivergent individual would fail to exhibit? If anything, their signals would be even more distinctive. LLMs, in contrast, are overly ordinary.
If anyone risks lacking those signals, I suspect it might be psychopaths, but that's superficial speculation rather than a meaningful hypothesis.