I don't even know where to start with this one. Apparently the mental health service kokocares.org ran a multi-day trial where their "peer supporters" used ChatGPT to craft their responses to the people they were providing with mental health support.

businessinsider.com/company-us

Co-founder Rob Morris described the trial in this Twitter thread, in which he explicitly refers to the activity as an experiment.

twitter.com/RobertRMorris/stat

Morris later tried to walk back his original claims and suggested that he had obtained informed consent from all participants.

But it appears from his own claims that only the "peer supporters" were informed.

According to Business Insider, "Koko users were not initially informed the responses were developed by a bot, and 'once people learned the messages were co-created by a machine, it didn't work,' Morris wrote on Friday."

What makes this different from your average tech-company-does-very-shitty-thing-using-AI-without-informed-consent story is that the co-founder was so oblivious to any notion of research ethics, consent, mental health ethics, etc., that he wrote a long thread describing the experiment and was caught completely off guard when people reacted with shock.

I think this example tells us a lot about whether we can trust tech to act ethically: looks like they don't know how, even when they want to.

Just in case you want to, say, run an experiment on 4,000 unaware people suffering mental health crises.


@ct_bergstrom At what point do the CDC, the Department of Justice, and the FBI get involved with this? I mean, I’m no lawyer, but RICO laws or a health emergency or something. @FBI
