I don't even know where to start with this one. Apparently the mental health service kokocares.org ran a multi-day trial where their "peer supporters" used ChatGPT to craft their responses to the people they were providing with mental health support.
https://www.businessinsider.com/company-using-chatgpt-mental-health-support-ethical-issues-2023-1
Co-founder Rob Morris tweeted a description of the trial in this thread, in which he explicitly described the activity as an experiment.
https://twitter.com/RobertRMorris/status/1611450197707464706?s=20&t=ndcUoODJItgoTH8mlf8a9Q
Morris later tried to walk back his original claims, suggesting that he had obtained informed consent from all participants.
But it appears from his own account that only the "peer supporters" were informed.
According to Business Insider, "Koko users were not initially informed the responses were developed by a bot, and 'once people learned the messages were co-created by a machine, it didn't work,' Morris wrote on Friday."
What makes this different from your average tech-company-does-very-shitty-thing-using-AI-without-informed-consent story is that the co-founder was so oblivious to any notion of research ethics, consent, mental health ethics, etc., that he wrote a long thread describing the experiment himself and was caught completely off guard when people reacted with shock.
I think this example tells us a lot about whether we can trust tech companies to act ethically: it looks like they don't know how, even when they want to.
It reminds me of the entire Brian Wansink story.
@ct_bergstrom At what point do the CDC, the Department of Justice, and the FBI get involved with this? I mean, I'm no lawyer, but RICO laws or health emergency or something. @FBI