I recognize that ignoring human costs is incredibly convenient, but I wish more people could really internalize that the only thing keeping LLMs from routinely spitting out the monstrous garbage in their training data is the armies of underpaid human moderators, usually in developing countries, seeing and filtering out the worst things in the world for pennies an hour. LLMs rely on classic extractive-resource colonialism, but the resource being extracted is psychological health and safety.

@mhoye @gvwilson do you have a citation for this? It certainly seems consistent with what I know about the players involved, but I haven't seen reporting on this specifically yet.

@mhoye @glyph @gvwilson
You just can't make this shit up...

> OpenAI’s outsourcing partner in Kenya was **Sama**, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft.
> Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.
