theguardian.com/technology/art
"Sutskever, who was also OpenAI’s chief scientist, wrote in his X post announcing his departure that he was confident OpenAI “will build AGI that is both safe and beneficial” under its current leadership."
"AGI" is completely made up bullshit. That is useful to keep in mind when reading anything these people say.

What a surprise. A guy whose entire job is dependent on finding "risks" (concrete or not) to talk about reckons that the CEO hasn't been listening to everything he has to say. I wonder why.

I think the thing to remember is that it's fairly easy for "risk"-hunting to take a thousand bites out of a product, especially this sort of product.

From what we've seen of their products, they're oversensitive to just about anything, and it hurts the quality.
