https://www.theguardian.com/technology/article/2024/may/18/openai-putting-shiny-products-above-safety-says-departing-researcher
"Sutskever, who was also OpenAI’s chief scientist, wrote in his X post announcing his departure that he was confident OpenAI “will build AGI that is both safe and beneficial” under its current leadership."
"AGI" is completely made up bullshit. That is useful to keep in mind when reading anything these people say.
I think the thing to remember is that it is fairly easy for a focus on "risks" to take a thousand bites out of a product, especially this sort of product.
From what we've seen of their products, they're oversensitive to just about anything, and it hurts the quality of what they ship.