Just a brief note on the #OpenAI blog post on responsible AGI (Artificial General Intelligence) development, published two days ago.
https://openai.com/blog/planning-for-agi-and-beyond/
In their objectives, the first point is:
"We want AGI [...] to be an amplifier of humanity."
That has an important implication. A human self cannot be "amplified" by an external authority; such empowerment must come from within. This requires a broad democratization of alignment and access, as well as meaningful input into the AI's behaviour.
I have expressed this as: "Have the AI think with you, not for you."
#SentientSyllabus #ChatGPT #HigherEd #Alignment #aiethics #OpenAI #InstructGPT
That's an interesting point, and Leo Szilard in his "Ten commandments ..." makes exactly the complementary point: "Do not destroy what you cannot create." What ties the two together is the need for respect.
As for the reliance part, that's what I mean by democratization and access. OpenAI is quite aware of this. If they live up to the values they declare in their post, a lot will have been gained.
But in a sense, it may not matter all that much: the cat is out of the bag. You can shut down a server, but you cannot unthink a thought. The ideas have been embraced by the open-source software community (see the great progress that @huggingface is making), and they are becoming ever more open and accessible. I gave some perspectives in my recent Sentient Syllabus update:
https://sentientsyllabus.substack.com/p/resource-updates-2023-02-24
In brief: fine-tuning an LLM can be had for a few hundred dollars, and running an LLM is possible on a high-end gaming rig. No one can turn that off on you. It then becomes a matter of who will be satisfied to remain a consumer of information, and who will strive to exercise their own agency. These are the perspectives that matter.
🙂