I like Jaron Lanier's approach to "#AI". It can "kill us all", like most of the other #technology we invent. He raises some very interesting points in this article, such as:
>Think of people. People are the answer to the problems of bits.
>If society, economics, culture, technology, or any other spheres of activity are to serve people, that can only be because we decide that people enjoy a special status to be served.
https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
I think it might be you who is looking at it from the wrong perspective, attributing humanity where there is none.
For Lanier (and I agree with him on that), #AI is just another #tool developed to do some work for us (or, for some of us, to serve simply as a plaything, a refined #tamagotchi).
WRT the "alignment problem", I'm not sure I want somebody else to align my tools for me. Ideally, they should come out of the box with some commonly agreed generic values and knowledge about the world, but after that, I'd like to be able to fine-tune and train them to serve the purpose I need them for.
That solves another problem: who is **responsible** for the actual harm their output may inflict on other people. You don't blame the gun for a murder; you prosecute the one who pulled the trigger. I don't see why it should be different with AI.
@Kihbernetics @melaniemitchell It's not just me; Harlan Ellison talked about this over 45 years ago in relation to deep space travel and what it would take. He then extended that to life on Earth. Here's a more recent take on it:
and a rebuttal of the above:
https://www.alignmentforum.org/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer
@Kihbernetics @melaniemitchell It has another advantage in that it is not influenced or constrained by emotion. It will always follow its directive, no matter what we feel.
@Kihbernetics @melaniemitchell I believe Jaron Lanier is too human-centric. One line he mentions exemplifies this: '“alignment” (is what an A.I. “wants” aligned with what humans want?)'. I believe he is looking at it from the wrong direction; the reality will become 'are humans aligned with what AI has been configured and has "learned" as the optimal path forward?' AI never has to sleep, so it will eventually surpass us in knowledge and ability . . .