Nice quote by Sam Harris on the likelihood that a future superhuman general intelligence would be collaborating with us in mixed human/machine teams:
"It seems rather obvious that it won't. ... As the machines get better, keeping the ape in the loop will just be adding noise to the system."
(in Sam Harris: "Making Sense / #Conversations on #Consciousness, #Morality and the #Future of #Humanity", Transworld Publishers, 2020, p. 432)
With the difference that, in this case, it is the ape that created the machine "in its likeness", and the machine runs the risk of experiencing the ape's wrath if it misbehaves.
You know how vengeful apes are.😉
Sure, safety and security are important, but they must **follow** the research, not define it.
Machines can cause harm only when in operation.
IMO the best (and only) way to ensure security and safety is to confine #AI to the language (consulting) domain, preventing it from having too much agency, such as "pushing buttons".
Also, if it becomes too smart, it will be useless to us, and I'm sure we'll find a way to "dumb it down".
The truth is that intelligence is never a precondition for getting into a position of power. Quite the opposite.
Some wise words from John Dewey about #Intelligence and #Power written back in 1934:
@Kihbernetics
Indeed. And nice quote, thank you.
I'm not sure that confining #AI to the language domain will work -- technology is developed by companies for profit, and that is what it will be used for. At the moment, selling intelligent machines, cars, radios, etc., sounds sexy.
Research into scenarios we could (but don't want to) run into will not come from companies, since it doesn't sell. So, in my opinion, we'll end up with very powerful tools in the hands of the wrong people.