Sure, safety and security are important, but they must **follow** the research, not define it.
Machines can cause harm only when in operation.
IMO the best (and only) way to ensure security and safety is to confine #AI to the language (consulting) domain, preventing it from having too much agency, such as "pushing buttons".
Also, if it becomes too smart, it is useless to us, and I'm sure we'll find a way to "dumb it down".
The truth is that intelligence is never a precondition for getting into a position of power. Quite the opposite.
Some wise words from John Dewey about #Intelligence and #Power, written back in 1934:
Indeed. And nice quote, thank you.
I'm not sure that confining #AI to the language domain will work -- technology is developed by companies for profit, and that is what it will be used for. At the moment, selling intelligent machines, cars, radios, etc., sounds sexy.
Research into scenarios we could (but don't want to) run into will not come from companies, since it doesn't sell. So, in my opinion, we'll end up with very powerful tools in the hands of the wrong people.
@Kihbernetics The quote's from a conversation between S. Harris and M. Tegmark, in which they discuss a hypothetical scenario where AI has become more intelligent than humans. At that point, switching the AI off is no longer an option. They arrive at an interesting conclusion: if we fund lots of AI research (as we do), we should also fund lots of AI security research (which we currently do not at all).