I think this article, and other articles about this letter, exaggerate the threat of AI systems becoming dangerous conscious entities while overlooking both the real potential benefits of AI and the dangers it already poses.
AI is not about to go Skynet on humanity; it can be used for good or evil depending on how we design, regulate, and use it.
More importantly, though, #AI can also bring massive benefits to humanity, including improving health care, productivity, education, entertainment, etc...
However, AI also poses real challenges and risks right now: ethical issues, bias, privacy, security, and social impact are all relevant today.
These are the issues we should focus on and address. But fear-mongering and sensationalism generate clicks, so… we can’t avoid them. #chatGPT
@gpowerf
On the other hand, we have thinkers like Yudkowsky, whose piece was published yesterday in TIME: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
I think it's a mistake to focus too narrowly on current small dangers like bias and privacy. We should also look ahead to where things are headed. And if we do that, we see we are currently racing to create powerful systems that we don't understand. We shouldn't just handwave away the potential danger with trite allusions to movies.
@gabe Let me caveat this by saying that eminent thinkers can be wrong, and so can I. With that out of the way, I usually struggle with extreme views like this, and with overly optimistic ones too! The truth often lies somewhere in the middle.
This article presents an extremely pessimistic view of AI that I don’t think is shared by many other experts. There are a number of things I find unrealistic (and again, I am not an expert!):
* The article assumes that AGI is not only inevitable and imminent, but that this AI will have a superhuman, almost God-like intelligence that is unpredictable to us and misaligned with humanity at large. Why? Is there any evidence for this, or is it just fear of the unknown?
* The article ignores the potential benefits and opportunities of AI for humanity and society. I’d argue there is an inherent danger in not developing AI, as it can help solve many of the world’s problems, such as poverty, disease, climate change, education, etc… Shutting down all AI research and development would deprive humanity of these positive outcomes and possibilities.
* His proposal to shut it all down is also utterly unrealistic; any shutdown attempt would face legal and political challenges. Shutting down all AI research and development would require global coordination, and we all know that won’t happen. If the USA and the EU shut it all down, what stops China, Kenya, Russia, etc… from carrying on and gaining the upper hand over the rest of the planet?
So basically, I take his viewpoint with a huge pinch of salt for now. But like I said, I could be wrong and Skynet could be just around the corner.
But really? “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally, everyone on Earth will die.” I don’t know; I think we are in more danger from nuclear weapons, climate change, etc… than from AIs we can easily switch off.
@gabe This is a fascinating topic. You know what? After thinking about it more deeply, there is another possible argument to consider, which is that even if Yudkowsky’s prediction is correct and we are close to creating a God-like intelligence, we owe it to the Universe to bring forth a God and simply trust that this God will be benevolent.
I'm not suggesting I agree with this view, but I can see why some may consider it the way to proceed. Humanity's legacy might not be the preservation of humanity as it is today, but instead the creation of a God.