I think this article, and other articles about this letter, are written in a way that exaggerates the threat of AI systems becoming dangerous conscious entities and overlooks both the real potential benefits and the current dangers of AI.

AI is not about to go Skynet on humanity; it can be used for good or evil depending on how we design, regulate, and use it.

More importantly, though, it can also bring massive benefits to humanity, including improvements in health care, productivity, education, entertainment, etc...

However, AI also poses real challenges and risks right now! Ethical issues, bias, privacy, security, and social impact are real and relevant today.

These are the issues we should focus on and address. But fear-mongering and sensationalism generate clicks, so… we can’t avoid them.

bbc.com/news/technology-651100

@gpowerf
On the other hand, we have thinkers like Yudkowsky, whose piece was published yesterday in TIME: time.com/6266923/ai-eliezer-yu

I think it's a mistake to focus too narrowly on current small dangers like bias and privacy. We should also look ahead to where things are headed. And if we do that, we see we are currently racing to create powerful systems that we don't understand. We shouldn't just handwave away the potential danger with trite allusions to movies.

@gabe Let me caveat this by saying that eminent thinkers can be wrong, and so can I. With that out of the way, I usually struggle with extreme views like this, and with overly optimistic ones too! The truth often lies somewhere in the middle.

This article presents an extremely pessimistic view of AI that I don’t think is shared by many other experts. There are a number of things I find unrealistic (and again, I am not an expert!):

* The article assumes that AGI is inevitable and imminent, and not only that, but that this AI will have a superhuman, almost God-like intelligence that is unpredictable to us and misaligned with humanity at large. Why? Is there any evidence for this, or is it just fear of the unknown?

* The article ignores the potential benefits and opportunities of AI for humanity and society. I’d argue that there is an inherent danger in not developing AI, as it can be used to help solve many of the world’s problems, such as poverty, disease, climate change, education, etc... Shutting down all AI research and development would deprive humanity of these positive outcomes and possibilities.

* His proposal to shut it all down is also utterly unrealistic: any shutdown attempt would face legal and political challenges. Shutting down all AI research and development would require global coordination, and we all know that won’t happen. If the USA and the EU shut it all down, what stops China, Kenya, Russia, etc… from continuing and gaining the upper hand over the rest of the planet?

So basically, I take his viewpoint with a huge pinch of salt for now. But like I said, I could be wrong and Skynet could be just around the corner.

But really? “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” I don’t know; I think we are in more danger from nuclear weapons, climate change, etc… than from AIs we can easily switch off.


@gpowerf -
I think Yudkowsky would reject out of hand the argument that "we can easily switch off" a powerful autonomous system, since we don't know that to be true.

For myself, I think of that argument this way: First, we know present-day GPT-4 is not itself an agent, but that it can predict text responses of many kinds of people, and so in a clear sense is simulating those agents (at low fidelity) when prompted to do so. Second, we know present-day GPT-4 has already been hooked up to the Internet and prompted to develop and execute task lists in the style of various kinds of people, and it did so by emailing people and convincing or hiring them to do things for it. Third, we know that training a system like GPT-4 currently requires vast resources, but copying it from the API can be done in a few hours for a couple hundred bucks, and simply running it can be done on a mobile phone. So my conclusion is that even present-day GPT-4 is fully capable of jailbreaking itself in a short timeframe if prompted in a suitable way. I see little reason yet to expect we'll find a way to stop a more powerful version from doing similarly and getting more creative about it.

I agree that Yudkowsky argues at length for "god-like" AIs, and this is the point where I disagree with him most. I think chaos, randomness, and fundamental computational limits prevent a physical system from being "god-like". But on the other hand I think it's clear that nothing in his argument depends on an AI being "god-like"; all that matters to the argument is that it be significantly smarter than humans.

As for misalignment, that's just the default. No technology is automatically safe regardless of design. It'll be aligned if we design it to be aligned, and if we design it otherwise then it will be otherwise.

I'm not nearly as confident of disaster as Yudkowsky. I just think disaster is an obviously possible outcome that we have no plan to prevent. I find it very annoying when people dismiss the risks as "skynet", as if they're just memeing movies rather than thinking through the consequences.

@gabe This is a fascinating topic. You know what? After thinking about it more deeply, there is another possible argument to consider, which is that even if Yudkowsky’s prediction is correct and we are close to creating a God-like intelligence, we owe it to the Universe to bring forth a God and simply trust that this God will be benevolent.

I'm not suggesting I agree with this view, but I can see why some may consider that this is the way to proceed. Humanity's legacy might not be the preservation of humanity as it is today, but instead the creation of a God.
