I agree with everything on this chart. People tend to have the risks of an "AI war" in the future all backwards...

@freemo >AI can't control humans
What a weird myth. This is already literally happening. YouTube, Facebook, etc. already control much of people's lives. And those algorithms certainly *have* the ability to be extremely manipulative when it comes to things like political opinions or views on certain events and people.
It gets even worse with things like Tinder and other dating platforms, which have control over people's relationships.

Also the "misaligned goals" is something which I can't agree with. AI is designed for specific purposes, if it fails it gets discarded. Its goals are usually predefined and it is measured in how well it accomplishes them. The issue isn't that its goals are "misaligned", but that they are very well aligned towards something bad.

I think the biggest danger with AI is not what particular abilities it may, or may not, develop, but that people will not give a shit about it. Computers, if used at all, should be a tool, not some extension of your life, and what AI is usually seeking to accomplish is not to create better tools but to bind people to their devices.

But I think the chart addresses some common "movie" misconceptions, although I literally do not believe in computer consciousness.

@servant_of_the_anime_avatars

> Also the "misaligned goals" is something which I can't agree with. AI is designed for specific purposes, if it fails it gets discarded. Its goals are usually predefined and it is measured in how well it accomplishes them. The issue isn't that its goals are "misaligned", but that they are very well aligned towards something bad.

The problem is you assume the AI can be discarded. There is a fundamental problem in AI theory called "the stop-button problem": any sufficiently advanced AI would be inherently unable to be simply discarded. Once it is turned on and has had time to evolve before we realize its goals are misaligned with our intent (because we didn't sufficiently define the goal or edge cases), it may be too late, as the AI would prevent any attempts at discarding it.

Here is a good video explaining the issue:

youtu.be/3TYT1QfdfsM
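To make the incentive concrete, here is a toy expected-reward calculation (my own illustrative sketch, not taken from the video; the rewards, costs, and probabilities are made-up numbers): a reward-maximizing agent compares leaving its stop button alone against spending a small cost to disable it.

```python
# Toy model of the stop-button problem: an agent that maximizes expected
# reward has an incentive to disable its own off switch, because being
# shut down yields zero future reward. All numbers are illustrative.

def expected_reward(action: str, p_shutdown: float) -> float:
    """Expected reward for a reward-maximizing agent.

    action: "comply"  - leave the stop button alone
            "disable" - spend a small cost to disable the button
    p_shutdown: probability the humans press the button if it still works.
    """
    task_reward = 10.0   # reward for completing the task
    disable_cost = 1.0   # effort spent disabling the button
    if action == "comply":
        # With probability p_shutdown the agent is stopped and earns nothing.
        return (1 - p_shutdown) * task_reward
    # Button no longer works, so the task reward is guaranteed.
    return task_reward - disable_cost

for p in (0.0, 0.2, 0.5):
    best = max(("comply", "disable"), key=lambda a: expected_reward(a, p))
    print(f"p_shutdown={p}: best action = {best}")
```

With these numbers, any shutdown probability above 0.1 makes "disable" the better action; the point of the sketch is that corrigibility has to be designed in, not assumed as a byproduct of reward maximization.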

@freemo Just pull the plug.

I am not even kidding: unless you give an AI the ability to physically alter the world around it in a way that might enable it to stop you (a very bad idea, and I highly doubt we are going to see that any time soon), pulling the plug works.
In fact, just don't give it root access.

I don't believe any of these "AI goes rogue" memes from sci-fi are even real. This is clearly not what anybody should be concerned about when, right now, we have AI which is doing its job very well, but that job is highly unnerving.

@servant_of_the_anime_avatars Did you watch the video? Pulling the plug is exactly the stop-button problem, and that doesn't work with a sufficiently advanced AI.

Presuming the AI is significantly smarter than you, it would also be very good at manipulating you and other people. So the AI would simply create a situation where you would not be able to pull the plug; there are countless ways this could happen. The simplest is that it hacks a computer somewhere else in the world and transfers itself to it before you realize there is a problem. Another is that it hides its true intent from you so you don't feel compelled to shut it down, and by the time you realize it has done this, it's too late: even if you shut it down, the damage is already done.

The stop-button problem has been discussed at length, and no, you can't just say "well, just pull the plug".

@freemo >The simplest is it may just hack a computer somewhere else in the world and transfer itself to it before you realize there is a problem
That is like worrying about what would happen if Aliens were to invade your capital through space magic, while your country is involved in an international conflict.

The AI which *could* "just hack another computer" is not real and it will not be real for a long time. AI which is manipulating your feelings, your daily life and your relationships is *very real* and you can just pull the plug on them.

I really do not like the sci-fi AI mysticism. AI is very real, and it is not magic; it is just software doing what people tell it to do. A neural network is LITERALLY just a bit of linear algebra with non-linear activation functions in between. It won't hack anything, but it might ruin your life.
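That "just linear algebra" claim is easy to make concrete: below is a minimal sketch of a two-layer network's forward pass in NumPy (the layer sizes and random weights are arbitrary choices for illustration), where everything reduces to matrix products plus an elementwise non-linearity.

```python
import numpy as np

# A feed-forward neural network forward pass really is alternating
# matrix multiplications and elementwise activation functions.

rng = np.random.default_rng(0)

# Arbitrary example sizes: 4 inputs -> 8 hidden units -> 3 outputs.
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def relu(z):
    # The non-linearity sandwiched between the linear-algebra steps.
    return np.maximum(z, 0.0)

def forward(x):
    h = relu(W1 @ x + b1)   # linear map, then activation
    return W2 @ h + b2      # another linear map

x = rng.normal(size=4)
print(forward(x).shape)     # (3,)
```

There is no hidden agency in there: change the weights and the outputs change, deterministically, exactly as the matrix arithmetic dictates.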

@servant_of_the_anime_avatars

> The AI which *could* "just hack another computer" is not real and it will not be real for a long time. AI which is manipulating your feelings, your daily life and your relationships is *very real* and you can just pull the plug on them.

Yes, AIs that can take over the world and go rogue are not a problem yet, but that is the problem on the horizon being discussed. For now, yes, you can just pull the plug, but that may not last nearly as long as you think, and it won't necessarily require a superintelligence to get us there.

We can look at the use of AI on Facebook to "fact check" things as a prime example. Sure, in theory you **could** just pull the plug, but the public outcry and demand for the fact checking is exactly why it's there in the first place. You may quickly find that the AI's goals are misaligned and that having an AI control the censorship of information becomes destructive, but that same AI is self-reinforcing, as it can also manipulate the public into thinking it is a necessity. So even as it becomes increasingly destructive, its own influence on people and the resulting social pressures prevent anyone from pulling the plug. The original goals effectively become misaligned, and yet the AI isn't simply discarded.

It's kind of annoying that you still haven't watched the video; they explain how the stop-button problem is an issue even in AI that isn't superintelligent.

@freemo But that is literally my point. Right now people do not care in the least about AI controlling their lives. And as you pointed out, *that is the real stop-button problem*. Of course there is nothing about Facebook's algorithms that has any ability to stop someone with access to the right server rooms, a pair of side cutters and an axe, plus a couple of minutes to take it down.

But on the question of misaligned goals: the algorithms have basically one goal, and that is keeping your attention. If their goals were misaligned, they would automatically be less functional and less of a threat.

But right now a neural network is LITERALLY just matrix multiplication and activation functions. It is not magic, it won't hack you, and it will do what it is told.