
I agree with everything on this chart. People tend to have the risks of a future "AI war" all backwards...

@freemo the main concern with AI in the near future is not the risk of superintelligence but the power that big corporations and governments can gain over citizens.

@miklo Possibly. I think we would need to discuss specific present-day examples to see whether I agree or disagree with those concerns (the vast majority I've heard strike me as overplayed; the bigger concerns seem to come from government rather than corporations).

@freemo @miklo superintelligence could be a problem, but AFAIK people are still mostly making toys, and I'm not aware of general-cognition AIs being worked on.

Being used for surveillance tech, yeah; isn't that basically what the EU meme ban and chatsecure are on about?

@icedquinn

Your use of the word "cognition" implies your concern is "AI turning conscious". An AI doesn't need to be conscious or have "cognition" to be a threat.

@miklo

@freemo @icedquinn "Stupid" AI is still a dangerous tool because it is almost always powered/trained by big data, which is available only to big players. Individuals and small businesses don't have access to that data, so they cannot build competitive AI tools. So even "stupid" AI is a big factor in increasing market and social inequality.

@freemo @icedquinn Because of that, we should all support every sort of open-source AI project where both the code and the data are fully available.

@freemo Sure, but if they can't perform goal-seeking and self-modifying behaviors, they can't become the oft-troped superintelligence.

What we have now is dangerous for very boring, traditional reasons.

@miklo

@icedquinn

I wouldn't say that's entirely true, though we haven't reached the point where it's like the movies yet.

Often we set rather simple goals for an AI, and its optimization can have consequences that are unique to AI and not in line with our intended goals.

One example: an AI might identify that most minorities have poor credit ratings. It may therefore assume minorities are untrustworthy loan recipients even when they have no credit history at all, and explicitly deny them loans based on their race. While that may be in line with the stated goal of "maximize selection of people likely to pay back loans," it has unintended effects that are not in line with our actual goals, which imply some sense of racial neutrality.
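
A minimal sketch of that failure mode, with entirely invented groups and numbers: a model trained only on credit-history length still splits approvals along group lines, because the proxy feature is correlated with group membership.

```python
# Hypothetical loan-approval sketch. Groups "A" and "B" and all numbers
# are invented for illustration; group membership is never an input feature.

# Simulated applicants: (group, years_of_credit_history, repaid_last_loan)
applicants = (
    [("A", y, True) for y in range(5, 15)]   # group A: long histories
    + [("B", y, True) for y in range(0, 5)]  # group B: short histories, same repayment record
)

# Optimizing purely for "predict repayment" yields a threshold on history
# length -- a proxy that happens to split along group lines.
THRESHOLD = 5

def approve(years_of_history):
    return years_of_history >= THRESHOLD

approval_rate = {}
for group in ("A", "B"):
    members = [a for a in applicants if a[0] == group]
    approved = [a for a in members if approve(a[1])]
    approval_rate[group] = len(approved) / len(members)

# Every applicant in both groups repaid, yet group B is fully denied.
print(approval_rate)  # {'A': 1.0, 'B': 0.0}
```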

@miklo

@freemo >AI can't control humans
What a weird myth. This is already literally happening: YouTube, Facebook, etc. already control much of people's lives. And those algorithms certainly *have* the ability to be extremely manipulative when it comes to things like political opinions or views on certain events and people.
It gets even worse with things like Tinder and other dating platforms, which have control over relationships.

Also, the "misaligned goals" point is something I can't agree with. AI is designed for specific purposes; if it fails, it gets discarded. Its goals are usually predefined, and it is measured by how well it accomplishes them. The issue isn't that its goals are "misaligned", but that they are very well aligned towards something bad.

I think the biggest danger with AI is not what particular abilities it may, or may not, develop, but that people will not give a shit about it. Computers, if used at all, should be a tool, not some extension of your life, and what AI is usually seeking to accomplish is not creating better tools but binding people to their devices.

But I think the chart addresses some common "movie" misconceptions, although I literally do not believe in computer consciousness.

@servant_of_the_anime_avatars

> Also, the "misaligned goals" point is something I can't agree with. AI is designed for specific purposes; if it fails, it gets discarded. Its goals are usually predefined, and it is measured by how well it accomplishes them. The issue isn't that its goals are "misaligned", but that they are very well aligned towards something bad.

The problem is that you assume the AI can be discarded. There is a fundamental problem in AI theory called "the stop-button problem": any sufficiently advanced AI would be inherently unable to be simply discarded. Once it is turned on and has had time to evolve before we realize its goals are misaligned with our intent (because we didn't sufficiently define the goal or the edge cases), it may be too late, since the AI would prevent any attempts at discarding it.

Here is a good video explaining the issue:

youtu.be/3TYT1QfdfsM

@freemo Just pull the plug.

I am not even kidding: you can, unless you give an AI the ability to physically alter the world around it in a way that might enable it to stop you (which is a very bad idea, and I highly doubt we are going to see it any time soon).
In fact, just don't give it root access.

I don't believe any of these "AI goes rogue" memes from scifi are even real. This is clearly not what anybody should be concerned about when right now we have AI which is doing its job very well, but that job is highly unnerving.

@servant_of_the_anime_avatars Did you watch the video? Pulling the plug is the same as the stop-button problem, and that doesn't work with sufficiently advanced AI.

Presuming the AI is significantly smarter than you, it would also be very good at manipulating you and other people. The AI would simply create a situation where you would not be able to pull the plug, and there are countless ways this may happen. The simplest is that it hacks a computer somewhere else in the world and transfers itself to it before you realize there is a problem. Another is that it hides its true intent from you so you don't feel compelled to shut it down, and by the time you realize what it has done, it's too late: even if you shut it down, the damage is already done.

The stop-button problem has been discussed at length, and no, you can't just say "well, just pull the plug".

@freemo >The simplest is it may just hack a computer somewhere else in the world and transfer itself to it before you realize there is a problem
That is like worrying about what would happen if Aliens were to invade your capital through space magic, while your country is involved in an international conflict.

The AI which *could* "just hack another computer" is not real, and it will not be real for a long time. AI which is manipulating your feelings, your daily life, and your relationships is *very real*, and you can just pull the plug on it.

I really do not like the sci-fi AI mysticism. AI is very real and it is not magic; it is just software doing what people tell it to do. A neural network is LITERALLY just a bit of linear algebra with non-linear activation functions in between. It won't hack anything, but it might ruin your life.

@servant_of_the_anime_avatars

> The AI which *could* "just hack another computer" is not real and it will not be real for a long time. AI which is manipulating your feelings, your daily life and your relationships is *very real* and you can just pull the plug on them.

Yes, AIs that can take over the world and go rogue are not a problem yet, but that is the problem on the horizon being discussed. For now you can just pull the plug, but that may not last nearly as long as you think, and it won't necessarily require a superintelligence to get us there.

We can look at the use of AI on Facebook to "fact check" things as a prime example. Sure, in theory you **could** just pull the plug, but the public outcry and demand for fact checking is exactly why it's there in the first place. You may quickly find that the AI's goals are misaligned and that having an AI control the censorship of information becomes destructive, yet that same AI is self-reinforcing, since it can also manipulate the public into thinking it is a necessity. So even as it becomes increasingly destructive, its own influence on people and the resulting social pressures prevent anyone from pulling the plug. The original goals effectively become misaligned, and yet the AI isn't simply discarded.

It's kind of annoying that you still haven't watched the video; it explains how the stop-button problem is an issue even in AI that isn't superintelligent.

@freemo But that is literally my point. Right now people do not care in the least about AI controlling their lives. And, as you pointed out, *that is the real stop-button problem*; of course there is nothing about Facebook's algorithms with any ability to stop someone who has access to the right server rooms, a pair of side cutters and an axe, plus a couple of minutes to take it down.

But on the question of misaligned goals: the algorithms have basically one goal, and that is keeping your attention. If their goals were misaligned, they would automatically be less functional and less of a threat.

But right now a neural network is LITERALLY just matrix multiplication and activation functions. It is not magic, it won't hack you, and it will do what it is told.
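
To make that concrete, here is a toy two-layer network written out as nothing but matrix multiplies and activation functions; the weights and input are arbitrary made-up numbers.

```python
import math

# A forward pass really is just (matrix multiply, non-linearity) repeated.
# Toy 2-2-1 network with invented weights, for illustration only.

def matmul(W, x):
    """Multiply matrix W (list of rows) by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

W1 = [[0.5, -1.0], [1.5, 2.0]]   # layer 1 weights (arbitrary)
W2 = [[1.0, -0.5]]               # layer 2 weights (arbitrary)

x = [1.0, 2.0]                   # input vector
h = relu(matmul(W1, x))          # hidden layer: W1 @ x, then ReLU -> [0.0, 5.5]
y = sigmoid(matmul(W2, h)[0])    # output: W2 @ h, then sigmoid
print(y)                         # a number strictly between 0 and 1
```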
@servant_of_the_anime_avatars @freemo oh, even simpler, my astute colleague:

rough, unintelligent "AI" has for years been doing things like administering insulin or morphine to patients

it's not really complicated to imagine someone building a burning-laser turret that burns any human in sight and can defend itself against threats like airplanes and bombs

it is science fiction now, but imagine a nanomachine that can make other nanomachines out of air; they replicate so quickly that the air becomes a sludge of nanomachines, all life is destroyed, and the ball is surrounded by an unbreathable goo.
@wikifarms @freemo >it's not really complicated to imagine someone building a burning-laser turret that burns any human in sight and can defend itself against threats like airplanes and bombs
Pull the plug.

> a nanomachine that can make other nanomachines out of air; they replicate so quickly that the air becomes a sludge of nanomachines, all life is destroyed, and the ball is surrounded by an unbreathable goo.
it is science fiction now, but imagine green glowing martians with an appetite for humans and epic laser guns!
@servant_of_the_anime_avatars @freemo oh my intelligent one

>pull the plug

say the turret can charge itself from the sun

>>comparing a machine with martians and laser guns

while "martians" would have to overcome an extremely long distance and are unlikely to show up, why can't some tiny machine one day in the future be able to rearrange the various substances in the air to create a copy of itself, and so on and so forth?
@wikifarms @freemo >say the turret can charge itself from the sun
Then its maximum energy output per unit time can't, on average, be more than the sunlight hitting its solar panels.
Also, read the Greeks; obviously the answer is the same as if you were fighting Medusa.
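
For what it's worth, that energy bound can be put into rough numbers; every value here is an illustrative assumption, not a measurement.

```python
# Back-of-the-envelope bound on a solar-powered turret's average power.
SOLAR_IRRADIANCE_W_PER_M2 = 1000  # approx. peak sunlight at Earth's surface
PANEL_AREA_M2 = 2.0               # assumed panel area on the turret
PANEL_EFFICIENCY = 0.20           # roughly typical for commercial panels

# Average sustained output can never exceed captured solar power.
max_average_power_w = SOLAR_IRRADIANCE_W_PER_M2 * PANEL_AREA_M2 * PANEL_EFFICIENCY
print(max_average_power_w)  # 400.0 watts, an upper bound on sustained output
```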

>why can't some tiny machine one day in the future be able to rearrange the various substances in the air to create a copy of itself and so on and so forth?
Why can't Earth explode tomorrow??!?
Worrying about potential doomsday scenarios makes no sense, especially when they are as far-fetched and as incoherent with everything we know about chemistry as your example.