
Anil Seth thinks "*Conscious AI Is a Bad, Bad Idea*" because
>*our minds haven’t evolved to deal with machines we believe have minds.*

On the contrary, I think we are *"genetically programmed"* to ascribe intent to anything that *"wants"* to communicate with us.

He also says:

>*Being intelligent—as humans think we are—may give us new ways of being conscious, and some forms of human and animal intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much species-level intelligence at all.*

If, as he says, "*intelligence is the capacity to do the right thing at the right time,*" any organism that has survived long enough to procreate must have some kind of intelligence, regardless of its consciousness.

Wrt "*basic conscious experiences such as pleasure and pain*," IMO they are conscious **only** if the organism is intelligent enough to suppress its innate, "*genetically programmed*" response to pain or pleasure in order to achieve some "higher goal," even if that goal goes against the original goal of survival.

The bottom line is that consciousness is **not** just a function of intelligence. Machines can become much smarter than us without becoming conscious.

In order to be really conscious, a machine would first have to have the experience of being and the desire to remain in that state, some awareness of and control over its internal and external states, and the ability to develop short- and long-term goals and to plan and execute complex, time-dependent actions to fulfill those goals.

Anything less than that is just a clever simulation.

nautil.us/why-conscious-ai-is-

@Kihbernetics Pages 23 and 24 of this paper: mural.maynoothuniversity.ie/10 contain a relevant thought experiment. It posits that one of the reasons we view things as conscious is that this is the easiest way to model their existence. When something has enough moving parts that we can't predict its behavior with an internal model of the system, it is better to view it as a unified agent. In this way, even if AIs aren't conscious, it may be better for us as humans to view them as such.

@Daniel_Van_Zant

Yes. Modeling is a relatively late (computational, representational) addition to the human predictive toolbox. We are better equipped to predict how things behave by comparing them with the one thing whose workings we know intimately (ourselves). So, if we see things behave "as we would in a similar situation," we assume they must be conscious like us.

The segregation of individual agents into classes helps to alleviate some of the complexity (dogs behave differently from birds or AI, etc.), but, again, trying to find out (model) "why" some agent behaves the way it does is time-consuming and has no obvious benefit for my survival if it does not show me how I can control or change that agent's behavior to suit my needs.

The only thing I can possibly do is consider the agent a "black box" and use a behavioral approach, as opposed to functional modeling.
