Anil Seth thinks "*Conscious AI Is a Bad, Bad Idea*" because
>*our minds haven’t evolved to deal with machines we believe have #consciousness.*
On the contrary, I think we are *"genetically programmed"* to ascribe intent to anything that *"wants"* to communicate with us.
He also says:
>*Being intelligent—as humans think we are—may give us new ways of being conscious, and some forms of human and animal #intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much species-level intelligence at all.*
If, as he says, "*intelligence is the capacity to do the right thing at the right time,*" any organism that has survived long enough to procreate must have some kind of intelligence, regardless of its consciousness.
Wrt "*basic conscious experiences such as pleasure and pain*," IMO they are conscious **only** if the organism is intelligent enough to suppress an innate, "*genetically programmed*" response to pain or pleasure in order to achieve some "higher goal," even when that goal conflicts with the original goal of survival.
The bottom line is that consciousness is **not** just a function of intelligence. Machines can become much smarter than us without becoming conscious.
To be really #conscious, a machine would first need the experience of being #alive and the desire to remain in that state, some #agency and #control over its internal and external states, and the ability to develop short- and long-term goals and to plan and execute complex, time-dependent actions to fulfill them.
Anything less than that is just a clever simulation.
https://nautil.us/why-conscious-ai-is-a-bad-bad-idea-302937/
Yes. Modeling is a relatively late (computational, representational) addition to the human predictive toolbox. We are better equipped to predict how things behave by comparing them with the one thing whose workings we intimately know (ourselves). So, if we see things behave "as we would in a similar situation," we conclude they must be conscious like us.
The segregation of individual agents into classes helps to alleviate some of the complexity (dogs behave differently than birds or AI, etc.), but, again, trying to find out (model) "why" an agent behaves the way it does is time-consuming and has no obvious benefit for my survival if it does not show me how I can control or change that agent's behavior to suit my needs.
The only thing I can practically do is treat the agent as a "black box" and use a behavioral approach, as opposed to functional modeling.