
A dynamical system with the ability to learn and adapt to its environment, or to change it, will need at most these three mechanisms:

1️⃣ The immediate, closed-loop control of state variables essential for preserving the stability of the system. This is a simple reaction of the system to a perturbation, like, for example, sweating when the core temperature of the body rises beyond some preset margin.

2️⃣ The control of the surrounding environment is used when 1️⃣ is overwhelmed and there is a need for the coordinated engagement of different lower-level regulators and the control of multiple time-dependent variables: for example, taking off layers of clothes, moving the body into the shade, or taking a cold shower until the temperature is back within limits.

3️⃣ The distal, long-term, open-loop control with delayed feedback is the highest form of control: for example, building a house with an HVAC system that removes the need for continuous proximal control (2️⃣) by creating a private, controlled environment.

All systems feature this 3-layered control architecture, the only difference being to what degree the activities on each level are the result of deliberation as opposed to natural, innate behavior.
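
The three layers can be sketched in code. Below is a minimal Python illustration using the body-temperature example from the posts above; all thresholds, action names, and the "engage a higher layer when the deviation exceeds its margin" rule are my own illustrative assumptions, not a model from the source.

```python
# A toy sketch of the 3-layered control architecture (illustrative values only).
SET_POINT = 37.0        # desired core temperature (deg C)
REFLEX_MARGIN = 0.5     # layer 1 reacts beyond this deviation
BEHAVIOR_MARGIN = 1.5   # layer 2 engages when layer 1 is overwhelmed

def layer1_reflex(temp):
    """Immediate closed-loop control: a direct response to a perturbation."""
    if temp > SET_POINT + REFLEX_MARGIN:
        return "sweat"
    if temp < SET_POINT - REFLEX_MARGIN:
        return "shiver"
    return None

def layer2_behavior(temp):
    """Control of the surrounding environment when the reflex is overwhelmed."""
    if temp > SET_POINT + BEHAVIOR_MARGIN:
        return "move into the shade, remove clothing"
    if temp < SET_POINT - BEHAVIOR_MARGIN:
        return "add clothing, seek shelter"
    return None

def layer3_anticipate(history):
    """Open-loop, long-term control with delayed feedback: change the
    environment itself so the lower layers are rarely needed."""
    if len(history) >= 3 and all(abs(t - SET_POINT) > BEHAVIOR_MARGIN
                                 for t in history[-3:]):
        return "build climate-controlled shelter"
    return None

def control(temp, history):
    # Collect whatever each layer contributes; lower layers always act first.
    actions = [layer1_reflex(temp), layer2_behavior(temp),
               layer3_anticipate(history)]
    return [a for a in actions if a is not None]
```

For a mild deviation only layer 1 fires (`control(36.0, [])` yields `["shiver"]`); a persistent large deviation engages all three layers.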

The thought experiment in this interesting 2019 article from Michael Lachmann and Sara Walker on the contrast between life and machines is not representative, because von Neumann's UCs are non-living and don't print *themselves* (only copies of themselves), so they can't grow and self-repair.

>Imagine you have built a sophisticated 3D printer called Alice, the first to be able to print itself. As with von Neumann's constructor, you supply it with information specifying its own plan, and a mechanism for copying that information: Alice is now a complete von Neumann constructor. Have you created new life on Earth?

aeon.co/essays/what-can-schrod

The difference lies in the fact that a UC "mechanism" is not operational until its production is fully finished, and any fault will most probably prevent the mechanism from working, while most living, growing "assemblies" can work and repair themselves while they are growing.

The bottom line is that life cannot be ***created***. It has to ***emerge*** from mechanical non-life.

People often "blame" Shannon's theory of communication for completely ignoring semantics, maybe also because Shannon himself stated that "*the semantic aspects of communication are irrelevant to the engineering aspects*" 😀

However, if one recognizes that the information content, as defined by the entropy, is the measure of a receiver's uncertainty about the sender's choice when producing the message, can it perhaps be interpreted that the receiver is trying to infer what the sender was trying to send?

The information the sender encodes in the message is never the *same* as the information the receiver decodes from it on the other side of the channel.

Below is Shannon's description of the standard scheme used for encoding and decoding the information in messages. The block diagrams are my rendering of the description (F is a function):
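
The "uncertainty of the receiver" reading above can be made concrete with Shannon's entropy formula. A minimal sketch (the probability values are my examples, not from the posts):

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)): the receiver's average
    uncertainty, in bits, about which message the sender selected."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin: maximum uncertainty over two possible messages.
print(entropy([0.5, 0.5]))   # 1.0 bit

# A biased source: the receiver is less uncertain beforehand,
# so each message resolves less uncertainty on average.
print(entropy([0.9, 0.1]))   # ~0.469 bits

# A source that always sends the same message carries no information.
print(entropy([1.0]))        # 0.0 bits
```

Note that nothing here touches meaning: the measure depends only on the statistics of the source, which is exactly the "engineering aspect" Shannon restricted himself to.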

Consciousness as "deliberate and intentional" behavior should be easy to explain, but only by the conscious agent itself. There is no way an outside observer can identify whether an agent's behavior is conscious or not.

>What seems “obvious and undeniable” may **not** be. It may seem “obvious and undeniable” to someone interacting with ChatGPT that it is communicating with a conscious agent, but that assumption would be flawed.

bigthink.com/the-well/human-co

@Daniel_Van_Zant

Yes. Modeling is a relatively late (computational, representational) addition to the human predictive toolbox. We are better equipped to predict how things behave by comparing them with the one thing whose workings we intimately know (ourselves). So, if we see things behave "as we would in a similar situation," we conclude they must be conscious like us.

The segregation of individual agents into classes helps to alleviate some of the complexity (dogs behave differently than birds or AI, etc.), but, again, trying to find out (model) "why" some agent behaves the way it does is time-consuming and has no obvious benefit for my survival if it does not show me how I can control or change that agent's behavior to suit my needs.

The only thing I can possibly do is consider the agent a "black box" and use a behavioral approach, as opposed to functional modeling.

Anil Seth thinks "*Conscious AI Is a Bad, Bad Idea*" because
>*our minds haven’t evolved to deal with machines we believe have [feelings].*

On the contrary, I think we are *"genetically programmed"* to ascribe intent to anything that *"wants"* to communicate with us.

He is also saying that:

>*Being intelligent—as humans think we are—may give us new ways of being conscious, and some forms of human and animal [intelligence] may require consciousness, but basic conscious experiences such as pleasure and pain might not require much species-level intelligence at all.*

If, as he says, "*intelligence is the capacity to do the right thing at the right time,*" any organism that has survived long enough to procreate must have some kind of intelligence, regardless of its consciousness.

Wrt "*basic conscious experiences such as pleasure and pain*," IMO they are conscious **only** if the organism is intelligent enough to suppress the innate, "*genetically programmed*" response to pain or pleasure in order to achieve some "higher goal," even if it goes against the original goal of survival.

The bottom line is that consciousness is **not** just a function of intelligence. Machines can become much smarter than us without becoming conscious.

In order to be really conscious, a machine would first need to have the experience of being and the desire to remain in that state, some awareness of and control over its internal and external states, and the ability to develop short- and long-term goals and to plan and execute complex time-dependent actions to fulfill those goals.

Anything less than that is just a clever simulation.

nautil.us/why-conscious-ai-is-

The purpose of design is to create new and/or different structures (artifacts), so speaking of design makes sense only in the context of other creative activities such as writing, painting, engineering, manufacturing, etc.

Klaus Krippendorff has a nice description of the difference between science and design, and of their relationship to Gibson's affordances, in this 2007 paper published in "Kybernetes":

researchgate.net/publication/4

However, he is wrong, IMO, in accentuating the difference between scientists and designers.

Every designer is often a scientist in "describing what can be observed" and every scientist also has to design new hypotheses, theories, and experiments for the "not yet observable and measurable".

@jkanev

Sure, safety and security are important, but they must **follow** the research, not define it.
Machines can cause harm only when in operation.

IMO the best (only) way to assure security and safety is confining AI to the language (consulting) domain, preventing it from having too much agency, such as "pushing buttons".

Also, if it becomes too smart it is useless to us, and I'm sure we'll find a way to "dumb it down".

The truth is that intelligence is never a precondition for getting into a position of power. Quite the opposite.

Some wise words from John Dewey, written back in 1934:

A few "gems" from Ashby's "Design for a Brain" on the "accumulation of adaptations":

>"A compound event that is impossible if the components have to occur simultaneously may be readily achievable if they can occur in sequence or independently...
>Thus, for the accumulation of adaptations to be possible, **the system must not be fully joined**.

>The idea so often implicit in physiological writings, that all will be well if only sufficient cross-connexions are available, is, in this context, quite wrong."

I recommend reading the whole book ($20 on Amazon) but if not, here is a good overview of some interesting parts:

panarchy.org/ashby/adaptation.


Ashby, in his "Design for a Brain", writes about the importance of the accumulation of adaptations. Following his ideas, I've made this little experiment using a LibreOffice Calc spreadsheet that shows three different scenarios:

When re-tossing all of the 10 coins every time, as in the first case, there is no preservation of "1s" whatsoever. Every new toss starts from scratch.

In the second case, each coin is tossed separately until it shows "1", and the tosser then moves on to the next coin until all 10 show "1", which usually happens around the 10th tossing.

In case #3, only the coins still showing "0" from the previous toss are re-thrown until all coins show "1". This is by far the most efficient form of preservation, needing less than half the time and finishing in about 4 tosses.
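
The spreadsheet itself isn't shown, so here is a small Python simulation of the three strategies as I read them; note the units differ (case 1 and 3 count rounds, case 2 counts individual tosses). In this simulation case 1 averages about 2^10 ≈ 1024 rounds, case 2 about 20 tosses, and case 3 about 4–5 rounds:

```python
import random

def strategy_all(n=10):
    """Case 1: re-toss all n coins every round until all show 1 at once."""
    rounds = 0
    while True:
        rounds += 1
        if all(random.randint(0, 1) for _ in range(n)):
            return rounds

def strategy_one_by_one(n=10):
    """Case 2: toss each coin separately until it shows 1, then move on."""
    tosses = 0
    for _ in range(n):
        while True:
            tosses += 1
            if random.randint(0, 1):
                break
    return tosses

def strategy_keep_ones(n=10):
    """Case 3: each round, re-toss only the coins still showing 0."""
    rounds, remaining = 0, n
    while remaining:
        rounds += 1
        remaining = sum(1 for _ in range(remaining) if random.randint(0, 1) == 0)
    return rounds

def average(strategy, trials=2000):
    return sum(strategy() for _ in range(trials)) / trials
```

The gap between case 1 and case 3 (roughly 1024 rounds vs. 4–5) is Ashby's point in numbers: without preserving partial adaptations, the compound event is practically unreachable.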

Our brains evolved as control mechanisms for the body, to ensure its survival. That's the reason why AI is beating humans primarily in areas requiring computation and abstract reasoning: we only recently added those to our tool repertoire and haven't had much time to perfect them, unlike our ancient sensory-motor control tools.

>“Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually **unconscious, sensorimotor knowledge**. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”

Moravec, H. Mind Children: The Future of Robot and Human Intelligence. (Harvard University Press, 1988).

as reported in:

nature.com/articles/s41467-019

@jkanev

With the difference that, in this case, it is the ape that created the machine "in its likeness" and the machine runs the risk of experiencing the ape's wrath if it misbehaves.
You know how vengeful apes are.😉

There is a lot of talk against "hierarchical thinking" and planning in organizations, and how the only "good" hierarchy is a "flat" one.
The above diagram shows how a hierarchy is a natural effect of folding the linear "information chain" to match the physical structure of the system.

For nearly 4 decades in organizational change management, I've been using this idea of layers emerging from the folding of a sequential "information processing" chain, and I just found that a whole area in biology deals with this exciting topic. 👇

sciencedirect.com/science/arti


>"An algorithm solves a problem only if it produces the correct output for every possible input — if it fails even once, it’s not a general-purpose algorithm for that problem."

quantamagazine.org/alan-turing

Most commenters do not realize that no "information processing" (operations on symbol sequences) of any kind is necessary for an agent to have control over its internal states and surroundings.
Think of a comparator, such as the one in PCT (Perceptual Control Theory).
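
A comparator needs no symbols at all, just a continuous error signal. Below is a minimal sketch in the spirit of a PCT control unit; the gain, drift model, and numbers are my illustrative assumptions, not PCT canon:

```python
def control_step(perception, reference, gain=0.5):
    """A comparator: output is proportional to the error between the
    reference (what the agent wants to perceive) and the current
    perception. No symbol sequences are manipulated anywhere."""
    return gain * (reference - perception)

# The agent tries to keep a perceived quantity at 20 while the
# environment continuously pulls it toward 15.
perception = 15.0
reference = 20.0
for _ in range(50):
    action = control_step(perception, reference)
    # Environment: drifts toward 15; the agent's action pushes back.
    perception += action + 0.1 * (15.0 - perception)

print(round(perception, 2))  # → 19.17
```

The controlled variable settles near the reference (not at the drift target), with the small residual offset typical of purely proportional control; the point is that the loop "computes" nothing symbolic, it just keeps reducing an error.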


>Anthropomorphizing image generators and describing them as merely being “inspired” by their training data, like artists are inspired by other artists, is not only misguided but also harmful. Ascribing [inspiration] to image generators diminishes the complexity of human creativity, robs artists of credit (and in many cases compensation), and transfers [responsibility] from the organizations creating image generators, and the practices of these organizations which should be scrutinized, to the image generators themselves.

dl.acm.org/doi/abs/10.1145/360

The premise of this article is solid: the brain evolved first and foremost as a control mechanism. Symbolic "information processing" is a later development.

However, just from reading the reactions in the comments section, one can easily see that computationalism is still very much the mainstream theory of mind.

aeon.co/essays/your-brain-does

The 20th century is said to have been the "age of the machine" because all explanations of how things work would end in some kind of computing or "information processing" by a known mechanism.

Some people think that the 21st century will be the "age of biology" because science seems to be starting to look at nature and the living for inspiration about how things really work.

If this is true, then computationalism must be one of the last remnants of the past century.

>The twenty-first century is the Century of Biology *(Brown, A. The Futurists: September-October 2008)*. Just as the twentieth century looked to machines, the twenty-first century is looking to biology to inform how we think, organize, design, and lead our organizations.

Allen, Kathleen E.. Leading from the Roots: Nature-Inspired Leadership Lessons for Today's World (p. 20). Morgan James Publishing. Kindle Edition.


>"Given that organizations are filled with human beings, it doesn’t take a huge leap of faith to believe that a living system would emerge from all the life that shows up every day"

kathleenallen.net/works/
