A dynamical system with #memory that is able to learn and adapt to its environment, or to change it, will need at most these three #control mechanisms:
1️⃣ The #internal, immediate control (#regulation) of state variables essential for preserving the stability or #homeostasis of the system. This is a simple #reaction of the system to a perturbation, for example sweating when the body's core temperature rises beyond some preset margin.
2️⃣ The #proximal control of the surrounding environment is used when 1️⃣ is overwhelmed and there is a need for the coordinated engagement of different lower-level regulators, with #measurement (tracking) and negative #feedback control of multiple time-dependent variables, as when taking off layers of clothing, moving the body into the shade, or taking a cold shower until the temperature is back within limits.
3️⃣ The #distal, long-term, open-loop control with delayed feedback is the highest form of control, for example building a house with an HVAC system that removes the need for continuous employment of proximal control (2️⃣) by creating a private, controlled environment.
All #living systems feature this 3-layered control architecture; they differ only in the degree to which the activities on each level are the result of #conscious deliberation as opposed to natural, innate behavior.
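To make the three layers concrete, here is a toy Python sketch of the thermoregulation example used above. Every name, threshold, and constant is an illustrative assumption of mine, not a physiological value:

```python
# Toy model of the three control layers; all constants are illustrative.

SETPOINT = 37.0   # desired core temperature (deg C)
MARGIN = 0.5      # layer-1 tolerance before the sweating reflex kicks in

def layer1_regulate(core: float) -> float:
    """1: internal regulation (reflex): sweating bleeds off a little heat."""
    return core - 0.2 if core > SETPOINT + MARGIN else core

def layer2_proximal(core: float, ambient: float) -> float:
    """2: proximal control: act on the surroundings (shade, clothing, shower)."""
    return ambient - 5.0 if core > SETPOINT + 2 * MARGIN else ambient

def layer3_distal(ambient: float, engaged: int, hvac: bool) -> tuple[float, bool]:
    """3: distal, open-loop control: after proximal control has been engaged
    repeatedly, invest in the long-term project (an HVAC'd house) that
    removes the need for it by creating a private controlled environment."""
    if not hvac and engaged >= 3:
        hvac = True
    return (25.0, hvac) if hvac else (ambient, hvac)

core, ambient, hvac, engaged = 37.0, 42.0, False, 0
for step in range(15):
    if not hvac:
        ambient += 2.0                    # a hot climate keeps pushing back
    core += 0.3 * (ambient - core)        # the body tracks its environment
    core = layer1_regulate(core)
    cooled = layer2_proximal(core, ambient)
    engaged += cooled != ambient          # count layer-2 interventions
    ambient, hvac = layer3_distal(cooled, engaged, hvac)
    print(f"step {step:2d}: core={core:.2f} ambient={ambient:.1f} hvac={hvac}")
```

Running it shows the story in miniature: the reflex alone cannot keep up, proximal interventions fight the heat for a few steps, and once they have been engaged often enough the distal project takes over and the lower layers go quiet.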
The thought experiment in this interesting 2019 article from Michael Lachmann and Sara Walker on the contrast between #life and #living is not representative, because von Neumann's universal constructors (UCs) are non-#autopoietic: they don't print *themselves*, so they can't #grow and #evolve.
>Imagine you have built a sophisticated 3D printer called Alice, the first to be able to print itself. As with von Neumann's constructor, you supply it with information specifying its own plan, and a mechanism for copying that information: Alice is now a complete von Neumann constructor. Have you created new life on Earth?
https://aeon.co/essays/what-can-schrodingers-cat-say-about-3d-printers-on-mars
The difference lies in the fact that UC "mechanisms" are not operational until their production is fully finished, and any #error will most probably prevent the mechanism from working, while most living, growing "assemblies" can work and repair themselves while they are still growing.
The bottom line is that life cannot be ***created***. It has to ***emerge*** from mechanical non-life.
People often "blame" Shannon's theory of #communication for completely ignoring #meaning, maybe also because Shannon himself stated that "*the semantic aspects of communication are irrelevant to the engineering problem*"😀
However, if one recognizes that #information content, as defined by #entropy, is a measure of the receiver's #uncertainty about the sender's #state when producing the message, can it not be interpreted as the receiver trying to #understand what the sender was #meaning to send?
The information the sender encodes in a message is never the *same* as the information the receiver decodes from it on the other side of the channel.
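As a back-of-the-envelope illustration in Python (the probabilities are made up for the example): if the receiver's model of the sender admits three possible states with probabilities 1/2, 1/4, and 1/4, the entropy works out to 1.5 bits of uncertainty per message.

```python
from math import log2

# Hypothetical distribution over the sender's possible states.
p = {"A": 0.5, "B": 0.25, "C": 0.25}

# Shannon entropy H = -sum(p * log2 p): the receiver's average
# uncertainty (in bits) about which state produced the message.
H = -sum(q * log2(q) for q in p.values())
print(f"H = {H} bits")  # 0.5*1 + 0.25*2 + 0.25*2 = 1.5
```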
Below is Shannon's description of the standard #transducer used for encoding and decoding the information in messages. The block diagrams are my rendering of the description (F is a "#memory" function):
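For those who prefer code to diagrams, here is a minimal Python transcription of that description: the transducer produces output y_n = f(x_n, α_n) and moves to next state α_{n+1} = g(x_n, α_n), where the state α is the only "memory" the device has of past inputs. The concrete f and g below (a differential encoder) are my illustrative choice, not Shannon's:

```python
from typing import Callable, Iterable, Iterator

def transducer(f: Callable, g: Callable, alpha0, xs: Iterable) -> Iterator:
    """Shannon's transducer: y_n = f(x_n, alpha_n), alpha_{n+1} = g(x_n, alpha_n).
    The state alpha carries the device's memory of past inputs."""
    alpha = alpha0
    for x in xs:
        yield f(x, alpha)      # output depends on current input and state
        alpha = g(x, alpha)    # state update carries memory forward

# Illustrative f and g: a differential encoder over bits
# (output = current input XOR previous input; state = last input).
f = lambda x, a: x ^ a
g = lambda x, a: x
print(list(transducer(f, g, 0, [1, 1, 0, 1, 0, 0])))  # -> [1, 0, 1, 1, 1, 0]
```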
#Consciousness as "deliberate #thinking and #cognition" should be easy to explain, but only by the conscious agent itself. There is no way an outside #observer can identify whether an agent's behavior is conscious or not.
>#Intuition — what seems “#obvious and #undeniable” — may **not** be #trustworthy. It may seem “obvious and undeniable” to someone interacting with ChatGPT that it is communicating with a conscious agent, but that assumption would be flawed.
https://bigthink.com/the-well/human-consciousness-womb-after-birth/
Yes. Modeling is a relatively late (computational, representational) addition to the human predictive toolbox. We are better equipped to predict how things behave by comparing them with the one thing whose workings we intimately know (ourselves). So, if we see things behave "as we would in a similar situation", they must be conscious like us.
The segregation of individual agents into classes helps to alleviate some of the complexity (dogs behave differently than birds or AI, etc.) but, again, trying to find out (model) "why" some agent behaves the way it does is time-consuming and has no obvious benefit for my survival if it does not show me how I can control or change that agent's behavior to suit my needs.
The only thing I can possibly do is consider the agent a "black box" and use a behavioral approach, as opposed to functional modeling.
Anil Seth thinks "*Conscious AI Is a Bad, Bad Idea*" because
>*our minds haven’t evolved to deal with machines we believe have #consciousness.*
On the contrary, I think we are *"genetically programmed"* to ascribe intent to anything that *"wants"* to communicate with us.
He is also saying that:
>*Being intelligent—as humans think we are—may give us new ways of being conscious, and some forms of human and animal #intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much species-level intelligence at all.*
If, as he says, "*intelligence is the capacity to do the right thing at the right time,*" any organism that has survived long enough to procreate must have some kind of intelligence, regardless of its consciousness.
Wrt "*basic conscious experiences such as pleasure and pain*," IMO they are conscious **only** if the organism is intelligent enough to suppress the innate, "*genetically programmed*" response to pain or pleasure in order to achieve some "higher goal," even if that goes against the original goal of survival.
The bottom line is that consciousness is **not** just a function of intelligence. Machines can become much smarter than us without becoming conscious.
In order to be really #conscious, a machine would first need the experience of being #alive and the desire to remain in that state, some #agency and #control over its internal and external states, and the ability to develop short- and long-term goals and to plan and execute complex time-dependent actions to fulfill those goals.
Anything less than that is just a clever simulation.
https://nautil.us/why-conscious-ai-is-a-bad-bad-idea-302937/
The purpose of #design is to #create new and/or different #artificial structures (artifacts), so speaking of design makes sense only in the context of other creative #production activities such as writing, painting, engineering, manufacturing, etc.
Klaus Krippendorff has a nice description of the difference between #object and #artifact and the relationship to Gibson's #affordances in this 2007 paper published in "Kybernetes":
https://researchgate.net/publication/45597493_The_Cybernetics_of_Design_and_the_Design_of_Cybernetic
However, he is wrong, IMO, in accentuating the difference between #scientists and #designers.
Every designer is often a scientist, "describing what can be observed," and every scientist also has to design new hypotheses, theories, and experiments for the "not yet observable and measurable."
Sure, safety and security are important, but they must **follow** the research, not define it.
Machines can cause harm only when in operation.
IMO the best (only) way to assure security and safety is to confine #AI to the language (consulting) domain, preventing it from having too much agency, such as "pushing buttons".
Also, if it becomes too smart, it becomes useless to us, and I'm sure we'll find a way to "dumb it down".
The truth is that intelligence is never a precondition for getting into a position of power. Quite the opposite.
Some wise words from John Dewey about #Intelligence and #Power written back in 1934:
A few "gems" from #WR_Ashby on the "accumulation of adaptations":
>"A compound event that is impossible if the components have to occur simultaneously may be readily achievable if they can occur in sequence or independently...
>Thus, for the accumulation of adaptations to be possible, **the system must not be fully joined**.
>The idea so often implicit in physiological writings, that all will be well if only sufficient cross-connexions are available, is, in this context, quite wrong."
I recommend reading the whole book ($20 on Amazon), but if not, here is a good overview of some interesting parts:
#WR_Ashby, in his "Design for a Brain", writes about the importance of the #preservation of #adaptations. Following his ideas, I've made a little experiment using a LibreOffice Calc spreadsheet that shows three different scenarios:
In the first case, all 10 coins are re-tossed every time, so there is no preservation of "1s" whatsoever. Every new toss starts from scratch.
In the second case, each coin is tossed separately until it shows "1", after which the tosser moves on to the next coin, until all 10 show "1", which usually happens around the 10th tossing.
In case #3, only the remaining "0s" of the previous toss are re-thrown until all coins show a "1". This is by far the most efficient form of preservation, needing less than half the time and ending in about 4 tosses.
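For anyone without the spreadsheet at hand, here is a small Python re-creation of the three scenarios. The exact counts depend on how you tally a "tossing", so treat the numbers as indicative:

```python
import random

def case1(n=10):
    """Re-toss all n coins every round until all show 1 at once."""
    rounds = 0
    while True:
        rounds += 1
        if all(random.getrandbits(1) for _ in range(n)):
            return rounds

def case2(n=10):
    """Toss one coin at a time until it shows 1, then move to the next."""
    tosses = 0
    for _ in range(n):
        while True:
            tosses += 1
            if random.getrandbits(1):
                break
    return tosses

def case3(n=10):
    """Each round, re-toss only the coins still showing 0."""
    rounds, zeros = 0, n
    while zeros:
        rounds += 1
        zeros -= sum(random.getrandbits(1) for _ in range(zeros))
    return rounds

trials = 1_000
for case in (case1, case2, case3):
    avg = sum(case() for _ in range(trials)) / trials
    print(f"{case.__name__}: {avg:.1f} on average")
# case1 averages near 2**10 rounds; case3 finishes in only 4-5,
# which is Ashby's point: a not-fully-joined system accumulates adaptations.
```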
Our brains evolved as control mechanisms for the body, to ensure its survival. That's the reason why #AI is beating humans primarily in areas requiring computation and abstract reasoning: we only recently added those to our tool repertoire and haven't had as much time to perfect them as we have our ancient sensory-motor control tools.
>“Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually **unconscious, sensorimotor knowledge**. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”
Moravec, H. Mind Children: The Future of Robot and Human Intelligence. (Harvard University Press, 1988).
as reported in:
With the difference that, in this case, it is the ape that created the machine "in its likeness" and the machine runs the risk of experiencing the ape's wrath if it misbehaves.
You know how vengeful apes are.😉
There is a lot of talk against "#linear thinking" and planning in #complexity, and about how the only "good" hierarchy is a "flat" one.
The above diagram shows how a #hierarchy is a natural effect of folding the linear "information #flow" to match the physical structure of the system.
For nearly 4 decades in organizational change management, I've been using this idea of #hierarchy layers emerging from the #folding of a sequential "information processing" #flow, and I just found that a whole area of biology deals with this exciting topic.👇
https://www.sciencedirect.com/science/article/pii/S1877050922017811
>"An algorithm solves a problem only if it produces the correct output for every possible input — if it fails even once, it’s not a general-purpose algorithm for that problem."
https://www.quantamagazine.org/alan-turing-and-the-power-of-negative-thinking-20230905/
Most commenters do not realize that no "information processing" (#computation on symbol sequences) of any kind is necessary for an agent to have #control over its internal #states and surroundings.
Think of a #homeostat or comparator such as in #PCT (Perceptual Control Theory).
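A minimal sketch of such a comparator loop (the gain, disturbance, and dynamics are my illustrative assumptions): the unit subtracts its perception from a reference and feeds the error back out, and the perceived variable settles at the reference without a single symbol being processed.

```python
# PCT-style control unit: a bare comparator in a feedback loop.
# Gain, disturbance, and loop dynamics are illustrative assumptions.

reference = 10.0     # the state the agent "wants" to perceive
disturbance = -3.0   # external push, never represented inside the agent
perception = 0.0
output = 0.0

for step in range(50):
    error = reference - perception     # the comparator: the only "computation"
    output += 0.2 * error              # integrate output to shrink the error
    perception = output + disturbance  # feedback through the environment

print(f"perception = {perception:.2f} (reference = {reference})")
# converges to 10.00: control without any symbol manipulation
```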
>Anthropomorphizing image generators and describing them as merely being “inspired” by their training data, like artists are inspired by other artists, is not only misguided but also harmful. Ascribing #agency to image generators diminishes the complexity of human creativity, robs artists of credit (and in many cases compensation), and transfers #accountability from the organizations creating image generators, and the practices of these organizations which should be scrutinized, to the image generators themselves
The premise of this article is solid. The brain evolved first and foremost as a #control mechanism. Symbolic "information processing" is a later development.
However, just from reading the reactions in the comments section, one can easily see that #computationalism is still very much the mainstream theory of mind.
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
The 20th century is said to have been the "age of #machines" because all explanations of how things work would end in some kind of computing or "information processing" by a known #mechanism.
Some people think that the 21st century will be the "age of #biology" because science seems to be starting to look at nature and living #organisms for inspiration about how things really work.
If this is true then #Computationalism must be one of the last remnants of the past century.
>The twenty-first century is the Century of Biology *(Brown, A. The Futurists: September-October 2008)*. Just as the twentieth century looked to machines, the twenty-first century is looking to biology to inform how we think, organize, design, and lead our organizations.
Allen, Kathleen E. Leading from the Roots: Nature-Inspired Leadership Lessons for Today's World (p. 20). Morgan James Publishing. Kindle Edition.
>"Given that organizations are filled with human beings, it doesn’t take a huge leap of faith to believe that a living system would emerge from all the life that shows up every day"
#Kihbernetics is the study of #Complex #Dynamical #Systems with #Memory, which is quite different from other #SystemsThinking approaches. Kihbernetic theory and principles are derived primarily from these three sources:
1️⃣ #CE_Shannon's theory of #Information and his description of a #Transducer,
2️⃣ #WR_Ashby's #Cybernetics and his concept of #Transformation, and
3️⃣ #HR_Maturana's theory of #Autopoiesis and the resulting #Constructivism.
Although equally applicable to any dynamical system with memory (mechanisms, organisms, or organizations), the Kihbernetic worldview originated from my work helping organizations navigate times of #change.