>I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew in. And thousands of other people at the airport and on the plane, any of which could have attacked me. And all the people that prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning.
https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html
"A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in #AI innovations. This would be a counter-balance to corporate-owned AI." #trust https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html
This is Solutionism: The Influence of Zuckerberg and Musk on the Global Digital Economy
A new study by the economic #sociologist Oliver #Nachtwey of the University of Basel and his colleague Timo #Seidl of the University of #Vienna examines the influence of Mark #Zuckerberg's and Elon #Musk's ideas on the modern digital economy.
The #researchers analyzed #speeches, #books, and #articles coming out of #SiliconValley, revealing a new spirit of digital #capitalism.
1️⃣ Kihbernetic #System with
2️⃣ fundamental #Processes: a recursive #Autopoietic self-production for growth and learning, and a linear #Allopoietic production of "other things", such as behavior and waste, distributed in
3️⃣ Control #Levels: #Regulation, immersed in and dealing with things in the system's environment; #Control, for managing the workload of different regulators; and #Guidance, to provide long-term goals and preserve the identity of the system, all using
4️⃣ #Variables: sensory #Input of data and other resources, motor #Output of behavior, #Information as the difference that will make a difference in the subsequent (updated) #Knowledge state, all interconnecting
5️⃣ #Functions: the #Control-ed #Reaction to external stimuli, the #Perception of sensory states, the #Prediction of the expected outcome of past behavior, and the repeated #Integration of new information into an updated knowledge state.
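To make items 4️⃣ and 5️⃣ a little more concrete, here is a minimal Python sketch of one pass through such a loop. All class, method, and variable names (and the toy update rule) are my own illustrative choices, not part of the Kihbernetic framework itself.

```python
from dataclasses import dataclass, field


@dataclass
class KihberneticSketch:
    """Hypothetical sketch: a system with a knowledge state and the
    functions connecting sensory input to motor output."""
    knowledge: dict = field(default_factory=dict)  # current #Knowledge state

    def perception(self, sensory_input):
        # #Perception of the current sensory state
        return {"percept": sensory_input}

    def prediction(self):
        # #Prediction of the expected outcome of past behavior
        return self.knowledge.get("expected")

    def integration(self, percept, predicted):
        # #Information = the difference that makes a difference;
        # only such a difference updates the #Knowledge state
        if percept != predicted:
            self.knowledge["expected"] = percept

    def reaction(self, stimulus):
        # controlled #Reaction to the external stimulus -> motor #Output
        return self.knowledge.get(stimulus, "default_response")

    def step(self, sensory_input):
        """One pass: #Input -> Perception -> Prediction -> Integration -> Reaction."""
        percept = self.perception(sensory_input)
        self.integration(percept, self.prediction())
        return self.reaction(sensory_input)


system = KihberneticSketch()
print(system.step("temperature_rising"))  # -> "default_response" until learned
```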
>"#N_Wiener’s preoccupation with mathematics, echoed in #WR_Ashby’s commitment to mechanical models and #Hv_Foerster’s conception of formal descriptions, largely excluded social phenomena in which cybernetics was practiced."
#K_Krippendorff's last paper
All #learning must be open-ended. The learning agent (the #observer) must have the #autonomy to set its own learning goals as well as plan and execute a #sequence of #exploration activities to achieve these goals.
One can never learn *all existing data*; one can only refine one's understanding of the data that is available. As is true for human intelligence, you can either have "deep and narrow" specialized #AI agents or "average and broad" #AGI. You can't have both in the same entity. Time and #memory "limitations" are the main inspirations for #diversity and #cooperation between learning agents.
People should have figured out by now that the #distribution of processing power, not its #centralization in gargantuan data and control centers, is the right thing to do.
Stop working on LLMs (Large Language Models) and start working on PCAs (Personal Customizable Assistants).
The current craze over *social media ruining democracy* and *AI posing an existential threat to humanity* stems from the fact that most people don't understand that the only thing #technology is able to do is *amplify* their own capacity to do good or bad.
The Internet and AI are communication and intelligence #amplifiers, the same way motors and servo mechanisms are amplifiers of our muscle power.
#Consciousness, like #awareness, is not a *thing*. It is a #state or property of the part of an entity's #cognition (which *is* a thing) that the entity may *be* conscious or aware of ... or *not*.
Furthermore, consciousness is a #binary proposition. One can be either conscious of something or not. You can't be a *little bit conscious* the same way you can't be a *little bit alive*.
I read a sample of Robert M. Sapolsky's new book *Determined: A Science of Life without Free Will* on Amazon, and I really don't see why some people find it "revolutionary". I find it full of half-baked contradictory claims that don't hold water even under quick superficial scrutiny like this.
Brains don't generate behaviors. They #produce motor responses to sensory stimuli, which an outside observer then interprets as the behavior of the observed individual in their immediate environment. The observer can also stick electrodes in the individual's brain, correlate the observed behavior with measurements taken from some of the neurons, and conclude that those firings have caused the behavior. However, even if it were possible to replicate the exact sequence of the observed firings of all the neurons, the observed behavior would be different if the "response" of the environment were not also exactly the same as during the measurement.
Determinism alone doesn't "cause" anything, even if there are no such things as "causeless causes". The current #state of the system is obviously determined by its previous state and the current sensory inputs, so there are at least two separate "determinisms" in play here at all times, and, as an individual existing in its particular environment, I have at least **some** #control over the unfolding of both: my biology (eating, drinking coffee) and my environment (writing this nonsense)😉.
I wish people who come up every day with a new "breakthrough" theory that uses physics and/or quantum mechanics to explain everything from complexity and life to consciousness and free will would first read what #M_Polanyi has said about it.
This is from:
http://www.polanyisociety.org/MP-On--the-Modern-Mind-1965-ocr.pdf
#M_Polanyi in "Life's Irreducible Structure" (1968) points out that using deterministic #machines to explain "the physics of #life" is backward thinking, because machines are devised and built by humans to resemble organisms and to serve the purpose of their design, and can therefore only be a #biological, not a #physical, analogy.
>The organism is shown to be, like a machine, a system which works according to two different principles: its structure serves as a boundary condition harnessing the physical-chemical processes by which its organs perform their functions. Thus, this system may be called ***a system under dual control*** (*emphasis mine*). Morphogenesis, the process by which the #structure of living beings develops, can then be likened to the shaping of a machine which will act as a boundary for the laws of inanimate nature.
>...
>In the machine, our principal interest lay in the effects of the boundary conditions, while in an experimental setting, we are interested in the natural processes controlled by the boundaries.
Or in other words, we are interested either in the *control* #rules of the machine or the physical #laws of *causality* that make the machine work.
Wiener was wrong. There is ***no*** #communication ***in*** either the animal or the machine, only #control by the application of #constraints to their inner flow of matter and energy.
Communication is established ***between*** animals and/or machines, and, as Shannon correctly recognized, requires an independent communication #channel susceptible to the environmental disturbance called #noise.
In order to be able to communicate, animals and machines must share a common #language or cipher #rules used to code their respective messages. Communication is always one-way and does not require feedback. The sender has no control over the message after it is sent through the channel.
A special case of communication is #observation, where communication is established between the observer system and phenomena in its environment that are not necessarily produced by other systems' "languaging".
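A toy sketch of that arrangement, assuming a made-up shared code, a simple bit-flipping noise model, and one-way transmission with no feedback (all names and probabilities here are illustrative):

```python
import random

# Shared "cipher rules" both parties must already have (an assumption of this
# sketch; any agreed-upon code would do).
CODE = {"food": "01", "danger": "10", "rest": "11"}
DECODE = {v: k for k, v in CODE.items()}


def send(message):
    """The sender encodes and releases the message; after this point it has
    no control over what happens to it."""
    return CODE[message]


def channel(signal, noise_probability=0.1):
    """Independent channel: each bit may be flipped by environmental #noise."""
    return "".join(
        bit if random.random() > noise_probability else ("1" if bit == "0" else "0")
        for bit in signal
    )


def receive(signal):
    """The receiver decodes with the same shared rules; an unrecognized
    pattern is all it ever learns about the noise."""
    return DECODE.get(signal, "<unintelligible>")


print(receive(channel(send("danger"))))  # one-way; no feedback to the sender
```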
A dynamical system with #memory that has the ability to learn and adapt to its environment, or to change it, will need at most these three #control mechanisms (sketched in code after this list):
1️⃣ The #internal immediate control (#regulation) of state variables essential for preserving the stability or #homeostasis of the system. This is a simple #reaction of the system to a perturbation, like, for example, sweating when the core temperature of the body rises beyond some preset margin.
2️⃣ The #proximal control of the surrounding environment is used when 1️⃣ is overwhelmed and there is a need for the coordinated engagement of different lower-level regulators, with the #measurement (tracking) and negative #feedback control of multiple time-dependent variables: for example, taking off layers of clothes, moving the body into the shade, or taking a cold shower until the temperature is again within limits.
3️⃣ The #distal, long-term, open-loop control with delayed feedback is the highest form of control: for example, building a house with an HVAC system that removes the need for continuous employment of proximal control (2️⃣) by creating a private, controlled environment.
All #living systems feature this 3-layered control architecture; the only difference is the degree to which the activities on each level are the result of #conscious deliberation as opposed to natural, innate behavior.
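Here is the sketch mentioned above: a toy Python rendering of the three levels using the same body-temperature example. Thresholds, actions, and numbers are made up for illustration only.

```python
CORE_SETPOINT = 37.0  # deg C (illustrative)


def regulation(core_temp):
    """1️⃣ Internal, immediate reaction to a perturbation (homeostasis)."""
    return "sweat" if core_temp > CORE_SETPOINT + 0.5 else "do nothing"


def proximal_control(core_temp):
    """2️⃣ Coordinated, feedback-driven engagement with the surroundings,
    repeated until the tracked variable is back within limits."""
    actions = []
    while core_temp > CORE_SETPOINT + 0.5:
        actions.append("remove a layer / move into the shade / cold shower")
        core_temp -= 0.4  # the measured effect feeds back into the loop
    return actions


def distal_control():
    """3️⃣ Long-term, open-loop control with delayed feedback: reshape the
    environment so the lower levels are rarely needed."""
    return "build a house with an HVAC system"


print(regulation(38.0))
print(proximal_control(38.2))
print(distal_control())
```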
The thought experiment in this interesting 2019 article by Michael Lachmann and Sara Walker on the contrast between #life and #living is not representative, because von Neumann's universal constructors (UCs) are non-#autopoietic: they don't print *themselves*, so they can't #grow and #evolve.
>Imagine you have built a sophisticated 3D printer called Alice, the first to be able to print itself. As with von Neumann's constructor, you supply it with information specifying its own plan, and a mechanism for copying that information: Alice is now a complete von Neumann constructor. Have you created new life on Earth?
https://aeon.co/essays/what-can-schrodingers-cat-say-about-3d-printers-on-mars
The difference lies in the fact that UC "mechanisms" are not operational until their production is fully finished, and any #error will most probably prevent the mechanism from working, while most living, growing "assemblies" can work and repair themselves while they are still growing.
The bottom line is that life cannot be ***created***. It has to ***emerge*** from mechanical non-life.
People often "blame" Shannon's theory of #communication for completely ignoring #meaning, maybe also because Shannon himself stated that "*the semantic aspects of communication are irrelevant to the engineering aspects*"😀
However, if one recognizes that the #information content, as defined by the #entropy, is a measure of the #uncertainty in the receiver about the sender's #state when producing the message, can this not be interpreted as the receiver trying to #understand what the sender was #meaning to send?
The information the sender encodes in the message is never the *same* as the information the receiver decodes from it on the other side of the channel.
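A small numerical illustration of that reading, with made-up probabilities for the sender's possible states:

```python
from math import log2

# The receiver's prior over the sender's possible states (probabilities are
# made up for illustration).
p = {"food": 0.5, "danger": 0.25, "rest": 0.25}

# Shannon entropy = the receiver's average uncertainty, in bits, about which
# state the sender was in when producing the message; a received message is
# what (partially) resolves it.
H = -sum(prob * log2(prob) for prob in p.values())
print(f"{H:.2f} bits")  # 1.50 bits for this toy distribution
```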
Below is Shannon's description of the standard #transducer used for encoding and decoding the information in messages. The block diagrams are my rendering of the description (F is a "#memory" function):
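Since the diagrams themselves are images, here is a rough code rendering of the same idea: a transducer whose output depends on both the current input symbol and an internal "memory" state, which every symbol in turn updates. The particular encoding below is my own toy example, not Shannon's.

```python
def transducer(symbols):
    """Minimal transducer sketch: the output is a function of the input and
    the state, and the state is itself updated by every symbol; here both
    functions are a simple running parity."""
    state = 0  # the "memory" of past inputs
    for x in symbols:
        y = (x + state) % 2  # output depends on input and memory
        state = y            # memory is updated by every symbol
        yield y


print(list(transducer([1, 0, 1, 1, 0])))  # -> [1, 1, 0, 1, 1]
```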
#Consciousness as "deliberate #thinking and #cognition" should be easy to explain, but only by the conscious agent itself. There is no way an outside #observer can identify whether an agent's behavior is conscious or not.
>#Intuition — what seems “#obvious and #undeniable” — may **not** be #trustworthy. It may seem “obvious and undeniable” to someone interacting with ChatGPT that it is communicating with a conscious agent, but that assumption would be flawed.
https://bigthink.com/the-well/human-consciousness-womb-after-birth/
Anil Seth thinks "*Conscious AI Is a Bad, Bad Idea*" because
>*our minds haven’t evolved to deal with machines we believe have #consciousness.*
On the contrary, I think we are *"genetically programmed"* to ascribe intent to anything that *"wants"* to communicate with us.
He is also saying that:
>*Being intelligent—as humans think we are—may give us new ways of being conscious, and some forms of human and animal #intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much species-level intelligence at all.*
If, as he says, "*intelligence is the capacity to do the right thing at the right time,*" any organism that has survived long enough to procreate must have some kind of intelligence, regardless of its consciousness.
Wrt "*basic conscious experiences such as pleasure and pain*," IMO they are conscious **only** if the organism is intelligent enough to suppress the urge of an innate "*genetically programmed*" response to pain or pleasure in order to achieve some "higher goal," even if it goes against the original goal of "to survive."
The bottom line is that consciousness is **not** just a function of intelligence. Machines can become much smarter than us without becoming conscious.
In order to be really #conscious, a machine would first have to experience being #alive and desire to remain in that state, have some #agency and #control over its internal and external states, and have the ability to develop short- and long-term goals and to plan and execute complex, time-dependent actions to fulfill those goals.
Anything less than that is just a clever simulation.
https://nautil.us/why-conscious-ai-is-a-bad-bad-idea-302937/
The purpose of #design is to #create new and/or different #artificial structures (artifacts), so speaking of design makes sense only in the context of other creative #production activities such as writing, painting, engineering, manufacturing, etc.
Klaus Krippendorff has a nice description of the difference between #object and #artifact and the relationship to Gibson's #affordances in this 2007 paper published in "Kybernetes":
https://researchgate.net/publication/45597493_The_Cybernetics_of_Design_and_the_Design_of_Cybernetic
However, he is wrong, IMO, in accentuating the difference between #scientists and #designers.
Every designer often acts as a scientist in "describing what can be observed", and every scientist also has to design new hypotheses, theories, and experiments for the "not yet observable and measurable".
#Kihbernetics is the study of #Complex #Dynamical #Systems with #Memory, which makes it quite different from other #SystemsThinking approaches. Kihbernetic theory and principles are derived primarily from these three sources:
1️⃣ #CE_Shannon's theory of #Information and his description of a #Transducer,
2️⃣ #WR_Ashby's #Cybernetics and his concept of #Transformation, and
3️⃣ #HR_Maturana's theory of #Autopoiesis and the resulting #Constructivism.
Although equally applicable to any dynamical system with memory (mechanisms, organisms, or organizations), the Kihbernetic worldview originated from my work helping organizations navigate times of #change.
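As a closing toy illustration of how I read these three sources together: Ashby's #Transformation as a state-transition map, wrapped in a Shannon-style #Transducer; what such a sketch cannot capture is precisely #Autopoiesis, since the map does not produce itself. All states and transitions are made up.

```python
# Ashby: a closed, single-valued transformation over the system's states.
TRANSFORMATION = {"A": "B", "B": "C", "C": "A"}


def transduce(state, inputs):
    """Shannon-style transducer over Ashby's transformation: each input
    decides whether the transformation is applied, and the resulting
    state sequence is the observable behavior."""
    outputs = []
    for i in inputs:
        state = TRANSFORMATION[state] if i else state
        outputs.append(state)
    return outputs


print(transduce("A", [1, 0, 1, 1]))  # -> ['B', 'B', 'C', 'A']
```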