I learned from a WP article (see alt text) that Norbert #Wiener was involved in the political discussion about mitigating the effects of the technological revolution brought about by industrial and administrative #automation.
The 2019 edition of #Cybernetics is freely available from MIT Press (URL in alt text).
It's related to my work on the boundaries of a system A that objectivizes a system B it intends to control, the problem of #grounding, #semantics and #classes. #ai is old.
@tg9541
Could you elaborate on the grounding and information problems, and in what sense Wiener didn't see them?
@psybertron Wiener, as a gifted mathematician, saw mathematical law in everything he could apply a method to. The result is formalism and syntax over semantics in the foundations of Cybernetics. The definition of Shannon entropy, much cited in Wiener's work, doesn't consider the semantics of symbols, only the capacity of a channel. Twenty years after Cybernetics he wrote "Nonlinear Problems in Random Theory" (Wiener, Norbert, 1966). I recommend reading his take on analyzing the brain as a black box.
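To make that point concrete, here is a minimal sketch (function name and example strings are my own, purely illustrative): Shannon entropy is computed from symbol frequencies alone, so any permutation of a message, however meaningless, has exactly the same entropy.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Bits per symbol, computed from symbol frequencies alone."""
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in Counter(message).values())

# Identical symbol statistics, entirely different "meaning":
print(shannon_entropy("attack at dawn"))  # ~2.75 bits/symbol
print(shannon_entropy("kcat ta nwadat"))  # same value: entropy ignores meaning
```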
@tg9541
Thanks. So by "grounding" you mean the real-world semantics being the basis (or not) of the syntactic representation?
@psybertron Syntactic representation is, in fact, optional. The reflection of signals on the agent and the environment (in any form) is sufficient. For an extreme example see Brooks, Rodney. "Intelligence without representation." Artificial Intelligence 47.1-3 (1991): 139-159.
Note: strangely enough, that emerged from MIT culture.
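For readers unfamiliar with Brooks: his subsumption architecture couples sensing directly to action in layered behaviors, with no internal world model. A toy sketch of that idea (heavily simplified, all names hypothetical, not Brooks' actual code):

```python
# Brooks-style reactive control: behaviors map raw sensor readings
# directly to motor commands. Higher-priority layers suppress lower
# ones; no symbolic representation of the world is built anywhere.

def avoid(sonar_cm: float):
    """Highest-priority layer: back off when an obstacle is close."""
    return ("reverse", 0.5) if sonar_cm < 20 else None

def wander(sonar_cm: float):
    """Default layer: drift forward."""
    return ("forward", 0.3)

LAYERS = [avoid, wander]  # earlier entries take precedence (subsumption)

def control_step(sonar_cm: float):
    for behavior in LAYERS:
        command = behavior(sonar_cm)
        if command is not None:
            return command  # this layer subsumes all layers below it

print(control_step(sonar_cm=15.0))   # ('reverse', 0.5)
print(control_step(sonar_cm=120.0))  # ('forward', 0.3)
```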
@tg9541
So, yes, I guess?
I got the optional "or not" ;-)
(I'm sceptical that a semantics necessarily exists at the fundamental information level, other than the original signal in its simplest uninterpreted form - but thanks for the refs. Pretty sure I've seen Brooks before, but I'll take another look.)
@psybertron Brooks failed to show that representation is optional in mindful agents. Structure and syntax will play a role whenever mechanisms are used to enlarge the design space. This doesn't solve the problem of grounding. The agent can't be a mechanism.
Semantics, information, meaning, systems, models, representations, purpose ... those are all categories we "invented" to explain what we ***think*** to other thinkers.
The primary concern in Cybernetics and automation always was (and still is) #control of things within their environment, with representation, modeling, or #explanation used only to the extent that it serves this primary function.
Now, an agent doesn't need to "understand" the "semantics" of what it is doing. Agents like living cells, for example, just have to respond correctly to the syntax of the genetic message, like any other *mechanism*, to jointly produce a more complex organism: an agent able to "understand" and communicate what's going on, and not necessarily for control purposes.
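The genetic example can be made literal. Translation is pure symbol lookup: the "agent" maps codon triplets to amino acids without any grasp of what the resulting protein is *for*. A minimal sketch (only a fragment of the standard codon table, function names my own):

```python
# Pure syntax: translating mRNA is mechanical table lookup.
CODON_TABLE = {  # fragment of the standard genetic code
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def translate(mrna: str) -> list[str]:
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "?")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```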
@Kihbernetics @psybertron I generally agree. Some thinkers, however, used formal methods to show the limits of formal methods (in the same way Gödel used mathematical proof to show the limits of mathematical proof). Meta-thinking about system boundaries and information can be used to disprove the viability of control for identified goals, and as such it's an important tool in the political discourse among thinkers. Anthropologically speaking, it's always possible to build a totalitarian system.
I would be very interested in any reference to examples of how to use cybernetic thinking "to understand how to guard against totalitarian/amoral/ideological (control) purposes".
@tg9541 @Kihbernetics
Absolutely.
But Beer's VSM was just one partial interpretation of Cybernetics. I last wrote about Beer here:
https://www.psybertron.org/archives/17585
That's precisely the point I'm trying to make. Any mindset based on the distinction between the #controlled and the #control system must be very attractive to would-be dictators.
@Kihbernetics @tg9541
Then we're on completely different pages - I'm talking about self-organising systems (not controller/controlled systems).
And formal non-computability is one technical thing, but it doesn't prevent living (agent) systems from actually processing information in reality.
@psybertron @Kihbernetics Self-organization of a system with the desired properties, reducing operational risk to a minimum, is the holy grail. I assume that viable autopoietic systems are self-selecting. In other words: I don't believe there is a way to deduce such systems.
@tg9541 @Kihbernetics
deduce? no
evolve? yes
Cybernetics has a peculiar relationship with the topic of self-organization. I don't recall Wiener ever taking this matter seriously, von Foerster thought a system is "feeding" on order from the environment, and this is Ashby arguing that "self"-organization requires the presence of another machine.
That's why I had to "invent" Kihbernetics and return to the basics.😀
@Kihbernetics @psybertron Using the cybernetics mindset to guard against totalitarian tendencies of social systems is hard. On the contrary: there is evidence that totalitarian regimes liked Beer's VSM a lot.
Jackson, Michael C. "An appreciation of Stafford Beer's 'Viable System' viewpoint on managerial practice." Journal of Management Studies 25.6 (1988): 557-573.
(I got a copy from ResearchGate)