I learned from a WP article (see alt text) that Norbert #Wiener was involved in the political discussion about mitigating the effects of the technological revolution brought about by industrial and administrative #automation.
The 2019 edition of #Cybernetics is freely available from MIT Press (URL in alt text).
It's related to my work regarding the boundaries of a system A that objectivizes a system B it intends to control, the problem of #grounding, #semantics and #classes. #ai is old.
@tg9541
Could you elaborate the grounding and information problems, and in what sense Wiener didn't see them?
@psybertron Wiener, as a gifted mathematician, saw mathematical law in everything he could apply a method to. The result is formalism and syntax placed over semantics in the foundations of Cybernetics. The definition of Shannon entropy, much cited in Wiener's work, doesn't consider the semantics of symbols, just the capacity of a channel. 20 years after Cybernetics he wrote "Nonlinear Problems in Random Theory" (Wiener, Norbert, 1966). I recommend reading his take on analyzing the brain as a black box.
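To make that concrete, a minimal sketch in Python (hypothetical symbol sets, nothing from Wiener's or Shannon's texts): H = -Σ p log2(p) sees only the probability distribution, never what the symbols stand for.

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: H = -sum(p * log2(p)).

    The measure depends only on the probabilities,
    not on what the symbols mean.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two sources with entirely different "meanings" but identical
# statistics; the entropy measure cannot tell them apart.
weather  = {"sun": 0.5, "rain": 0.25, "snow": 0.25}
nonsense = {"xq": 0.5, "zr": 0.25, "wk": 0.25}

print(shannon_entropy(weather.values()))   # 1.5 bits
print(shannon_entropy(nonsense.values()))  # 1.5 bits
```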
@tg9541
Thanks. So by "grounding" you mean real-world semantics being the basis (or not) for the syntactic representation?
@psybertron Syntactic representation is, in fact, optional. The reflection of signals on the agent and the environment (in any form) is sufficient. For an extreme example see Brooks, Rodney. "Intelligence without representation." Artificial Intelligence 47.1-3 (1991): 139-159.
Note: strangely enough, that emerged from MIT culture
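For flavour, a minimal sketch of the kind of representation-free control loop Brooks argued for (hypothetical sensor names, not his actual subsumption architecture code): a fixed priority stack of reflexes wired straight from sensing to acting, with no world model in between.

```python
# Toy Brooks-style reactive agent: condition-action reflexes in a
# fixed priority order. No internal world model, no symbolic
# representation, just signals mapped directly to actions.
def reactive_step(sensors):
    if sensors["obstacle"]:    # layer 1: avoid collisions
        return "turn_right"
    if sensors["light_left"]:  # layer 2: seek light
        return "turn_left"
    return "move_forward"      # layer 3: wander

print(reactive_step({"obstacle": False, "light_left": True}))  # turn_left
```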
@tg9541
So, yes, I guess?
I got the optional "or not" ;-)
(I'm sceptical that a semantics necessarily exists at the fundamental information level, other than the original signal in its simplest uninterpreted form - but thanks for the refs. Pretty sure I've seen Brooks before, but I'll take another look.)
@psybertron Brooks failed to show that representation is optional in mindful agents. Structure and syntax will play a role whenever mechanisms are used to enlarge the design space. This doesn't solve the problem of grounding. The agent can't be a mechanism.
Semantics, information, meaning, systems, models, representations, purpose ... those are all categories we "invented" to explain what we ***think*** to other thinkers.
The primary concern in Cybernetics and automation always was (and still is) #control of things within their environment, with representation, modeling, or #explanation used only to the extent that they serve this primary function.
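For illustration, a minimal sketch of that primary function (toy plant dynamics, hypothetical gain): a proportional controller whose entire "representation" of the world is a single error signal, exactly as much model as the control task needs.

```python
# Proportional feedback: the archetypal cybernetic control loop.
# The controller's whole "model" of the world is one error signal.
def control_step(setpoint, measured, gain=0.5):
    error = setpoint - measured
    return gain * error            # actuator output

temperature = 15.0                 # toy plant state
for _ in range(20):
    power = control_step(setpoint=20.0, measured=temperature)
    temperature += power           # toy plant: heats in proportion to power
print(round(temperature, 3))       # converges toward the 20-degree setpoint
```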
Now, an agent doesn't need to "understand" the "semantics" of what it is doing. Agents like living cells, for example, just have to respond correctly to the syntax of the genetic message, like any other *mechanism*, to jointly produce a more complex organism: an agent able to "understand" and communicate what's going on, and not necessarily for control purposes.
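To illustrate the responding-to-syntax point, a minimal sketch (the codon mappings are a tiny real excerpt of the standard genetic code; the function is just an illustration): translation is a pure lookup, a mechanism with no grasp of what the resulting protein is "for".

```python
# A cell responding "correctly to the syntax of the genetic message":
# translation is a pure codon-to-amino-acid lookup.
# (Tiny excerpt of the standard genetic code.)
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino == "STOP":
            break
        peptide.append(amino)
    return peptide

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```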
Agree. I think Ashby's "noble" objective should be the real purpose:
> Cybernetics offers one set of concepts that, by having exact correspondences with each branch of science, can thereby bring them into exact relation with one another.
But then many, including Wiener, had a narrow, misguided view when worrying about the "wrong use" of cybernetic automation in "enslaving" humanity.