A #representation is not something that can be found in an observer's mind. Representations are physical copies or models of the object they represent, and they all reside in the same domain, external to the observer's mind.
According to #CS_Peirce, a #sign (the ***representation***) is something that brings its #interpretant (the #observer) into *the same sort of correspondence* (#state of mind) as the #object it stands for. Therefore, #information and #knowledge exist in a different domain, internal to the system.
#Representation = #Reproduction.
The representation can be a #copy, or the re-#production of the object using the same #substance the object is made of (e.g. a *carbon copy* of a page or a copy of a living cell). In contrast, a #model (a map) is the reproduction of the object's form in a different substance.
Unlike real (artisanal) art, the reproduction (copy) of "digital art" is indistinguishable from the original. In addition, what is usually referred to as the "digital copy" of a physical work of art, is, in fact, a digital *model* of the real object it represents.
All #learning must be open-ended. The learning agent (the #observer) must have the #autonomy to set its own learning goals as well as plan and execute a #sequence of #exploration activities to achieve these goals.
One can never learn *all existing data*, only refine one's understanding of the data that is available. As is true for human intelligence, you can either have "deep and narrow" specialized #AI agents or "average and broad" #AGI. You can't have both in the same entity. Time and #memory "limitations" are the main inspirations for #diversity and #cooperation between learning agents.
People should have figured out by now that the #distribution of processing power, not its #centralization in gargantuan data and control centers, is the right thing to do.
Stop working on LLMs (Large Language Models) and start working on PCAs (Personal Customizable Assistants).
Introducing the *qualitative* category of #Wisdom into the triad of *quantifiable* #Data, #Information, and #Knowledge items adds nothing to our understanding of the matter.
Saying someone or something is "wise" is just a subjective judgment made by an external #observer about the *appropriateness* of another (#observed) system's behavior in a given situation in the environment, made without knowing anything about the observed system's internal state, goals, or motives.
In addition, a really "wise" entity would never identify itself as such.😀
>A #sign is something, A, which brings something, B, its #interpretant sign determined or created by it, into the same sort of correspondence with something, C, its #object, as that in which itself stands to C.
#CS_Peirce (1902)
In #Kihbernetics a sign is the #model describing (documenting) a #system ("mental model") abstracted from a real #phenomenon (object) by an #observer (the interpretant).
Ashby's principle of requisite #variety states, in fact, that the variety of the #controlling system must be at least as large as the variety of the #controlled system.
As an *external* #observer can never have the full picture of the *internal* variety of states the controlled system can find itself in, it is obvious that, for control to be #effective, the controller must be an integral part of the same self-organized (controlled) #system.
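The counting version of Ashby's law can be shown with a toy regulator. In the sketch below (the numbering scheme and the outcome rule `(d - r) mod D` are illustrative assumptions, not from Ashby), the regulator sees each disturbance and picks the best of its available responses; the number of distinct outcomes it is forced to tolerate shrinks only as its own variety of responses grows:

```python
# Toy sketch of the law of requisite variety (illustrative outcome
# table, not Ashby's). Disturbances d and responses r are numbered;
# the system's outcome is (d - r) mod D. The regulator observes d
# and chooses the response that pushes the outcome toward a target.

def forced_outcomes(n_disturbances, n_responses):
    """Count the distinct outcomes the best regulator can achieve."""
    outcomes = set()
    for d in range(n_disturbances):
        r = d % n_responses                      # best available response
        outcomes.add((d - r) % n_disturbances)   # resulting outcome
    return len(outcomes)

# Variety of outcomes >= variety of disturbances / variety of responses:
assert forced_outcomes(6, 1) == 6   # no variety in the regulator: chaos
assert forced_outcomes(6, 3) == 2   # partial regulation
assert forced_outcomes(6, 6) == 1   # requisite variety: full regulation
```

Only when the regulator's variety matches the disturbances' variety (last line) does every disturbance collapse to a single outcome, i.e. perfect control.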
Systems thinkers use a number of different terms for the three basic concepts in the "system's triad": we have a "real system" as opposed to a "conceptual system", which is sometimes also called the "mental model", which is in turn different from the (real) descriptive or simulation model. In #Kihbernetics we make the distinction between #Machine, #System and #Model unambiguous, following the rules specified in the works of #WRAshby and #HRMaturana.
Ashby warns us against our first impulse to point at the pendulum and say 'the system is that thing there' because this has a fundamental disadvantage in that "every material object contains no less than an infinity of variables" from which "different observers (with different aims) may reasonably make an infinity of different selections."
Therefore, there must first be given an #observer, and a #system is then defined as "any set of variables selected by that observer from those available on the real 'machine'".
#HRMaturana defines a #system as "a #configuration of #relations that an #observer abstracts in the flow of #interactions and #transformations of a #collection of #elements distinguished in the observer's daily living" that is "spontaneously or artificially #conserved" in its #dynamic within some "#domain of concern" of the observer.
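The observer-dependence of a "system" can be made concrete with a small sketch. Here the "machine" is just a bag of measurable variables (the pendulum variables below are hypothetical examples, not Ashby's); two observers with different aims select different subsets and thereby define different systems on the same machine:

```python
# Toy illustration: the same real "machine" yields different "systems"
# depending on which variables each observer selects (Ashby's point).

machine = {  # the real pendulum, as far as anyone has measured it
    "angle": 0.12, "angular_velocity": -0.4, "length": 1.0,
    "mass": 0.3, "color": "brass", "temperature": 293.0,
}

def system(selected_variables, machine):
    """A system: 'any set of variables selected by that observer
    from those available on the real machine'."""
    return {v: machine[v] for v in selected_variables}

dynamicist = system({"angle", "angular_velocity", "length"}, machine)
metallurgist = system({"mass", "temperature"}, machine)
assert dynamicist != metallurgist  # different observers, different systems
```

The machine itself offers (in Ashby's words) "no less than an infinity of variables"; each dictionary above is one observer's finite selection from it.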
So, in Kihbernetics, the triad looks like this:
It's interesting that Ashby never uses the phrase "control system" in the book. For him, it seems, the #observer is also the (potential) #controller.
Complexity is in the eye of the beholder (observer)
Ashby: "In this book I use the words “very large” to imply that some definite #observer is given, with definite resources and techniques, and that the system is, in some practical way, too large for him; so that he cannot observe it completely, or control it completely, or carry out the calculations for prediction completely. In other words, he says the system is “very large” if in some way it beats *him* by its richness and #complexity."
p.62
#Kihbernetics is the study of #Complex #Dynamical #Systems with #Memory which is quite different from other #SystemsThinking approaches. Kihbernetic theory and principles are derived primarily from these three sources:
1️⃣ #CE_Shannon's theory of #Information and his description of a #Transducer,
2️⃣ #WR_Ashby's #Cybernetics and his concept of #Transformation, and
3️⃣ #HR_Maturana's theory of #Autopoiesis and the resulting #Constructivism
Although equally applicable to any dynamical system with memory (mechanisms, organisms, or organizations), the Kihbernetic worldview originated in my work helping organizations navigate through times of #change.