
The purpose of design is to create new and/or different structures (artifacts), so speaking of design makes sense only in the context of other creative activities such as writing, painting, engineering, manufacturing, etc.

Klaus Krippendorff has a nice description of the difference between science and design, and of its relationship to Gibson's affordances, in this 2007 paper published in "Kybernetes":

researchgate.net/publication/4

However, he is wrong, IMO, in accentuating the difference between science and design.

Every designer is often a scientist in "describing what can be observed" and every scientist also has to design new hypotheses, theories, and experiments for the "not yet observable and measurable".

@jkanev

Sure, safety and security are important, but they must **follow** the research, not define it.
Machines can cause harm only when in operation.

IMO the best (only) way to ensure security and safety is confining AI to the language (consulting) domain, preventing it from having too much agency, such as "pushing buttons".

Also, if it becomes too smart, it becomes useless to us, and I'm sure we'll find a way to "dumb it down".

The truth is that intelligence is never a precondition for getting into a position of power. Quite the opposite.

Some wise words from John Dewey about art and experience, written back in 1934:

A few "gems" from Ashby on the "accumulation of adaptations":

>"A compound event that is impossible if the components have to occur simultaneously may be readily achievable if they can occur in sequence or independently...
>Thus, for the accumulation of adaptations to be possible, **the system must not be fully joined**.

>The idea so often implicit in physiological writings, that all will be well if only sufficient cross-connexions are available, is, in this context, quite wrong."

I recommend reading the whole book ($20 on Amazon), but if not, here is a good overview of some of its interesting parts:

panarchy.org/ashby/adaptation.


Ashby, in his "Design for a Brain", writes about the importance of the accumulation of adaptations. Following his ideas, I've made this little experiment using a LibreOffice Calc spreadsheet that shows three different scenarios:

When re-tossing all 10 coins every time, as in the first case, there is no preservation of "1s" whatsoever. Every new toss starts from scratch.

In the second case, each coin is tossed separately until it shows "1", then the tosser moves on to the next coin, until all 10 show "1", which usually happens around the 10th toss.

In case #3, only the coins still showing "0" from the previous toss are re-thrown until all coins show "1". This is by far the most efficient way of preserving adaptations, needing less than half the time and ending in about 4 tosses.
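If you'd rather replicate this in code than in a spreadsheet, here is a minimal Python sketch of the three scenarios (my own reconstruction; the exact averages depend on whether you count individual coin tosses or whole rounds):

```python
import random

def scenario_1(n=10):
    """Re-toss all n coins each round until every coin shows 1 at once."""
    rounds = 0
    while True:
        rounds += 1
        if all(random.randint(0, 1) == 1 for _ in range(n)):
            return rounds

def scenario_2(n=10):
    """Toss one coin at a time until it shows 1, then move to the next."""
    tosses = 0
    for _ in range(n):
        while True:
            tosses += 1
            if random.randint(0, 1) == 1:
                break
    return tosses

def scenario_3(n=10):
    """Each round, re-toss only the coins still showing 0."""
    rounds, zeros = 0, n
    while zeros:
        rounds += 1
        zeros = sum(random.randint(0, 1) == 0 for _ in range(zeros))
    return rounds

trials = 1_000
for f in (scenario_1, scenario_2, scenario_3):
    print(f.__name__, sum(f() for _ in range(trials)) / trials)
```

Scenario 1 needs on the order of 2^10 ≈ 1000 rounds on average, which is exactly Ashby's point: a fully joined system that "starts from scratch" makes accumulation practically impossible.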

Our brains evolved as control mechanisms for the body, to ensure its survival. That's the reason why AI is beating humans primarily in areas requiring computation and abstract reasoning: we only recently added those to our tool repertoire and haven't had much time to perfect them, unlike our ancient sensory-motor control tools.

>“Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually **unconscious, sensorimotor knowledge**. We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.”

Moravec, H. Mind Children: The Future of Robot and Human Intelligence. (Harvard University Press, 1988).

as reported in:

nature.com/articles/s41467-019

@jkanev

With the difference that, in this case, it is the ape that created the machine "in its likeness" and the machine runs the risk of experiencing the ape's wrath if it misbehaves.
You know how vengeful apes are.😉

There is a lot of talk against "hierarchical thinking" and planning in organizations, and about how the only "good" hierarchy is a "flat" one.
The above diagram shows how a hierarchy is a natural effect of folding the linear "information flow" to match the physical structure of the system.

For nearly 4 decades in organizational change management, I've been using this idea of layers emerging from the folding of sequential "information processing", and I just found that a whole area of biology deals with this exciting topic.👇

sciencedirect.com/science/arti


>"An algorithm solves a problem only if it produces the correct output for every possible input — if it fails even once, it’s not a general-purpose algorithm for that problem."

quantamagazine.org/alan-turing
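A toy illustration of how strict that bar is (my own example, not from the article): the base-2 Fermat primality test answers correctly for the vast majority of numbers, yet one counterexample is enough to disqualify it as a general-purpose primality algorithm.

```python
def fermat_is_prime(n: int) -> bool:
    """Base-2 Fermat test: a fast heuristic, not a correct algorithm."""
    return pow(2, n - 1, n) == 1

print(fermat_is_prime(97))   # True  (97 is prime)
print(fermat_is_prime(341))  # True  -- wrong: 341 = 11 * 31 is composite
```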

Most commenters do not realize that no "information processing" (computation on symbol sequences) of any kind is necessary for an agent to have control over their internal states and surroundings.
Think of a thermostat, or of a comparator such as in PCT (Perceptual Control Theory).
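A minimal sketch of such a comparator-driven loop (my own toy model, not canonical PCT; the gain and the one-line "environment" are assumptions for illustration):

```python
def control_loop(reference=20.0, steps=50, gain=0.5, coupling=0.3):
    """Negative-feedback loop: the comparator's error drives the output.

    Nothing here manipulates symbols; the loop just keeps shrinking
    the gap between what is perceived and what is wanted.
    """
    perception = 0.0
    for _ in range(steps):
        error = reference - perception   # the comparator
        output = gain * error            # output function
        perception += coupling * output  # toy environment feeds back
    return perception

print(control_loop())  # converges toward the reference (20.0)
```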

Show thread

>Anthropomorphizing image generators and describing them as merely being “inspired” by their training data, like artists are inspired by other artists, is not only misguided but also harmful. Ascribing creativity to image generators diminishes the complexity of human creativity, robs artists of credit (and in many cases compensation), and transfers accountability from the organizations creating image generators, and the practices of these organizations which should be scrutinized, to the image generators themselves.

dl.acm.org/doi/abs/10.1145/360

The premise of this article is solid. The brain evolved first and foremost as a control mechanism. Symbolic "information processing" is a later development.

However, just from reading the reactions in the comments section, one can easily see that computationalism is still very much the mainstream theory of mind.

aeon.co/essays/your-brain-does

The 20th century is said to have been the "age of machines" because all explanations of how things work would end in some kind of computing or "information processing" by a known mechanism.

Some people think that the 21st century will be the "age of biology" because science seems to be starting to look at nature and the living for inspiration about how things really work.

If this is true, then AI must be one of the last remnants of the past century.

>The twenty-first century is the Century of Biology *(Brown, A. The Futurists: September-October 2008)*. Just as the twentieth century looked to machines, the twenty-first century is looking to biology to inform how we think, organize, design, and lead our organizations.

Allen, Kathleen E. *Leading from the Roots: Nature-Inspired Leadership Lessons for Today's World* (p. 20). Morgan James Publishing. Kindle Edition.


>"Given that organizations are filled with human beings, it doesn’t take a huge leap of faith to believe that a living system would emerge from all the life that shows up every day."

kathleenallen.net/works/

Finding out the complexity of complexity is a really complex problem😀

Fortunately, the things that we think are really complex to compute don't know (or care about) how complex they are.

>"Complexity theorists are confronting their most puzzling problem yet: complexity theory itself"

quantamagazine.org/complexity-

Same source:

>Self-organizing systems are characterized by their intrinsic, nonlinear operators (i.e., the properties of their constituent elements: macromolecules, spores of the slime mold, bees, etc.), which generate macroscopically (meta-)stable patterns maintained by the perpetual flux of their constituents. A special case of self-organization is autopoiesis. It is that organization which is its own Eigen-state: *the outcome of the productive interactions of the components of the system are those very components*. It is the organization of the living, and, at the same time, the organization of autonomy.
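Von Foerster's "Eigen-state" is, at bottom, a fixed point of a recurrently applied operator. A classic toy demonstration (my own choice of operator, not from the source) is iterating cos: from almost any starting value, the operation soon reproduces its own result.

```python
import math

x = 1.0  # arbitrary starting value
for _ in range(100):
    x = math.cos(x)  # feed the operator its own output

print(x)  # ~0.739085, where cos(x) == x: the operator's eigen-state
```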


>The advantage of invoking the notion of "purpose" is to emphasize the irrelevance of the trajectory traced by such a system en route from an arbitrary initial state to its goal. In a synthesized system whose inner workings are known, this irrelevance has no significance. This irrelevance becomes highly significant, however, when the analytic problem, the machine identification problem, cannot be solved because, for instance, it is ***trans-computational*** in the sense that with known algorithms the number of elementary computations exceeds the age of the universe expressed in nanoseconds.

From Heinz von Foerster's definition of CYBERNETICS in the *Encyclopedia of Artificial Intelligence*, Wiley, 1987, as presented in:

apps.dtic.mil/sti/tr/pdf/ADA17
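To get a feel for "trans-computational", here is a back-of-envelope sketch (my own, not from the source): compare the age of the universe in nanoseconds with the number of distinct Boolean functions a black box with n binary inputs could implement.

```python
# Age of the universe, ~13.8 billion years, expressed in nanoseconds: ~4.4e26.
AGE_UNIVERSE_NS = int(13.8e9 * 365.25 * 24 * 3600 * 1e9)

for n in range(4, 9):
    candidates = 2 ** (2 ** n)  # distinct Boolean functions of n inputs
    print(n, candidates > AGE_UNIVERSE_NS)
```

Already at n = 7 there are 2^128 ≈ 3.4e38 candidate machines, vastly more than the ~4.4e26 nanoseconds available to test them, so identifying the machine by exhaustive analysis is hopeless.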

>At the heart of longtermism is a techno-utopian vision of the future in which we become a new species of “enhanced” posthumans, colonize space, subjugate nature, plunder the cosmos for its vast resources and build giant computers floating in space to run virtual-reality simulations in which trillions and trillions of “happy” digital beings live. The ultimate aim is to maximize the total amount of “value” in the universe.

truthdig.com/articles/before-i

>Logic Theorist is a computer program written in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw. It was the first program deliberately engineered to perform automated reasoning, and has been described as ***the first artificial intelligence program***.
>Logic Theorist proved 38 of the first 52 theorems in chapter two of Whitehead and Russell's *Principia Mathematica*, and found new and shorter proofs for some of them.

en.wikipedia.org/wiki/Logic_Th
