Astute observation from social psychologist Michelle Ryan on Twitter’s decision to hire a female CEO during its time of crisis!
https://twitter.com/shellkryan/status/1656814584106471424
“Research shows that women and people from ethnic minorities are more likely to be chosen to lead a company, sports team, or even country when it is in crisis mode.”
Yes, unrealistic expectations and pressure from shareholders, lack of knowledge, and distrust in the people who work for them are big factors in why managers hire consultants. The book expands on all of that in detail.
It is not all the consultancies' fault; they just provide a service for which there is demand.
Beware big (and small) consultancies that promise *quick* **and** *lasting* change. The only legitimate reason to hire a consultant is to bring your organization into a position where you don't need them anymore.
And don't forget that it is you who will have to do all the hard work during the transformation and live with the results.
A book everyone who thinks of hiring a consultant should read first:
The next logical direction for #AI evolution should be to implement a new reward system that lets these systems be wrong and enables their #curiosity:
>"As the agent learns, its #prediction_model becomes less and less wrong so that the #reward signal decreases, and the agent must explore other, more surprising situations in order to maximize the reward signal."
https://www.quantamagazine.org/clever-machines-learn-how-to-be-curious-20170919/
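A minimal sketch of the idea in the quote, assuming nothing beyond it (all names and the toy linear model are my own illustration, not from the article): the intrinsic reward *is* the prediction error of the agent's own forward model, so as the model gets less wrong about a familiar transition, the reward from it decays, and the agent must look elsewhere for surprise.
```python
import numpy as np

rng = np.random.default_rng(0)

class ForwardModel:
    """Toy forward model: predicts the next observation from the current one.
    A real curiosity-driven agent would use a neural net here."""
    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, dim))
        self.lr = lr

    def predict(self, obs):
        return self.W @ obs

    def update(self, obs, next_obs):
        err = next_obs - self.predict(obs)
        self.W += self.lr * np.outer(err, obs)  # simple gradient step
        return float(err @ err)                 # squared prediction error

model = ForwardModel(dim=4)

def curiosity_reward(obs, next_obs):
    # Intrinsic reward = how "wrong" the model still is about this transition.
    # As the model improves, the reward decays, pushing the agent to explore.
    return model.update(obs, next_obs)

# Revisiting the same deterministic transition: the reward shrinks each time.
obs, nxt = rng.normal(size=4), rng.normal(size=4)
for step in range(5):
    print(step, round(curiosity_reward(obs, nxt), 4))
```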
According to #HvFoerster, the *integrated functional circuit* for #cognition requires these three faculties, sketched in code below:
1️⃣ #Perception - without which the system cannot detect and internally represent environmental regularities,
2️⃣ #Memory - without which the system has only throughput (cannot learn), and
3️⃣ #Prediction - or the faculty of drawing #inferences, without which *perception degenerates into #sensation and memory into #recording*.
*Understanding Understanding: Essays on Cybernetics and Cognition* - pp. 105-106
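Here is how I would close those three faculties into one functional circuit in code; a toy illustration of the triad, not von Foerster's own formalism, and all names are mine:
```python
class Cognizer:
    """Toy 'integrated functional circuit': perception + memory + prediction."""
    def __init__(self):
        self.memory = {}  # without this, the system has only throughput

    def perceive(self, stimulus):
        # Perception: detect and internally represent a regularity.
        return tuple(stimulus)

    def memorize(self, percept, outcome):
        # Memory: retain the percept together with what followed it.
        self.memory[percept] = outcome

    def predict(self, stimulus):
        # Prediction: draw an inference from remembered regularities;
        # without it, perception degenerates into mere sensation
        # and memory into mere recording.
        return self.memory.get(self.perceive(stimulus), "unknown")

c = Cognizer()
c.memorize(c.perceive([1, 2]), "reward")
print(c.predict([1, 2]))  # -> "reward" (inference from memory)
print(c.predict([3, 4]))  # -> "unknown" (no regularity retained yet)
```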
"The word “#heuristic” was invoked all through the summer of 1956. Instead of trying to analyze the brain to develop #machine_intelligence, some participants focused on the operational steps needed to solve a given problem."
Producing electricity while protecting your crops from extreme weather. Brilliant!
>The citron of Calabria in southern Italy had almost died out from extreme weather and lack of economic value. But growing the crop under a canopy of solar panels has given the fruit a new lease of life – with lessons for many climate-stressed crops.
Philosopher **Mary Brenda Hesse**
>considered the use of #metaphors and #analogies in scientific models.
Instead of obsessing over the justification of scientific knowledge, she highlighted the need to think about its generation. How do scientists develop their ideas about the world and come to discover new things?
The cognitive power of metaphors, in her view, resided in their *capacity to create similarity*. The use of metaphors is **an act of co-creating, not discovering, similarities between a metaphor and its physical target system**. Such an act of metaphorical co-creation is inevitably shaped by cultural context.
https://aeon.co/essays/why-are-women-philosophers-often-erased-from-collective-memory
Ruskin defines:
>#Science = The #knowledge of things whether Ideal or Substantial.
#Art = The #modification of Substantial things by our #Substantial Power.
#Literature = The #modification of Ideal things by our #Ideal Power.
https://openlibrary.org/books/OL6715877M/The_eagle%27s_nest.
The whole argument about *AI-generated art* discussed in the article below is moot. Whatever it is that #AI generates, it certainly isn't #art.
>"In science you must not talk before you know. In art you must not talk before you do. In literature you must not talk before you think."
Ruskin - The Eagle's Nest (1872)
AI is not "doing" #science or #art or #literature. AI is just talking.
@MarkRubin @philosophyofscience @philosophy
#Science is the ***way of knowing*** of the scientific #system, much like #religion is the way of knowing of #belief systems. You may argue that philosophy is yet another *way of knowing*, separate from science and religion.
Epistemology in particular deals with different ways (systems) of knowing, or the question of *how we know what we know*, while philosophy of science deals with just one (the scientific) way of knowing.
#Knowledge is ***not a stock*** that is accumulated, stored, and distributed by the system, but represents instead the #state of that system at any particular point in time.
Here is a simple depiction (model) of a generic ***dynamical system*** such as science with the ability to #learn and #grow, and #produce useful outputs:
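In case the image does not render here, the same depiction in code; a minimal sketch under my own naming, not a formal model of science: the system's "knowledge" is not a stock of accumulated outputs but its current state, updated by every input and consulted for every output.
```python
class DynamicalSystem:
    """Generic dynamical system with memory, as a Moore-style machine."""
    def __init__(self, state):
        self.state = state  # "knowledge" = current state, not a stock

    def step(self, inp):
        self.state = self.transition(self.state, inp)  # learn / grow
        return self.output(self.state)                 # produce useful output

    def transition(self, state, inp):
        return state + [inp]  # toy rule: fold each input into the state

    def output(self, state):
        return len(state)     # toy rule: output depends on the state alone

s = DynamicalSystem(state=[])
for observation in ["a", "b", "c"]:
    print(s.step(observation))  # 1, 2, 3 - same inputs, evolving state
```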
I think that it is you that might be looking at it from the wrong perspective and attributing humanity where there is none.
For Lanier (and I agree with him on that) #AI is just another #tool developed to do some work for us (or for some of us just serve as a plaything, a refined #tamagotchi).
WRT the "alignment problem", I'm not sure I want somebody else to align my tools for me. Ideally, they should come out of the box with some commonly agreed generic values and knowledge about the world, but after that, I'd like to be able to fine-tune and train them to serve the purpose I need them for.
That solves another problem, that of who is **responsible** for the actual harm their output may inflict on other people. You don't blame the gun for murder, you prosecute the one who pulled the trigger. I don't see why it should be different with AI.
I like Jaron Lanier's approach to "#AI". It can "kill us all", as can most of the other #technology we invent. He raises some very interesting points in this article, such as:
>Think of people. People are the answer to the problems of bits.
If society, economics, culture, technology, or any other spheres of activity are to serve people, that can only be because we decide that people enjoy a special status to be served.
https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
>We can work better under the assumption that there is no such thing as #AI. The sooner we understand this, the sooner we’ll start managing our new #technology intelligently.
Jaron Lanier
https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
Paraphrasing #HH_Pattee from the same source:
This "#error_duality" (error in the #descriptions or error in the #operation) can be identified at all organizational levels:
➡️ Computer programs may have either an error in the program itself (#software error) or an error may happen because of the machine (#hardware error) that executes an otherwise correct program.
➡️ At higher levels, it is possible to make an error in the choice of the algorithm being programmed, or even in the choice of the problem the algorithm is supposed to solve.
➡️ Similarly, in social and political organizations, we distinguish between a faulty policy and the failure to execute a policy properly.
In other words, we try to distinguish between the ***error in our #models*** of reality which leads to incorrect policies (#predictions and descriptions), and the ***error in #control #constraints*** which leads to a failure of a (good) policy implementation.
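A toy sketch of that duality in code (my example, not Pattee's; all names are illustrative): the same bad outcome can stem from a wrong model (description error) or from faulty machinery (operation error), and the two call for different fixes.
```python
def decide(target, reading, model_bias=0.0):
    # Description/model error lives here: a biased model of the sensor
    # yields a wrong policy even if everything downstream works perfectly.
    return "heat" if reading + model_bias < target else "off"

def actuate(command, heater_ok=True):
    # Operation/control error lives here: a correct command can still
    # fail in the machinery that executes it.
    return command == "heat" and heater_ok

# Case 1: error in the description (wrong policy, hardware fine).
print(actuate(decide(target=20, reading=15, model_bias=10)))   # False: model "thinks" it's warm

# Case 2: error in the operation (right policy, hardware broken).
print(actuate(decide(target=20, reading=15), heater_ok=False)) # False: heater failed
```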
Biological physicist #HH_Pattee on the futility of the attempts to create artificial #intelligence by reverse engineering #language.
>"No amount of #semiotic information, thought, or discourse alone can cause the body to move. It takes some #physics. As Waddington has pointed out, the first function of #language was to cause #actions, not to make #statements."
According to #HH_Pattee, there are
>“two meanings for #machine and two meanings for #failure”:
>“By the *machinery of nature* we mean the failure-proof #laws that we assume underlie the predictable behavior of matter. When we find certain types of #natural events unpredictable we assume that our description or theory of these events are failures, but not the events themselves.”
>"On the other hand, while we assume that the #rules of arithmetic are not subject to failure, it is clear that a physical machine #designed to execute these rules may fail all too often."
https://www.academia.edu/863887/The_role_of_instabilities_in_the_evolution_of_control_hierarchies
#LLMs, as their name suggests, are bound to the domain of #language: they cannot #learn from other types of interaction, such as (physical) #sensation, and cannot show #emotion. They have no #agency except for answering user prompts to the best of their (large and static) #knowledge.
There is no point in debating whether they have #intelligence until they are given the capability to ask #questions (generate their own prompts). Such questions could show "#curiosity" about other aspects of the topic at hand, clearly aimed at updating their current knowledge, and might thereupon shed some light on their "#intentions".
Also from Stanford & Google:
#AI Index Report 2023
https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
#Language and #Math are arbitrary mutually agreed constructs used for #communication.
Different constructs can be used to describe the *same* #reality.
#Kihbernetics is the study of #Complex #Dynamical #Systems with #Memory, which is quite different from other #SystemsThinking approaches. Kihbernetic theory and principles are derived primarily from these three sources:
1️⃣ #CE_Shannon's theory of #Information and his description of a #Transducer,
2️⃣ #WR_Ashby's #Cybernetics and his concept of #Transformation, and
3️⃣ #HR_Maturana's theory of #Autopoiesis and the resulting #Constructivism.
Although equally applicable to any dynamical system with memory (mechanisms, organisms, or organizations), the Kihbernetic worldview originated from my work helping organizations navigate times of #change.
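A minimal sketch tying the first two sources together (my illustration, not Shannon's or Ashby's own notation): a transducer in Shannon's sense is a machine whose output depends on both the input and an internal state (memory), and repeatedly applying its state-update rule is a transformation in Ashby's sense.
```python
class Transducer:
    """Shannon-style transducer: output depends on input AND internal state."""
    def __init__(self, state, f, g):
        self.state = state
        self.f = f  # state-transition function (Ashby's transformation)
        self.g = g  # output function

    def __call__(self, x):
        out = self.g(self.state, x)
        self.state = self.f(self.state, x)  # memory: the state persists
        return out

# A running-parity transducer: it "remembers" the parity of the 1s seen so far,
# so the same input can yield different outputs depending on its history.
t = Transducer(state=0,
               f=lambda s, x: (s + x) % 2,
               g=lambda s, x: (s + x) % 2)
print([t(x) for x in [1, 0, 1, 1]])  # -> [1, 1, 0, 1]
```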