Producing electricity while protecting your crops from extreme weather. Brilliant!
>The citron of Calabria in southern Italy had almost died out from extreme weather and lack of economic value. But growing the crop under a canopy of solar panels has given the fruit a new lease of life – with lessons for many climate-stressed crops.
Philosopher **Mary Brenda Hesse**
>considered the use of #metaphors and #analogies in scientific models.
Instead of obsessing over the justification of scientific knowledge, she highlighted the need to think about its generation. How do scientists develop their ideas about the world and come to discover new things?
The cognitive power of metaphors, in her view, resided in their *capacity to create similarity*. The use of metaphors is **an act of co-creating, not discovering, similarities between a metaphor and its physical target system**. Such an act of metaphorical co-creation is inevitably shaped by cultural context.
https://aeon.co/essays/why-are-women-philosophers-often-erased-from-collective-memory
Ruskin defines:
>#Science = The #knowledge of things whether Ideal or Substantial.
#Art = The #modification of Substantial things by our #Substantial Power.
#Literature = The #modification of Ideal things by our #Ideal Power.
https://openlibrary.org/books/OL6715877M/The_eagle%27s_nest.
The whole argument about *AI-generated art* discussed in the article below is moot. Whatever it is that #AI generates, it certainly isn't #art.
>"In science you must not talk before you know. In art you must not talk before you do. In literature you must not talk before you think."
Ruskin - The Eagle's Nest (1872)
AI is not "doing" #science or #art or #literature. AI is just talking.
@MarkRubin @philosophyofscience @philosophy
#Science is a ***way*** of #Knowing of the scientific #system, much like #religion is the way of knowing of #belief systems. You may argue that philosophy is yet another *way of knowing* separate from science and religion.
Epistemology in particular deals with different ways (systems) of knowing, or the question of *how we know what we know*, while philosophy of science deals with just one (the scientific) way of knowing.
#Knowledge is ***not a stock*** that is accumulated, stored, and distributed by the system, but represents instead the #state of that system at any particular point in time.
Here is a simple depiction (model) of a generic ***dynamical system*** such as science with the ability to #learn and #grow, and #produce useful outputs:
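A minimal sketch of that idea, assuming the simplest possible case (a running-average estimator, my own toy example, not Kihbernetics code): the system's "knowledge" is nothing but its current state, updated by each new input and used to produce an output, with no stockpile of past observations kept anywhere:

```python
class LearningSystem:
    """Toy dynamical system with memory: its 'knowledge' is its state,
    not an accumulated stock of past inputs (illustrative sketch only)."""

    def __init__(self) -> None:
        self.estimate = 0.0  # current state -- all the system "knows"
        self.n = 0           # how much experience has shaped that state

    def learn(self, observation: float) -> None:
        # Update the state from the new observation, then discard the observation.
        self.n += 1
        self.estimate += (observation - self.estimate) / self.n

    def output(self) -> float:
        # A useful output (a prediction) derived from the current state alone.
        return self.estimate


system = LearningSystem()
for obs in [2.0, 4.0, 6.0]:
    system.learn(obs)
print(system.output())  # 4.0 -- the state reflects past inputs without storing them
```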
I think that it is you that might be looking at it from the wrong perspective and attributing humanity where there is none.
For Lanier (and I agree with him on that) #AI is just another #tool developed to do some work for us (or for some of us just serve as a plaything, a refined #tamagotchi).
WRT the "alignment problem", I'm not sure I want somebody else to align my tools for me. Ideally, they should come out of the box with some commonly agreed generic values and knowledge about the world, but after that, I'd like to be able to fine-tune and train them to serve the purpose I need them for.
That solves another problem, that of who is **responsible** for the actual harm their output may inflict on other people. You don't blame the gun for murder, you prosecute the one who pulled the trigger. I don't see why it should be different with AI.
I like Jaron Lanier's approach to "#AI". It can "kill us all", as can most of the other #technology we invent. He raises some very interesting points in this article, such as:
>Think of people. People are the answer to the problems of bits.
If society, economics, culture, technology, or any other spheres of activity are to serve people, that can only be because we decide that people enjoy a special status to be served.
https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
>We can work better under the assumption that there is no such thing as #AI. The sooner we understand this, the sooner we’ll start managing our new #technology intelligently.
Jaron Lanier
https://www.newyorker.com/science/annals-of-artificial-intelligence/there-is-no-ai
Paraphrased #HH_Pattee from the same source:
This "#error_duality" (error in the #descriptions or error in the #operation) can be identified at all organizational levels:
➡️ Computer programs may contain an error in the program itself (a #software error), or an error may happen because of the machine (a #hardware error) that executes an otherwise correct program.
➡️ At higher levels it is possible to make an error in the choice of the algorithm being programmed, or even a mistake in the choice of the problem that the developed algorithm is supposed to solve.
➡️ Similarly in social and political organizations we distinguish between a faulty policy and the failure to execute a policy properly.
In other words, we try to distinguish between the ***error in our #models*** of reality which leads to incorrect policies (#predictions and descriptions), and the ***error in #control #constraints*** which leads to a failure of a (good) policy implementation.
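A concrete (hypothetical, not Pattee's own) illustration of the two error types, using a toy thermostat: the same cold room can result from a wrong model of the situation (a bad policy) or from a faulty actuator that fails to carry out a good policy:

```python
def heating_demand(setpoint: float, room_temp: float, model_bias: float = 0.0) -> float:
    """The 'policy': how much heat we think the room needs.
    A non-zero model_bias stands for an error in our description of reality."""
    return (setpoint - room_temp) + model_bias


def actuate(demand: float, hardware_ok: bool = True) -> float:
    """The 'control constraint': the machinery that executes the policy.
    A broken actuator stands for an error in operation, not in the policy."""
    return demand if hardware_ok else 0.0


# Error in the description: biased model, flawless execution.
print(actuate(heating_demand(21.0, 19.0, model_bias=-5.0)))    # -3.0: wrong action taken
# Error in the operation: correct model, failed execution.
print(actuate(heating_demand(21.0, 19.0), hardware_ok=False))  # 0.0: good policy, no action
```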
Biological physicist #HH_Pattee on the futility of the attempts to create artificial #intelligence by reverse engineering #language.
>"No amount of #semiotic information, thought, or discourse alone can cause the body to move. It takes some #physics. As Waddington has pointed out, the first function of #language was to cause #actions, not to make #statements."
According to #HH_Pattee there are
>“two meanings for #machine and two meanings for #failure”:
>“By the *machinery of nature* we mean the failure-proof #laws that we assume underlie the predictable behavior of matter. When we find certain types of #natural events unpredictable we assume that our description or theory of these events are failures, but not the events themselves.”
>“On the other hand, while we assume that the #rules of arithmetic are not subject to failure, it is clear that a physical machine #designed to execute these rules may fail all too often.”
https://www.academia.edu/863887/The_role_of_instabilities_in_the_evolution_of_control_hierarchies
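A small, everyday echo of that second failure mode (my example, only loosely analogous to Pattee's point): the abstract rules of arithmetic cannot fail, but any physical machine has to represent numbers finitely, so its execution of those rules is imperfect:

```python
# The rule says 0.1 + 0.2 == 0.3; the machine's finite binary representation disagrees.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```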
#LLMs, as their name suggests, are bound to the domain of #language, and are not able to #learn from other types of interaction, such as (physical) #sensation or show #emotion. They have no #agency except for answering user prompts to the best of their (large and static) #knowledge.
There is no point in debating whether they have #intelligence until they are given the capability to ask #questions (generate their own prompts). Only then could they show "#curiosity" about other aspects of the topic at hand, clearly aimed at updating their current knowledge, which might in turn shed some light on their "#intentions".
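As a rough sketch of what "generating their own prompts" could look like (everything here is hypothetical; generate() is a stand-in for a real model, not an actual API), the loop only becomes interesting once the system, not the user, decides what to ask next:

```python
def generate(prompt: str) -> str:
    """Stand-in for a real language model; returns canned text for illustration only."""
    return f"(model's answer to: {prompt})"


def curious_loop(topic: str, steps: int = 3) -> list[str]:
    """Hypothetical self-prompting loop: the system asks its own follow-up questions
    in order to update its picture of the topic, instead of waiting for a user prompt."""
    knowledge = []
    question = f"What do I not yet know about {topic}?"
    for _ in range(steps):
        answer = generate(question)   # answer the current question
        knowledge.append(answer)      # fold the answer into what is "known"
        question = generate(f"Given {answer!r}, what should I ask next about {topic}?")
    return knowledge


print(curious_loop("metaphors in science"))
```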
Also from Stanford & Google:
#AI Index Report 2023
https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf
#Language and #Math are arbitrary mutually agreed constructs used for #communication.
Different constructs can be used to describe the *same* #reality.
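A trivial illustration (my own, nothing deeper implied): the same quantity written in several mutually agreed notations; every construct differs, the referent does not:

```python
n = 42  # one and the same quantity

representations = {
    "decimal": str(n),   # '42'
    "binary":  bin(n),   # '0b101010'
    "hex":     hex(n),   # '0x2a'
    "Roman":   "XLII",   # an older, equally arbitrary convention
    "English": "forty-two",
}

# Different constructs, same reality: decoding the binary form recovers the decimal one.
print(int(representations["binary"], 2) == n)  # True
```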
>In this paper, we introduce generative agents - computational software agents that simulate believable human behavior.
Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.
Except for the fact that all those activities should be in quotation marks (they don't really "cook breakfast", "paint", or "write"), this is an interesting study of #emergent #behavior based solely on #language.
Generative Agents: Interactive Simulacra of Human Behavior
>Behaviour science, evolutionary developmental #biology and the field of #machine #intelligence all seek to understand the scaling of biological #cognition: what enables individual cells to integrate their activities to result in the #emergence of a novel, higher-level intelligence with large-scale #goals and *competencies that belong to it and not to its parts*?
**The scaling of goals from cellular to anatomical #homeostasis: an #evolutionary #simulation, experiment and analysis**
https://royalsocietypublishing.org/doi/10.1098/rsfs.2022.0072
>"The American elite thought that Asians, when they became middle-class, would then be like them, and are disappointed when they are not."
https://www.noemamag.com/refreshing-western-liberalism/
This is actually not true. The Asian #elite is very similar to the American or any other of the global 1% elites.
All #elites, regardless of their origin (#hereditary, #economic, or #political), have these two things in common:
1⃣ They want to preserve the elitist position for themselves and their families, and
2⃣ They really don't care about anything else.
I wonder why people are talking at all about this imaginary, not even futuristic, "#sentient_AI" nonsense that exists only in someone's head, instead of being more interested in all the really useful and cool stuff that is safely done with #AI_tools like this one:
Just discovered there is an interesting etymological link between the words #Stance and #System, by which a ***system*** may be defined as "*having the same stance*" or "*standing together*".
***Stance***
>"comes from the Italian "*stanza*" which means stopping place (*like a room within the house*). Your stance is something that's not likely to change. You have stopped there, your decision is made. You're done."
https://www.vocabulary.com/dictionary/stance
Origin:
>***stā-***, Proto-Indo-European root meaning "to stand, set down, make or be firm," with derivatives meaning "place or thing that is standing."
e.g. Afghanistan - the place of the Afghan peoples, and in
>Greek ***histēmi*** "put, place, cause to stand; weigh,"
https://www.etymonline.com/word/stance
***System***
>Greek ***systema*** "organized whole, a whole compounded of parts," from stem of *synistanai* "to place together, organize, form in order," from syn- "together" (see syn-) + root of histanai "cause to stand," from PIE root *sta- "to stand, make or be firm."
The #Rhythm of #Life is in the #Recursion process where #Information is generated when:
>"our rhythmic #expectations are violated, (*and*) our brains behave in a different manner because of our inherent (*innate*) internal sense of rhythm."
https://thereader.mitpress.mit.edu/the-extraordinary-ways-rhythm-shapes-our-lives/
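One way to make that concrete (my gloss, connecting it to Shannon's measure of information): an event carries information in proportion to how strongly it violates expectation, which is exactly what the surprisal formula captures:

```python
import math


def surprisal(p: float) -> float:
    """Shannon surprisal in bits: the information carried by an event of probability p."""
    return -math.log2(p)


print(surprisal(0.99))  # ~0.01 bits -- the beat lands where expected, almost nothing is learned
print(surprisal(0.01))  # ~6.64 bits -- the expectation is violated, much is learned
```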
#Kihbernetics is the study of #Complex #Dynamical #Systems with #Memory, which is very different from all other #SystemsThinking approaches. Kihbernetic theory and principles are derived primarily from these three sources:
1️⃣ #CE_Shannon's theory of #Information and his description of a #Transducer (see the sketch after this list),
2️⃣ #WR_Ashby's #Cybernetics and his concept of #Transformation, and
3️⃣ #HR_Maturana's theory of #Autopoiesis and the resulting #Constructivism
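A minimal sketch of the transducer idea referenced above, assuming a toy Mealy-style state machine (an illustrative reduction, not Kihbernetics code): having memory means the output depends on an internal state shaped by past inputs, not on the current input alone:

```python
def next_state(state: str, symbol: str) -> str:
    """State transition: remember whether the previous symbol was an 'a'."""
    return "seen_a" if symbol == "a" else "start"


def output(state: str, symbol: str) -> str:
    """Output function: the same input 'b' yields different outputs depending on history."""
    return "B!" if (state == "seen_a" and symbol == "b") else symbol


state = "start"
for symbol in "abab":
    print(output(state, symbol), end=" ")  # a B! a B!
    state = next_state(state, symbol)
print()
```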
Although applicable to any dynamical system with memory (mechanisms, organisms, or organizations), we developed our Kihbernetic worldview mostly to help people navigate their #organization through times of #change.
We define* an organization as:
"An integrated composite of people, products, and processes that provide a capability to satisfy a stated need or objective."
*Definition of the word "system" in MIL_STD_499B
#People are at the forefront of our thinking (#who and #why are we doing this for and/or with?).
We then focus our efforts on understanding all the functions or #Processes in your organization (#how and #when does something happen or have to happen?).
Finally, we analyze the #Products and/or services that you put on the market, but we are mostly interested in the tools that you use, or may need to buy or develop, in order to fully integrate your production system (the plan for #what and #where things will happen?).
Our goal is to make the people of your organization self-reliant to the point that they shouldn't need our assistance with the continuous maintenance and adaptation of the system.
In any case, we've got your back while you do the heavy lifting of establishing a better future for your organization!