ᛕᎥᕼᗷᗴᖇᑎᗴ丅Ꭵᑕᔕ

@tg9541

I believe Rosen is complaining here only about one particular type of #reductionism - #computationalism.

I think he was very well aware that all anticipatory systems must maintain some (reduced) #model of reality in order to **anticipate** how the things in their environment that may affect them are likely to unfold.
Science cannot dispense with "good reductionism" such as, for example, Searle's Biological Naturalism.

The excerpt is from R. L. Kuhn's "Landscape of Consciousness"

sciencedirect.com/science/arti

SelfAwarePatterns

Spurred by a couple of recent conversations, I’ve been thinking about computation in the brain.

It was accelerated this week by the news that the connectome of the fly brain is complete, a mapping of its 140,000 neurons and 55 million synapses. It’s a big improvement over the 302 neurons of the C. elegans worm, which were mapped decades ago. Apparently there are already new computational models built on the data, including models of a fly tasting sugar and of its vision.

It raises the question of what kinds of paradigm shifts we might eventually see from these mappings, and how they bear on the debates about current paradigms, such as between computationalism and dynamical systems views, or between the varieties of computationalism.

There are many variations of the computational theory of mind. For this post, I’m going to group them into three broad categories.

The first is not held by serious theorists, but it is often the way lay people understand it, and the version typically attacked by critics. It’s the idea that the brain works like a general purpose programmable computer. I have to admit I once thought this. I didn’t blink when Agent Smith downloaded a copy of himself from the Matrix into a human’s brain, when Neo had skills instantly downloaded into his, or at other similar sci-fi scenarios.

But this view doesn’t survive a casual understanding of neuroscience. A person’s personality is thought to be encoded in hundreds of trillions of synapses, the connections between their neurons. Unlike random access memory or disk storage in a computer, there’s no mechanism to update synapses en masse, except by neural activity over time. So Agent Smith can’t just copy himself in, at least not to the organic parts of the human’s brain.
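
As a rough illustration of that difference, here is a minimal sketch (all names and numbers are invented for the example) contrasting a bulk memory overwrite with a synaptic weight matrix that only changes incrementally through activity, here via a simple Hebbian-style rule:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Agent Smith" style bulk copy: instant, wholesale replacement of memory.
ram = np.zeros(1024)
new_contents = rng.normal(size=1024)
ram[:] = new_contents          # one assignment rewrites everything at once

# Synapse-style update: weights drift only as correlated activity occurs.
n_neurons = 100
weights = rng.normal(scale=0.01, size=(n_neurons, n_neurons))
learning_rate = 1e-3

for step in range(1000):                            # many small steps, not one copy
    pre = rng.random(n_neurons)                     # presynaptic firing rates
    post = weights @ pre                            # postsynaptic activity
    weights += learning_rate * np.outer(post, pre)  # Hebbian-style nudge
```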

The second type of computationalism is often called “Turing machine” or “machine state” functionalism. It’s the idea that the brain computes using well-defined operations. It’s often coupled with representationalism, Mentalese (a “language of thought”), and other concepts. It’s the set of assumptions that underlies much of cognitive science from the late twentieth century. It resembles the first version above, but with an understanding that brain state transitions are more stochastic than deterministic, no expectation that we’re talking about something programmable, and other adjustments for biological realities.
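
As a caricature of what “well-defined operations” over discrete states might look like, here is a toy finite-state machine; the states and percepts are invented for the example and are not meant as a model of any actual cognitive process:

```python
# Well-defined transitions over discrete symbolic states.
transitions = {
    ("hungry", "see_food"): "approach",
    ("approach", "reach_food"): "eat",
    ("eat", "satiated"): "rest",
}

def step(state: str, percept: str) -> str:
    """Apply one well-defined state transition; stay put if none matches."""
    return transitions.get((state, percept), state)

state = "hungry"
for percept in ["see_food", "reach_food", "satiated"]:
    state = step(state, percept)
print(state)  # -> "rest"
```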

Steven Pinker made a convincing case for this version of computationalism in his book *How the Mind Works*. Even on my initial reading, though, the idea of Mentalese felt dubious, like too much of a projection of how we might engineer a brain rather than what evolution actually did.

The second version is now considered the classic computational theory of mind. In the 1980s, it started to be challenged by the third version, connectionism: the idea of a neural network, where the processing happens in a massively parallel and distributed fashion, and the nodes have connections whose strengths vary continuously rather than discretely.

This recognizes the analog nature of how brains work. The on/off nature of neural action potentials is often cited as discrete in a digital fashion. But the frequency of firing carries informational significance, and it varies continuously, along with the gradual buildup, triggered by input synapses, that leads to the cell firing.
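
A minimal leaky integrate-and-fire sketch (a standard textbook neuron model, with made-up parameter values) illustrates the point: individual spikes are all-or-nothing, but the membrane potential builds up gradually and the firing rate varies smoothly with input strength:

```python
def firing_rate(input_current, sim_time=1.0, dt=1e-4,
                tau=0.02, threshold=1.0, v_reset=0.0):
    v, spikes = 0.0, 0
    for _ in range(int(sim_time / dt)):
        v += dt * (-v + input_current) / tau   # gradual, analog buildup
        if v >= threshold:                     # discrete, all-or-nothing spike
            spikes += 1
            v = v_reset
    return spikes / sim_time                   # spikes per second

for current in (1.1, 1.5, 2.0, 3.0):
    print(current, firing_rate(current))       # rate rises smoothly with input
```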

Importantly, this paradigm has no built-in symbols, just neural processing through constantly updated connections. For decades, there were debates between “implementationist” and “eliminative” connectionists about whether neural networks were just modeling, at a lower level of description, something the classic theories modeled at a higher level, or whether the idea of symbolic computation in the brain was simply a mistaken idea ripe for elimination.

For a long time, buttressed by Pinker’s arguments, I was in the implementationist camp. But I realized this week that it’s been a long time since I’ve felt comfortable with symbolic descriptions of brain processes. Somewhere along the line, while reading about neuroscience or about the recent progress with artificial neural networks, my confidence in the symbolic paradigm eroded. As I noted in a post last year, there are major differences between how a neural network works and how the device you’re using to read this works, differences symbolic approaches often overlook.

Of course, this is still computation. Artificial neural networks have historically been implemented on digital computers. (Neuromorphic computing may change that.) And the processors in these computers can be thought of as networks of logic gates, a technology actually inspired by the first paper on neural computation in 1943. So at bottom they still share the same type of dynamics. In principle, any neural process can be implemented in a Turing machine type system, and vice versa.
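
For a sense of that 1943 connection, here is a sketch of a McCulloch-Pitts-style threshold unit; with suitable (illustrative) weights and thresholds it reproduces basic logic gates, and NAND alone suffices to build any Boolean circuit:

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def AND(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return mp_neuron([a], weights=[-1], threshold=0)

# NAND = NOT(AND) is universal, so any Boolean circuit can be built from such units.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), NOT(AND(a, b)))
```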

And I’m not necessarily opposed to using symbols to understand neural processing. But I now feel like they must be used cautiously, with an eye kept on the underlying implementation details. Representations, for instance, have to be understood, not as contiguous images in the brain, but as distributed neural firing patterns that evolve with time. We have to be on guard not to slip into thinking too much in the ways technological computers work.
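
As a hypothetical contrast (names and numbers invented for the example), a symbolic view might treat a representation as a single token, while a distributed view treats it as a pattern of firing rates across a population that drifts over time yet stays more similar to its own category than to others:

```python
import numpy as np

rng = np.random.default_rng(1)

symbol = "CAT"                                   # symbolic view: a single discrete token

n_neurons = 500
cat_pattern = rng.normal(size=n_neurons)         # distributed view: a whole population
dog_pattern = rng.normal(size=n_neurons)

pattern = cat_pattern.copy()
for t in range(10):                              # the pattern evolves moment to moment
    pattern += 0.05 * rng.normal(size=n_neurons)
    sim_cat = pattern @ cat_pattern
    sim_dog = pattern @ dog_pattern
    print(f"t={t}: similarity to cat {sim_cat:.1f}, to dog {sim_dog:.1f}")
```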

It’s worth noting that while connectionist networks with their artificial neurons are much more biologically plausible than traditional computational models, they’re still abstract simplifications. Real biological neurons remain much more complex. They probably always will be. The question is which way of abstracting them provides insights and progress. As computational neuroscience continues to develop, the answers will likely evolve.

A lot of people in the embodied cognition camp think that this evolution will lead us away from computation. Embodied cognition is the idea that mental processes can only be understood in relation to the brain being embedded in a body and an environment, and the enactive engagement between them. It extends the mind into the body and environment. Some in this camp think that a dynamical systems view will ultimately prove a better model for the brain.
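
One way to see why the two views are not mutually exclusive: a toy Hopfield-style network (patterns made up for the example) can be read computationally, as retrieving a stored memory, or dynamically, as a state vector settling into an attractor under repeated application of the same update rule:

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = patterns.T @ patterns                       # Hebbian weight matrix
np.fill_diagonal(W, 0)

x = np.array([1, -1, -1, -1, 1, -1])            # noisy version of the first pattern
for t in range(5):                              # dynamical description: iterate the map
    x = np.sign(W @ x)

print(x)                                        # computational description: memory recalled
```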

The embodied movement seems to get a lot right. It does make sense to view cognition as integrated with the brain’s environment. But the more radical factions in this camp, I think, go overboard. The dynamical view might provide insights in some areas, such as muscle coordination, but it’s hard to see how it scales up into full cognition. Every computational system is also a dynamical one. The question is what level of description is more useful. And the desire of many in this camp to eliminate representations as a concept has, ironically, a behaviorist feel to it.

The old school behaviorists seemed motivated to pull psychology away from the freewheeling introspective methods of their predecessors, and focus on what could be measured. They went overboard in denying that mental states had explanatory value. Eventually it was realized that if computers can have internal states that explain their output, there was no reason to think minds couldn’t either, which allowed psychology to break out of this mindset.

The most radical views in the embodied camp feel like a reaction against classic computationalism, and to some extent against the lay person’s understanding of it. But they seem to risk falling back to a paradigm that denied mental states, or at least mental content.

So while I think a modest understanding of the embodied, embedded, enactive, and extended paradigm can provide insights on the types of computations that are happening, I don’t see it as a complete alternative to computation, at least not yet.

Which means, for now, I remain a computational functionalist, albeit one now more in the connectionist camp than I had realized.

What do you think? Are there reasons to still favor the classic computational approaches? Does the embodied movement challenge computationalism more than I’m thinking? Or should we be looking at some completely different paradigm?


https://selfawarepatterns.com/2024/10/06/classic-and-connectionist-computationalism/

#computationalism #connectionism #Consciousness #Mind #Neuroscience #Philosophy #PhilosophyOfMind

ᛕᎥᕼᗷᗴᖇᑎᗴ丅Ꭵᑕᔕ

The premise of this article is solid. The brain evolved first and foremost as a #control mechanism. Symbolic "information processing" is a later development.

However, just from reading the reactions in the comments section, one can easily see that #computationalism is still very much the mainstream theory of mind.

aeon.co/essays/your-brain-does

Your brain does not process information and it is not a computer | Aeon Essays

ᛕᎥᕼᗷᗴᖇᑎᗴ丅Ꭵᑕᔕ

The 20th century is said to have been the "age of #machines" because all explanations of how things work would end in some kind of computing or "information processing" by a known #mechanism.

Some people think that the 21st century will be the "age of #biology" because science seems to be starting to look to nature and the living #organism for inspiration about how things really work.

If this is true then #Computationalism must be one of the last remnants of the past century.

Teixi

» more theoretical thinking and less (unthinking) #ml or less confusion between #machinelearning and theory

Then #AI can be a useful theoretical tool...

The thesis of #computationalism implies that it is possible in principle to understand human cognition as a form of computation.
However, this does not imply that it is possible in practice to computationally (re)make cognition «

psyarxiv.com/4cbuv

#philosophyofneuroscience
#systemsneuroscience
#neuroscience
#neurotheory
@cognition

ᛕᎥᕼᗷᗴᖇᑎᗴ丅Ꭵᑕᔕ

1943 - The year when it all started:
#Cybernetics, #Computationalism, #ANN
From: *Brains, Machines, and Mathematics*
by: *Michael A. Arbib*