Following up on the idea that the theories we will need to tackle the complexity of the brain have not been developed yet (e.g. mastodon.social/@NicoleCRust/1):

What types of up-and-coming theoretical(ish) frameworks are you most excited about? Dynamical systems / RNNs? Topology? Network theory? Something else entirely?

@complexsystems @cogneurophys @PessoaBrain @SussilloDavid @carlosbrody @Neurograce @neuralreckoning @tyrell_turing @DrYohanJohn @cian @WiringtheBrain @tdverstynen @neuralengine (Anyone?)

@NicoleCRust @complexsystems @cogneurophys
@PessoaBrain @carlosbrody @Neurograce @neuralreckoning @tyrell_turing @DrYohanJohn @cian

I know the question’s framing is about what theory we are missing, but I firmly believe if we could measure (some statistically reliable sample of) all neurons’ spiking, our current difficulties would be over very quickly.

@SussilloDavid @NicoleCRust @complexsystems @cogneurophys @PessoaBrain @carlosbrody @Neurograce @neuralreckoning @tyrell_turing @DrYohanJohn Can't we do that already in, e.g., C. elegans and zebrafish larvae?
Also, IMO spikes are just the tip of the iceberg. There's lots of intra- and inter-cellular chemical computation, which may be fundamental.

@cian @SussilloDavid @NicoleCRust @complexsystems @cogneurophys @PessoaBrain @carlosbrody @Neurograce @neuralreckoning @tyrell_turing

Yup I was thinking of C. elegans too.

Also, artificial systems should make us skeptical of the benefits of a neural panopticon. Even in human-designed systems with full transparency, there are emergent phenomena that require additional theoretical understanding. For example, in Conway's Game of Life there are higher-order structures that were not anticipated.

@DrYohanJohn @cian @NicoleCRust

C. elegans, with all its neuropeptide signaling, seems more like a 300-body conversation among gene networks. If that's what's going on in 86-billion-neuron networks, then indeed we have some theoretical problems.

Nevertheless... skeptical of the benefits of a neural panopticon? No way! :)

Neuroscience is an empirical science. If we had an NP, then I think we'd have a new set of theoretical problems, but the absolute floundering right now would be a thing of the past.

@DrYohanJohn @cian @NicoleCRust

"Neural panopticon" makes measurement sound bad. Why be in neuroscience if you think that measuring all of the neural state is something to be skeptical of?

As for artificial systems, if we had the full power of the cog/systems/molecular neuro community to understand GPT-3, I think we could easily come to some level of useful understanding.

As for the Game of Life or other exotic systems, sure, there are probably exotic phenomena that are very difficult to understand.

@SussilloDavid @DrYohanJohn @cian @NicoleCRust I am not the full power of the neuro community, but when I try to understand even simple trained NNs using the tools of neuroscience I feel pretty adrift at sea. So yes, our current data-limited regime in experimental neuroscience is a problem above all else, but I am not entirely sure what good things we would do with the data if we had it all.

@Neurograce @SussilloDavid @DrYohanJohn @cian @NicoleCRust But neuroscience tools are getting there. For instance, take a convolutional neural network and image the activity of its neurons with a 2p microscope. Apply rabies tracing to one neuron to find its presyn neurons. Relate the activity of the presyn neurons to that of the postsyn neuron. Discover that it's a weighted sum followed by a static nonlinearity. You've discovered how the network works.
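
To make the logic of that last step concrete, here's a toy simulation (the unit, the weights, and the "recordings" are all made up for illustration; this is not an actual 2p/rabies pipeline): on trials where the unit is above threshold, the input-output relation is exactly linear, so a regression recovers the weights.

```python
import numpy as np

# Toy "postsynaptic" unit in a trained CNN: its response is a weighted sum
# of presynaptic activity passed through a static nonlinearity (a ReLU).
rng = np.random.default_rng(0)
n_presyn = 64
w = rng.normal(size=n_presyn)                   # the weights we'd like to "discover"
presyn = rng.standard_normal((1000, n_presyn))  # presynaptic activity on 1000 "trials"
postsyn = np.maximum(presyn @ w, 0.0)           # recorded postsynaptic activity

# The analysis: on trials where the unit is above threshold the relation is
# exactly linear, so least squares recovers the weights.
active = postsyn > 0
w_hat, *_ = np.linalg.lstsq(presyn[active], postsyn[active], rcond=None)
print(np.allclose(w, w_hat))  # True: a weighted sum plus a static nonlinearity
```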

@MatteoCarandini @Neurograce @SussilloDavid @DrYohanJohn @cian @NicoleCRust Ah... no. You've discovered how individual units operate, perhaps. But how the network works is a whole other beast.

@dbarack @Neurograce @SussilloDavid @DrYohanJohn @cian @NicoleCRust In a convolutional neural network there are only two equations, describing (1) how a unit is driven by other units and (2) how learning updates the weights. Mechanistically, there is nothing else to understand about it other than those two equations. The experiment I described would reveal the first. Revealing the second would require a more complex experiment.
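
In generic textbook notation (my own symbols, nothing specific to the experiment), the two equations are roughly:

```latex
% (1) how a unit is driven by other units: a weighted sum passed
%     through a static nonlinearity f
a_j = f\Big(\sum_i w_{ji}\, a_i + b_j\Big)

% (2) how learning updates the weights: gradient descent on a loss L
%     with learning rate \eta
\Delta w_{ji} = -\eta \,\frac{\partial L}{\partial w_{ji}}
```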

@MatteoCarandini @dbarack @Neurograce @SussilloDavid @cian @NicoleCRust

This is where the Game of Life analogy is helpful. The low-level pixel-flip laws are the entire 'mechanism'. But on their own they did not directly allow anyone to anticipate gliders and other phenomena.

For theoretical neuroscience, it may be that many phenomena are analogous to gliders. They are *allowed* by the mechanisms, but not explicitly predicted.
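
To see how little the mechanism says about the gliders, here is a minimal sketch (assuming numpy; the glider coordinates are the standard ones): the entire update rule fits in a few lines, and nothing in it mentions gliders.

```python
import numpy as np

def step(grid):
    """One Game of Life update: the entire low-level 'mechanism'."""
    # Count the 8 neighbors of each cell (toroidal wrap-around for simplicity).
    n = sum(np.roll(grid, (dy, dx), axis=(0, 1))
            for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    born = (grid == 0) & (n == 3)                   # dead cell with exactly 3 neighbors
    survive = (grid == 1) & ((n == 2) | (n == 3))   # live cell with 2 or 3 neighbors
    return (born | survive).astype(grid.dtype)

# The rules above say nothing about "gliders", yet this 5-cell pattern
# reappears shifted one cell down and to the right every 4 steps.
grid = np.zeros((16, 16), dtype=np.uint8)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1

expected = np.roll(grid, (1, 1), axis=(0, 1))  # the same glider, translated
for _ in range(4):
    grid = step(grid)
print(np.array_equal(grid, expected))  # True
```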

@DrYohanJohn @MatteoCarandini @dbarack @SussilloDavid @cian @NicoleCRust An added complexity in ANNs (and the brain) is that much of the interesting emergent behavior they exhibit depends on the data they are given, not just on unit activation functions and learning rules in the abstract. It is the complicated mix of these things that we want to understand, usually by putting it into the intermediate language of algorithms/computations.

@Neurograce @DrYohanJohn @MatteoCarandini @dbarack @SussilloDavid @cian

I think it's useful to circle back to some toy examples that some of us have talked about before. Do we all agree that the feedforward perceptron does not have emergence, but that the Hopfield net does, as a consequence of its recurrence? (We agreed on that before.)

If so, the next question is: does a CNN have emergence? Or does that only happen once you add recurrence?
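
For anyone who wants the toy example in front of them, here's a minimal Hopfield sketch (the network size, seed, and 10% corruption level are arbitrary choices of mine): the interesting behavior, pattern completion, lives in the recurrent settling dynamics, which a single feedforward pass does not have.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store two random +/-1 patterns in a Hopfield net via the Hebbian rule.
n = 100
patterns = rng.choice([-1, 1], size=(2, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)                      # no self-connections

# Start from a corrupted copy of pattern 0 and let the recurrence settle.
state = patterns[0].astype(float)
flip = rng.choice(n, size=10, replace=False)
state[flip] *= -1                           # 10% of bits flipped

for _ in range(10):                         # recurrent updates settle into an attractor
    state = np.sign(W @ state)
    state[state == 0] = 1                   # break ties, just in case

print(np.array_equal(state, patterns[0]))   # True: the stored memory is recovered
# A feedforward pass is one matrix multiply plus a nonlinearity; the
# pattern-completion behavior here comes from the recurrent settling.
```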


@SussilloDavid @MatteoCarandini @Neurograce @NicoleCRust @cian @dbarack @DrYohanJohn Really? I think CNNs qualify as emergent in the sense that a trained CNN has properties that the same number of disconnected nodes does not have. I agree that networks with internal dynamics are qualitatively different from those without, but “emergence” is not a word I would choose to distinguish between them.

@jerlich @SussilloDavid @MatteoCarandini @Neurograce @cian @dbarack @DrYohanJohn
If we restrict ourselves to the single-layer perceptron, I think the statement is true?

The question of exactly where emergence happens seems to be a bit open.

Either way, one take-home is that the space of brain things that have emergence is vastly larger than the space of things that do not. And therefore none of us should be reductionists.

Also: we need better ways to carve up the vast conceptual space.

@NicoleCRust @jerlich @SussilloDavid @MatteoCarandini @cian @dbarack @DrYohanJohn I suppose I probably wouldn't be interested in studying emergent properties of a single-layer perceptron. But once you add a hidden layer, I think you can claim emergence (according to the only definition of it that is comprehensible to me, which is on the "weak" side).

@NicoleCRust @jerlich @SussilloDavid @MatteoCarandini @Neurograce @cian @dbarack @DrYohanJohn

I know there is a ton of writing on what is or isn't emergent, and I have only read a smattering of it. But I think emergence and reduction are complementary concepts; it's an analytic perspective. Atoms are emergent, for example. So it's not a question of where emergence happens; it's a question of what makes something self-organized at whatever level of organization is being examined. Isms just don't help.
