Trying things out. I will be posting mainly #neuroscience. So here is a question. What is the most important thing we need to be able to do to understand the principles on which brains work?
@dickretired Perturb brain circuits in ways that test state-space concepts of neural activity. E.g., stimulate along particular vectors in order to move along a putative line attractor.
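(A minimal sketch of the idea above, not anyone's actual experiment: a 2-D linear system with a line attractor along the x-axis. A perturbation along the attractor direction persists, while an orthogonal one decays, which is what a stimulation experiment along those vectors would probe.)

```python
import numpy as np

# dx/dt = A @ x, with eigenvalue 0 along the attractor direction
# and -1 orthogonal to it: shifts along the line are stable,
# shifts off the line relax back.
A = np.array([[0.0,  0.0],
              [0.0, -1.0]])

def simulate(x0, dt=0.01, steps=1000):
    """Euler-integrate dx/dt = A @ x starting from x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (A @ x)
    return x

on_line  = simulate([1.0, 0.0])  # "stimulation" along the attractor
off_line = simulate([0.0, 1.0])  # "stimulation" orthogonal to it

print(on_line)   # stays near [1, 0]: the perturbation persists
print(off_line)  # second coordinate decays toward 0
```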
@SussilloDavid @dickretired I second this take from David. I would add that we need this for testing learning algorithms as well, because we really need to be able to set specific activity states to see how this induces changes downstream.
@albertcardona @tyrell_turing @SussilloDavid @dickretired Do you think this level of detail is as necessary in larger, more complex brains? It seems that if state spaces offer the right level of explanation, then individual nodes of the network could rightfully be abstracted away.
@albertcardona @tyrell_turing @SussilloDavid @dickretired Not dismissing, and maybe not unnecessary, but perhaps less of a priority? Just curious whether you think the level of explanation/abstraction should scale with the network complexity.
@beneuroscience @albertcardona @tyrell_turing @SussilloDavid
Good question. It can’t. What we need to understand are the principles. We will never be able to comprehend a working human brain. Drosophila or angelfish, perhaps.
@dickretired @albertcardona @tyrell_turing @SussilloDavid If principles == shared principles among all brains, OK. Alternatively, new principles may emerge in more complex brains, and revealing them may require more abstraction.
Agreed. But evolution tends to use what has already worked, so studying nematode worms, Drosophila, and angelfish (as some are) may be a good place to start.
@beneuroscience @albertcardona @tyrell_turing @dickretired
Sidestepping the connectome question, it's worth noting that if perturbation vectors are directly aligned to neurons, that's *really useful.* It's even more useful if they are aligned to neurons of a specific cell-type. So state space is useful, but the details really matter, too, IMO.
E.g. see https://www.sciencedirect.com/science/article/pii/S0092867422011138
@beneuroscience @albertcardona @tyrell_turing @dickretired
In this paper, we found an optotagged cell class in the medial habenula whose activity by all appearances looks like a line attractor. If we could stimulate that population of neurons (and if some math simplifies), then we'd be one step closer to understanding integration in (mouse) brains.
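(Why a line attractor matters for integration, as a toy sketch of my own, not the paper's model: position along the attractor can store the running sum of an input, because there is no leak in that direction.)

```python
import numpy as np

# x = [position on the attractor line, orthogonal coordinate].
# Input u drives motion along the line; the orthogonal coordinate
# leaks back to zero. The on-line position ends up ~ the integral
# of the input -- the hallmark of an attractor-based integrator.
decay = 1.0
dt = 0.01

x = np.zeros(2)
inputs = np.full(500, 0.2)   # constant drive along the line

for u in inputs:
    dxdt = np.array([u, -decay * x[1]])  # no leak along the line
    x = x + dt * dxdt

print(x[0])  # approx dt * sum(inputs) = 0.01 * 500 * 0.2 = 1.0
```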
@SussilloDavid
Sounds very cool and it's been on my reading list, thanks :)
@beneuroscience @tyrell_turing @SussilloDavid @dickretired Seems premature to dismiss data as unnecessary only because it’s hard to acquire.