"Structured cerebellar connectivity supports resilient pattern separation" Nguyen, Thomas et al. in @darbly's lab nature.com/articles/s41586-022

Spectacular work based on connectomic reconstruction from nanometre-resolution volume electron microscopy and computational modelling that contributes novel findings in cerebellar microcircuitry:

"both the input and output layers of the circuit exhibit redundant and selective connectivity motifs, which contrast with prevailing models. Numerical simulations suggest that these redundant, non-random connectivity motifs increase the resilience to noise at a negligible cost to the overall encoding capacity. This work reveals how neuronal network structure can support a trade-off between encoding capacity and redundancy, unveiling principles of biological network architecture with implications for the design of artificial neural networks."

One of the two first authors, Logan Thomas @lathomas42 is on mastodon, as is the senior author Wei Lee @darbly. Welcome! And what a spectacular paper. Those must be the prettiest Purkinje cell renderings since Cajal's famous century-old ones. This time with synapses though!

@albertcardona Thank you for so kindly describing Tri Nguyen’s and @lathomas42’s paper. Indeed, Purkinje cells are amongst the most “elegant and luxurious” extracted from our EM data.

Folks can see more for themselves here: github.com/htem/cb2_project_an

@albertcardona @darbly This just came across my feed as a complete layperson, and I am fascinated.

It makes some intuitive sense to me that maximizing encoding capacity increases sensitivity to noise. But could anyone provide me a starting point to understanding how the neuronal structure mapped here increases noise resilience?

@callieconnected @albertcardona Noise is complex. A system’s robustness to noise can be improved in different ways, for example by minimizing variability or increasing signal size and separability.

We think the network we mapped is able to do this because neurons sample more redundantly than expected from specific inputs. Those inputs may be less noisy or convey more relevant information. This is interesting because prevailing models assume random, non-specific sampling, which is thought to optimize the amount of information a network can encode.
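A toy calculation can make the intuition concrete. The sketch below is not the paper's model; it just compares, under assumed noise levels, the signal-to-noise ratio of a summed readout that samples every input once (random, non-specific sampling) against one that spends the same synapse budget redundantly on the less noisy inputs (redundant, selective sampling):

```python
import math

def snr(weights, noise_sd):
    """Analytic SNR of a linear readout: input i contributes
    weight w_i times (unit signal + Gaussian noise with SD noise_sd[i]).
    Signal adds linearly; independent noise adds in quadrature."""
    signal = sum(weights)
    noise = math.sqrt(sum((w * sd) ** 2 for w, sd in zip(weights, noise_sd)))
    return signal / noise

# Assumed toy setup: 10 inputs, half reliable (low noise), half unreliable.
noise_sd = [0.2] * 5 + [1.0] * 5

# Random, non-specific sampling: one synapse on every input.
random_w = [1] * 10

# Redundant, selective sampling: same synapse budget (10 synapses),
# but two synapses on each reliable input and none on the noisy ones.
redundant_w = [2] * 5 + [0] * 5

print(snr(random_w, noise_sd))     # ≈ 4.39
print(snr(redundant_w, noise_sd))  # ≈ 11.18
```

With identical total synaptic weight, concentrating synapses on the reliable inputs more than doubles the readout SNR in this toy example, at the cost of sampling fewer distinct inputs. That cost is the encoding-capacity side of the trade-off the paper quantifies with numerical simulations.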

If interested, this review may be useful: cell.com/neuron/fulltext/S0896

@darbly @albertcardona Thank you so much for the explanation and resource! I'm about to go down a rabbit hole.
