🚨 Our story on an AI-inspired model of cerebro-cerebellar networks is now out in @NatureComms with a few (useful) updates after peer review:
doi.org/10.1038/s41467-022-356
---
RT @somnirons
New preprint by @boven_ellen @JoePemberton9 with Paul Chadderton and Richard Apps @BristolNeuroscience! Inspired by DL algorithms @maxjaderberg @DeepMind we propose that the cerebellum provides the cerebrum with task-specific feedback pred…
twitter.com/somnirons/status/1

Learning without backpropagation is really taking off in 2022

First, @BAPearlmutter et al. show in "Gradients without Backpropagation" that a single forward pass with perturbed weights is enough to compute an unbiased estimate of the gradient:
arxiv.org/abs/2202.08587
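
To make that concrete, here's a minimal toy sketch in JAX (my own illustration, not the authors' code; the loss and step function are made up for the example): sample a random tangent v, get the directional derivative with a single forward-mode jvp call, and scale v by it to obtain an unbiased gradient estimate.

```python
# Toy "forward gradient" descent sketch (assumption: not the paper's code).
import jax
import jax.numpy as jnp

def loss(theta, x, y):
    # Simple linear-regression loss as a stand-in objective.
    pred = x @ theta
    return jnp.mean((pred - y) ** 2)

def forward_gradient_step(theta, x, y, key, lr=1e-2):
    # Sample a random tangent direction v ~ N(0, I) over the parameters.
    v = jax.random.normal(key, theta.shape)
    # One forward-mode pass gives the directional derivative d = grad(loss) . v,
    # with no backward pass at all.
    _, d = jax.jvp(lambda th: loss(th, x, y), (theta,), (v,))
    # g = d * v is an unbiased estimate of the true gradient.
    g = d * v
    return theta - lr * g

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (32, 5))
theta_true = jnp.arange(5.0)
y = x @ theta_true
theta = jnp.zeros(5)
for _ in range(200):
    key, sub = jax.random.split(key)
    theta = forward_gradient_step(theta, x, y, sub)
```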

Then, Mengye Ren et al. show in "Scaling Forward Gradient With Local Losses" that the variance of this estimator is high, but can be reduced by perturbing activities instead of weights (as in Fiete & Seung 2006) and, more importantly, by using many "local loss" functions:
arxiv.org/abs/2210.03310
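
Roughly, the recipe looks like this (again a toy sketch, my own paraphrase, not the paper's implementation; the local loss below is a made-up placeholder): perturb a layer's activations rather than its weights, so the random tangent lives in activation space rather than weight space, and attach a local objective to that layer so each forward gradient only has to cover a small block of the network.

```python
# Sketch of activity perturbation with a local loss for one linear layer
# (my own toy illustration; layer_local_loss is a hypothetical objective).
import jax
import jax.numpy as jnp

def layer_local_loss(h):
    # Placeholder local objective on this layer's activations,
    # e.g. pull per-example activation norms toward 1.
    return jnp.mean((jnp.sum(h ** 2, axis=1) - 1.0) ** 2)

def activity_forward_gradient(W, x, key):
    # Perturb the activations h = x @ W, not the weights: the random tangent
    # scales with the number of units, not the number of weights.
    h = x @ W
    u = jax.random.normal(key, h.shape)
    _, d = jax.jvp(layer_local_loss, (h,), (u,))
    g_h = d * u        # unbiased estimate of dL_local/dh
    g_W = x.T @ g_h    # chain rule through the layer's own linear map only
    return g_W

key = jax.random.PRNGKey(1)
x = jax.random.normal(key, (8, 16))
W = jax.random.normal(key, (16, 32)) * 0.1
g_W = activity_forward_gradient(W, x, jax.random.split(key)[0])
```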

Then Geoff Hinton takes the "local loss" idea to another level in "The Forward-Forward Algorithm", and connects it to a ton of other ideas, e.g. neuromorphic engineering, one-shot learning, self-supervised learning, ...: cs.toronto.edu/~hinton/FFA13.p
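
The core recipe, as I read it: each layer gets its own "goodness" objective (e.g. the sum of squared activations), pushed above a threshold for positive (real) data and below it for negative data, so no backward pass through the stack is needed. A toy single-layer sketch (my reading, not Hinton's code; the threshold value and the negative-data scheme here are simplified stand-ins):

```python
# Toy sketch of one Forward-Forward layer update (my reading of the paper,
# not Hinton's code; threshold and negative data are simplified).
import jax
import jax.numpy as jnp

def goodness(W, x):
    # "Goodness" = sum of squared ReLU activations per example.
    h = jax.nn.relu(x @ W)
    return jnp.sum(h ** 2, axis=1)

def ff_layer_loss(W, x_pos, x_neg, theta=2.0):
    # Push goodness above the threshold for positive data,
    # below it for negative data (logistic loss on the margin).
    g_pos = goodness(W, x_pos)
    g_neg = goodness(W, x_neg)
    loss_pos = jnp.mean(jnp.log1p(jnp.exp(-(g_pos - theta))))
    loss_neg = jnp.mean(jnp.log1p(jnp.exp(g_neg - theta)))
    return loss_pos + loss_neg

# Each layer is trained greedily on its own loss; only that layer's local
# gradient is needed, with no end-to-end backward pass through the stack.
key = jax.random.PRNGKey(0)
W = jax.random.normal(key, (784, 500)) * 0.01
x_pos = jax.random.normal(key, (64, 784))    # stand-in for real data
x_neg = jax.random.permutation(key, x_pos)   # stand-in only; the paper builds
                                             # negatives more carefully
grad_W = jax.grad(ff_layer_loss)(W, x_pos, x_neg)
W = W - 0.03 * grad_W
```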

It looks like #MachineLearning and #Neuroscience are really converging.

Gradients without Backpropagation

Using backpropagation to compute gradients of objective functions for optimization has remained a mainstay of machine learning. Backpropagation, or reverse-mode differentiation, is a special case within the general family of automatic differentiation algorithms that also includes the forward mode. We present a method to compute gradients based solely on the directional derivative that one can compute exactly and efficiently via the forward mode. We call this formulation the forward gradient, an unbiased estimate of the gradient that can be evaluated in a single forward run of the function, entirely eliminating the need for backpropagation in gradient descent. We demonstrate forward gradient descent in a range of problems, showing substantial savings in computation and enabling training up to twice as fast in some cases.
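
In symbols (my gloss of the definition above, not taken verbatim from the paper): with a random tangent v drawn from a standard normal, one forward-mode evaluation gives the directional derivative, and scaling v by it yields an estimator whose expectation is the true gradient.

```latex
% Forward gradient (my gloss). One forward-mode pass gives the
% directional derivative \nabla f(\theta) \cdot v; the forward gradient is
\[
  g_v(\theta) \;=\; \big(\nabla f(\theta) \cdot v\big)\, v,
  \qquad v \sim \mathcal{N}(0, I).
\]
% Unbiasedness: since \mathbb{E}[v v^\top] = I,
\[
  \mathbb{E}_v\!\left[g_v(\theta)\right]
  \;=\; \mathbb{E}_v\!\left[v v^\top\right] \nabla f(\theta)
  \;=\; \nabla f(\theta).
\]
```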


New work from the lab out in Cell today, by En Yang and colleagues:

A brainstem integrator for self-location memory and positional homeostasis in zebrafish

cell.com/cell/fulltext/S0092-8

I've shared the TikZ code for @leaduncker's and my author contribution matrix as an Overleaf template [ overleaf.com/latex/examples/au ]. It's easy to customize the authors, rows, and colors!

Inspired by @SteinmetzNeuro, @jsiegle, @internationalbrainlab, and others mentioned at go.nature.com/3hOgVHL


I am a postdoc at Stanford University, working with Krishna Shenoy. In collaboration with many experimental and computational colleagues, I study the neural mechanisms that control movement, and more broadly, how neural populations spanning interconnected brain regions perform the distributed computations that drive skilled behavior. I develop experimental and computational tools to understand the neural population dynamics that establish speed and dexterity.

I aim to discover insights into brain-wide computations in health and in neurological disease, with an eye towards identifying effective, targeted neuromodulation to treat movement disorders.

I also build open source tools:
- djoshea.github.io/neuropixel-u
- lfads.github.io/lfads-run-mana
- github.com/djoshea/eraasr
- github.com/djoshea/haptic-cont

Looking forward to joining the growing neuro community here!
