Learning without backpropagation is really taking off in 2022
First, @BAPearlmutter et al show in "Gradients without Backpropagation" that a single forward-mode pass along a random weight perturbation is enough to compute an unbiased estimate of the gradient:
https://arxiv.org/abs/2202.08587
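Here's a minimal sketch of the idea in JAX (my own toy example, not the authors' code; the loss and shapes are made up): one jvp along a random direction v gives the directional derivative, and scaling v by it gives an unbiased gradient estimate.

```python
import jax
import jax.numpy as jnp

def loss(w):
    # hypothetical toy loss
    return jnp.sum(jnp.tanh(w) ** 2)

def forward_gradient(w, key):
    # Sample a random perturbation direction v ~ N(0, I).
    v = jax.random.normal(key, w.shape)
    # One forward-mode pass gives the directional derivative (grad L . v),
    # no backward pass needed.
    _, dLdv = jax.jvp(loss, (w,), (v,))
    # (grad L . v) * v is an unbiased estimate of grad L, since E[v v^T] = I.
    return dLdv * v

w = jnp.ones(5)
g_est = forward_gradient(w, jax.random.PRNGKey(0))
g_true = jax.grad(loss)(w)  # for comparison only
```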
Then, Mengye Ren et al show in "Scaling Forward Gradient With Local Losses" that the variance of this estimator is high, but can be reduced by perturbing activities instead of weights (as in Fiete & Seung 2006) and, more importantly, by using many "local loss" functions:
https://arxiv.org/abs/2210.03310
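Roughly, per layer (my simplified sketch, not the paper's code; `layer` and `local_loss` are placeholders): perturb the activations instead of the weights, estimate the activation gradient with one jvp against that layer's local loss, then map it to a weight gradient through that single layer only.

```python
import jax
import jax.numpy as jnp

def layer(w, x):
    # hypothetical single layer
    return jnp.tanh(x @ w)

def local_loss(h, y):
    # hypothetical local objective attached to this layer
    return jnp.mean((jnp.sum(h, axis=-1) - y) ** 2)

def activity_forward_grad(w, x, y, key):
    h = layer(w, x)
    v = jax.random.normal(key, h.shape)              # perturb activities, not weights
    _, dLdv = jax.jvp(lambda h_: local_loss(h_, y), (h,), (v,))
    g_h = dLdv * v                                   # unbiased estimate of dL/dh
    # Map the activation gradient to a weight gradient through this one layer
    # only -- a purely local computation, no global backward pass.
    _, pullback = jax.vjp(lambda w_: layer(w_, x), w)
    return pullback(g_h)[0]
```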
Then Geoff Hinton takes the "local loss" idea to another level in "The Forward-Forward Algorithm", and connects it to a ton of other ideas, e.g. neuromorphic engineering, one-shot learning, self-supervised learning, ...: https://www.cs.toronto.edu/~hinton/FFA13.pdf
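The core recipe, as I read the paper (a sketch only; the threshold and the source of negative data are simplified here): each layer has a "goodness" (sum of squared activities) that should be high on positive data and low on negative data, trained with a purely local logistic loss.

```python
import jax
import jax.numpy as jnp

def goodness(w, x):
    h = jax.nn.relu(x @ w)
    return jnp.sum(h ** 2, axis=-1)

def ff_layer_loss(w, x_pos, x_neg, theta=2.0):
    # Logistic loss pushing goodness above a threshold theta for positive
    # (real) data and below it for negative (fake) data.
    p_pos = jax.nn.sigmoid(goodness(w, x_pos) - theta)
    p_neg = jax.nn.sigmoid(goodness(w, x_neg) - theta)
    return -jnp.mean(jnp.log(p_pos + 1e-9)) - jnp.mean(jnp.log(1 - p_neg + 1e-9))

# Each layer is updated from its own local loss, e.g.
# w -= lr * jax.grad(ff_layer_loss)(w, x_pos, x_neg)
```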
It looks like #MachineLearning and #Neuroscience are really converging.