Learning without backpropagation is really taking off in 2022

First, @BAPearlmutter et al show in "Gradients without Backpropagation" that a single forward pass along a random weight-perturbation direction is enough to compute an unbiased estimate of the gradient:
arxiv.org/abs/2202.08587
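In JAX the forward-gradient trick is only a few lines (a toy sketch of my own, with a made-up loss and shapes, not the paper's code): sample a random direction v over the parameters, get the directional derivative grad·v from one jax.jvp call, and scale v by it; in expectation that equals the true gradient.

```python
# Minimal forward-gradient sketch (toy loss and shapes are my own assumptions).
# One jax.jvp call = one forward pass; (grad . v) * v is an unbiased estimate
# of the true gradient, since E[v v^T] = I for standard normal v.
import jax
import jax.numpy as jnp

def loss(params, x, y):
    w, b = params
    pred = x @ w + b
    return jnp.mean((pred - y) ** 2)

def forward_gradient(params, x, y, key):
    # Sample a random perturbation direction v with the same structure as params.
    leaves, treedef = jax.tree_util.tree_flatten(params)
    keys = jax.random.split(key, len(leaves))
    v = treedef.unflatten([jax.random.normal(k, p.shape) for k, p in zip(keys, leaves)])
    # Single forward pass: jvp returns the loss and the directional derivative (grad . v).
    _, dir_deriv = jax.jvp(lambda p: loss(p, x, y), (params,), (v,))
    # Scale the direction by the directional derivative to get the estimate.
    return jax.tree_util.tree_map(lambda vi: dir_deriv * vi, v)

key = jax.random.PRNGKey(0)
kx, ky, kv = jax.random.split(key, 3)
x = jax.random.normal(kx, (32, 8))
y = jax.random.normal(ky, (32, 1))
params = (jnp.zeros((8, 1)), jnp.zeros((1,)))
g_w, g_b = forward_gradient(params, x, y, kv)
```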

Then Mengye Ren et al show in "Scaling Forward Gradient With Local Losses" that the variance of this estimator is high, but that it can be reduced by perturbing activities instead of weights (as in Fiete & Seung 2006) and, more importantly, by using many "local loss" functions:
arxiv.org/abs/2210.03310
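Roughly, activity perturbation plus a local loss looks like this (again a toy sketch with made-up shapes and a hypothetical linear readout as the local objective, not the paper's code): perturb the block's activations, estimate dL/dh with one forward pass of the local loss, and turn that into weight gradients with a computation that stays inside the block.

```python
# Toy sketch of activity-perturbed forward gradients with a local loss
# (shapes, the relu block, and the linear-readout objective are my assumptions).
# The perturbation lives in activation space, which is much lower-dimensional
# than weight space, and no gradient ever flows between blocks.
import jax
import jax.numpy as jnp

def block(params, x):
    w, b = params
    return jax.nn.relu(x @ w + b)           # activations of this block

def local_loss(h, y, readout):
    # Hypothetical local objective: a small linear readout to the targets.
    return jnp.mean((h @ readout - y) ** 2)

def activity_forward_grad(params, readout, x, y, key):
    h = block(params, x)
    u = jax.random.normal(key, h.shape)      # perturb activities, not weights
    # Directional derivative of the *local* loss along u, in one forward pass.
    _, dir_deriv = jax.jvp(lambda a: local_loss(a, y, readout), (h,), (u,))
    dL_dh = dir_deriv * u                    # unbiased estimate of dL/dh
    # Push the activity gradient into this block's weights with a purely local VJP.
    _, vjp_fn = jax.vjp(lambda p: block(p, x), params)
    return vjp_fn(dL_dh)[0]

key = jax.random.PRNGKey(0)
kx, ky, kw, kr, ku = jax.random.split(key, 5)
x = jax.random.normal(kx, (32, 8))
y = jax.random.normal(ky, (32, 4))
params = (0.1 * jax.random.normal(kw, (8, 16)), jnp.zeros((16,)))
readout = 0.1 * jax.random.normal(kr, (16, 4))
g_w, g_b = activity_forward_grad(params, readout, x, y, ku)
```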

Then Geoff Hinton takes the "local loss" idea to another level in the "Forward-Forward Algorithm", and connects it to a ton of other ideas, e.g. neuromorphic engineering, one-shot learning, self-supervised learning, ...: cs.toronto.edu/~hinton/FFA13.p
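The per-layer objective in Forward-Forward is simple enough to sketch (this is my reading of the paper, with stand-in positive/negative data and made-up shapes): each layer's "goodness" is the sum of squared activities, pushed above a threshold on real data and below it on negative data, and each layer is trained on its own loss only.

```python
# Rough sketch of one Forward-Forward layer objective (my own simplified setup;
# how negative data is generated is glossed over here). There is no backward
# pass through the network: each layer only gets the gradient of its own loss.
import jax
import jax.numpy as jnp

def layer(params, x):
    w, b = params
    # Normalize the input so a layer can't just pass goodness straight through.
    x = x / (jnp.linalg.norm(x, axis=-1, keepdims=True) + 1e-6)
    return jax.nn.relu(x @ w + b)

def goodness(h):
    return jnp.sum(h ** 2, axis=-1)

def ff_layer_loss(params, x_pos, x_neg, theta=2.0):
    # Logistic objective: goodness above theta on positive data, below on negative.
    g_pos = goodness(layer(params, x_pos))
    g_neg = goodness(layer(params, x_neg))
    return jnp.mean(jax.nn.softplus(theta - g_pos) + jax.nn.softplus(g_neg - theta))

key = jax.random.PRNGKey(0)
k_pos, k_neg, k_w = jax.random.split(key, 3)
x_pos = jax.random.normal(k_pos, (32, 16))   # stand-in for real ("positive") data
x_neg = jax.random.normal(k_neg, (32, 16))   # stand-in for negative data
params = (0.1 * jax.random.normal(k_w, (16, 32)), jnp.zeros((32,)))
grads = jax.grad(ff_layer_loss)(params, x_pos, x_neg)
```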

It looks like these lines of work are really converging.

@giacomoi thanks for flagging this Hinton paper
