@NicoleCRust @cogneurophys @charanranganath @PessoaBrain @tyrell_turing
The Barack and Krakauer paper basically claims that there are two views of how neural computation works: the Sherringtonian and the Hopfieldian. But maybe that's not exactly what you're looking for?
@jerlich @NicoleCRust @cogneurophys @charanranganath @PessoaBrain
Yes, this is a great suggestion. I can't think of any other reviews of the specific impact of Hopfield Networks on memory research off the top of my head though. There are more general reviews about attractor models...
@tyrell_turing @jerlich @NicoleCRust @cogneurophys @charanranganath @PessoaBrain
It does seem like the Hopfield field has split into memory and (continuous) attractor camps. Would be curious for someone to... unite them in one model that's All We Need™️
A good recent review on the attractor side of things: https://www.nature.com/articles/s41583-022-00642-0
@dlevenstein @tyrell_turing @NicoleCRust @cogneurophys @charanranganath @PessoaBrain
@dbarack
This is a little different, but Paul Glimcher's book (https://mitpress.mit.edu/9780262572279/decisions-uncertainty-and-the-brain/) spends a lot of time going through the history of the Sherringtonian view and how it has led us astray. In a nutshell, the Sherringtonian view (which is basically a Cartesian or Pavlovian one) is that action is a reflex in response to sensory input. Paul draws on game theory (in particular the work of John Maynard Smith) to argue that competing animals cannot be "reflexive" because they need to be unpredictable. I don't recall if he directly connects this unpredictability with attractor dynamics.
@jerlich @dlevenstein @tyrell_turing @NicoleCRust @cogneurophys @charanranganath @PessoaBrain @dbarack That sounds like the perfect cue for my favorite example of what happens to animals whose behavior is reflexive: they become lunch:
https://www.youtube.com/watch?v=urBp2X5mBmQ
@jerlich
Great reminder, I really need to go back to that book.
@dlevenstein @tyrell_turing @NicoleCRust @cogneurophys @charanranganath @dbarack
@dlevenstein @tyrell_turing @jerlich @NicoleCRust @cogneurophys @charanranganath @PessoaBrain
https://arxiv.org/abs/2008.02217 ? Not what you actually want though...
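For context, the core retrieval rule in that paper (the continuous "modern Hopfield network" update, which is essentially softmax attention over stored patterns) can be sketched in a few lines — a toy NumPy illustration, not the paper's actual implementation, with names of my own choosing:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hopfield_retrieve(X, xi, beta=20.0):
    """One modern-Hopfield update: xi_new = X @ softmax(beta * X.T @ xi).

    X holds one stored pattern per column; a large beta sharpens retrieval
    toward the single stored pattern most similar to the query xi.
    """
    return X @ softmax(beta * (X.T @ xi))

# Three stored patterns (columns e1, e2, e3) and a noisy cue near the first
X = np.eye(4)[:, :3]
xi = np.array([0.9, 0.1, 0.0, 0.0])
print(hopfield_retrieve(X, xi))  # ~ e1 = [1, 0, 0, 0]
```

Lowering beta blends stored patterns instead of snapping to one; that beta-controlled sharpness is the knob the paper uses to connect Hopfield retrieval to transformer attention.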
@Neurograce @dlevenstein @tyrell_turing @jerlich @cogneurophys @charanranganath @PessoaBrain
Helpful - thank you!
@NicoleCRust @Neurograce @dlevenstein @tyrell_turing @jerlich @cogneurophys @PessoaBrain I'm certainly no expert on the topic, but my take is that the biggest insights come from the limitations of the Hopfield model, in terms of susceptibility to catastrophic forgetting & capacity limitations.
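Those limitations are easy to see in the classic binary Hopfield network. A minimal sketch of Hebbian storage and recall of ±1 patterns in plain NumPy (function names are my own, not from any particular paper):

```python
import numpy as np

def store(patterns):
    """Hebbian (outer-product) weight matrix for ±1 patterns, one per row."""
    N = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / N
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, steps=5):
    """Synchronous sign updates until a fixed point (or `steps` iterations)."""
    for _ in range(steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Two orthogonal 8-unit patterns, well under the ~0.14*N capacity limit
p = np.array([1, 1, 1, 1, -1, -1, -1, -1])
q = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = store(np.stack([p, q]))

noisy = p.copy()
noisy[0] *= -1           # corrupt one bit
print(recall(W, noisy))  # recovers p exactly
```

Push the number of stored random patterns much past roughly 0.14*N and recall degrades sharply into spurious states — that capacity ceiling (and the fact that new patterns overwrite old ones) is exactly the kind of limitation being pointed to here.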
@charanranganath @NicoleCRust @Neurograce @dlevenstein @jerlich @cogneurophys @PessoaBrain
Agreed. It's been such an important model for conceptualisation and theorising in these ways.
Have you played with these "Hopfield Layer" networks?
@jerlich No, not myself
@jerlich @cogneurophys @charanranganath @PessoaBrain @tyrell_turing Yes, that broadens it out even beyond memory. Thank you!