New computational model of #HippocampalReplay:
A recurrent network model of planning explains hippocampal replay and human behavior
By Jensen et al.
It looks super interesting! But let me tell you: this is not what hippocampal replay does in rodents… replay is not for planning. 🤷‍♂️
@elduvelle Thanks for posting! I’m curious: Could you elaborate why replay in rodents is not for planning?
@lnnrtwttkhn yes! There is no replay at the time of planning in rodents. Well, in rats at least. All replay is linked to consuming reward - some form of consolidation, but very likely not planning. There is maybe one exception that I can think of, which is the shock zone study, and the data is not super convincing…
Also, I am currently analyzing data from an experiment where we specifically address this, and it confirms it… sad, but it is what it is :)
@lnnrtwttkhn @elduvelle This more recent study casts some real doubt on replay as planning: https://doi.org/10.1016/j.neuron.2021.07.029
@elduvelle @beneuroscience @ak_gillespie Thank you very much for the pointer, discussions and clarification!
@elduvelle @lnnrtwttkhn @ak_gillespie El, do you really mean there is no replay at the planning location (but still SWRs?), or that the replay seen there does not predict future choice?
@beneuroscience @lnnrtwttkhn @ak_gillespie
No or very few SWRs, and no or very little replay, even without considering the content, and even when rate / time spent pausing is taken into account…
@elduvelle @lnnrtwttkhn @ak_gillespie Wow, very interesting - looking forward to the upcoming story! On a related point, do you believe theta sweeps during VTE reflect planning?
@beneuroscience @lnnrtwttkhn @ak_gillespie Much more likely, I would say :)
@beneuroscience @lnnrtwttkhn Indeed - I was actually going to point to the discussion of @ak_gillespie’s paper for a list of the arguments against a (direct) role of hippocampal replay in planning :) and also, more generally, for the definitions of terms.
All the studies you (@lnnrtwttkhn) mention use continuous tasks in which animals alternate between two or more reward locations. Replay happening at those locations could reflect consolidation of the path that led to the current reward, or even to a previous reward, rather than planning of the future path. The Pfeiffer & Foster paper (and Xu et al. 2019) are the only ones that go a bit beyond that. But when you actually dissociate the planning location from the goal location, as in my current experiment*, you see practically no replay at the planning location.
(*hoping to be able to tell more about this in the coming months!)