our review article on continual learning in natural and artificial agents is now out in TINS! sciencedirect.com/science/arti.
with @SaxeLab and Timo Flesch

A recurrent network model of planning explains hippocampal replay and human behavior

When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by - and in turn adaptively affect - prefrontal dynamics.

www.biorxiv.org
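
For intuition, here is a toy sketch (not the paper's implementation) of the core idea: an agent that, before each real move, can "think" by sampling imagined action sequences from its own policy, and keeps thinking only while it seems to pay. The grid world, the tabular policy, the per-rollout thinking cost, and the heuristic stopping rule (the paper's agent *learns* when to plan; here that is replaced by a simple cutoff) are all illustrative assumptions.

```python
# Toy sketch of planning via policy rollouts: imagine action sequences
# sampled from the agent's own policy, commit to a real action only once
# further thinking stops paying for itself. All names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
SIZE, N_ACT, GOAL = 4, 4, 15       # 4x4 grid, goal in the far corner
THINK_COST = 0.05                  # assumed per-rollout cost of thinking

def step(state, action):
    """Deterministic toy dynamics: 0=down, 1=up, 2=right, 3=left."""
    r, c = divmod(state, SIZE)
    if action == 0:   r = min(SIZE - 1, r + 1)
    elif action == 1: r = max(0, r - 1)
    elif action == 2: c = min(SIZE - 1, c + 1)
    else:             c = max(0, c - 1)
    s = r * SIZE + c
    return s, 1.0 if s == GOAL else 0.0

def rollout(policy, state, horizon=8):
    """Imagine an action sequence drawn from the agent's own policy."""
    s, ret, first_action = state, 0.0, None
    for t in range(horizon):
        a = rng.choice(N_ACT, p=policy[s])
        if t == 0:
            first_action = a
        s, rew = step(s, a)        # imagined transition, no real move
        ret += rew
        if rew > 0:                # imagined goal reached
            break
    return first_action, ret

def act_or_think(policy, state, max_rollouts=10):
    """Sample rollouts while the best imagined return, net of the
    accumulated thinking cost, keeps improving; then act."""
    best_a, best_net = None, -np.inf
    for k in range(1, max_rollouts + 1):
        a, ret = rollout(policy, state)
        if ret - THINK_COST * k > best_net:
            best_a, best_net = a, ret - THINK_COST * k
        else:
            break                  # thinking no longer pays
    return best_a, k

# Uniform policy as a stand-in for a trained prefrontal-like RNN.
policy = np.full((SIZE * SIZE, N_ACT), 1.0 / N_ACT)
action, n_thought = act_or_think(policy, state=0)
print(f"chose action {action} after {n_thought} rollouts")
```

In the paper the rollouts feed back into the recurrent state so that thinking reshapes the policy itself; the sketch above only captures the cruder point that the amount of thinking should track its marginal benefit.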

random thought...
it's nearly 4 years since Rich Sutton wrote his Bitter Lesson blogpost: incompleteideas.net/IncIdeas/B
he wrote before the explosion of interest in large transformer models. His claim is that "maximally general methods are always better". Maximally general means two things: 1) avoid human priors about cognition; 2) avoid human training data. It's interesting to reflect that he was both very right and very wrong.
- very right, that simple algorithms massively scaled can give you (decent) systematicity, without the need for symbolic bells and whistles. That's not to say that there aren't still important ingredients of intelligence missing in large transformer models. But the level of composition you get in large generative models is much more impressive than most of us predicted. So he was broadly right about (1)
- very wrong, because the really impressive stuff relies on volumes of human feedback. As RLHF comes to the fore, it's become clear that self-play is only going to work for a very narrow set of problems. You need human data in spades.
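
For anyone who hasn't met RLHF: the human data typically enters through a reward model fit to pairwise human preferences (a Bradley-Terry objective), which is then used to fine-tune the policy. A minimal, self-contained sketch with made-up one-dimensional features follows; everything here is illustrative, not any particular system's implementation.

```python
# Minimal sketch of the preference-learning step behind RLHF: fit a
# reward model r(x) so that human-preferred responses score higher than
# rejected ones (Bradley-Terry loss). Features and data are made up;
# real systems use a neural reward model over text, not a linear one.
import numpy as np

rng = np.random.default_rng(0)

# Toy "responses" as 3-d feature vectors, paired as (preferred, rejected).
pref = rng.normal(1.0, 1.0, size=(200, 3))    # humans liked these
rej = rng.normal(-1.0, 1.0, size=(200, 3))    # humans rejected these

w = np.zeros(3)                               # linear reward model r(x) = w @ x
lr = 0.1
for _ in range(500):
    margin = pref @ w - rej @ w               # r(preferred) - r(rejected)
    p = 1.0 / (1.0 + np.exp(-margin))         # P(human prefers "pref")
    # Gradient ascent on the log-likelihood of the human preference data:
    grad = ((1.0 - p)[:, None] * (pref - rej)).mean(axis=0)
    w += lr * grad

print("learned reward weights:", w.round(2))
# Downstream (not shown): the policy is optimized against r(x),
# e.g. with PPO plus a KL penalty to the pretrained model.
```

The point above falls straight out of this setup: without the paired human judgments, there is nothing to fit.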

re: ChatGPT as a co-author...there is nothing new under the sun. When Newell and Simon wrote up their work on the Logic Theorist in 1955 (which found a novel proof for one of the theorems in Russell's Principia Mathematica), they listed the AI as a co-author.
Also nothing new under the sun: the paper was rejected for being insufficiently novel.

RT @KiaNobre
Mark Stokes (@StokesNeuro), RIP
You enriched us with your fortitude and gentleness.
You changed our scientific views with your brilliant mind.
Thank you. Now brighten the stars.

RT @TheBrunoCortex
VISUAL NEURO JOB OPENING:
The University of Oxford's Dept. of Physiology, Anatomy & Genetics and Pembroke College are recruiting an Associate Professor of Neuroscience specializing in vision. 👀🧠 More details at my.corehr.com/pls/uoxrecruit/e

@UniofOxford @DPAGAthenaSwan @OxNeuro @PembrokeOxford

happy to share this new preprint - we asked how goals warp the representations of allocentric space in human BOLD signals
biorxiv.org/content/10.1101/20

with a great team including @hugospiers, @nicoschuck, and lead authors Paul Muhle-Karbe and Hannah Sheahan

I am grateful to Paul Middlebrooks (not here yet) for inviting me to talk about my book on Brain Inspired today! looking forward to the discussion

We are excited to announce that Cognitive Computational Neuroscience (CCN) 2023 will take place in Oxford from August 24 to 27, 2023. The conference will be held at the Examination Schools – more information can be found here:
www.venues.ox.ac.uk/our-venues/examination-schools/.


Confirmed speakers for this year's CCN include Stan Dehaene, Helen Barron, Cate Hartley, Jay McClelland and Tim Kietzmann (@TimKietzmann@neuromatch.social)

We also want to note that the paper submission period will be earlier this year than in previous years: abstract submissions will open at the end of January and close on March 31.

For the most up-to-date information about CCN 2023, including reminders about deadlines, join our mailing list (mail.securecms.com/mailman/lis) and also follow us here on Twitter (twitter.com/CogCompNeuro) or Mastodon (mastodon.social/@CogCompNeuro@)

Please boost!!

OK, finally giving up on twitter. looking forward to the discussion here!
