our review article on continual learning in natural and artificial agents is now out in TINS! sciencedirect.com/science/arti.
with @SaxeLab and Timo Flesch

random thought...
it's nearly 4 years since Rich Sutton wrote his Bitter Lesson blogpost: incompleteideas.net/IncIdeas/B
He wrote it before the explosion of interest in large transformer models. His claim was that "maximally general methods are always better". Maximally general means two things: 1) avoid human priors about cognition; 2) avoid human training data. It's interesting to reflect that he was both very right and very wrong.
- very right, that simple algorithms massively scaled can give you (decent) systematicity, without the need for symbolic bells and whistles. That's not to say that there aren't still important ingredients of intelligence missing in large transformer models. But the level of composition you get in large generative models is much more impressive than most of us predicted. So he was broadly right about (1).
- very wrong, because the really impressive stuff relies on volumes of human feedback. As RLHF comes to the fore, it's become clear that self-play is only going to work for a very narrow set of problems. You need human data in spades.

For anyone who might consider doing a PhD in Granada, Spain, this is an excellent opportunity: websepex.com/2023/01/14/seis-o
Join a great team in one of the most beautiful cities in the world :)

re: ChatGPT as a co-author...there is nothing new under the sun. When Newell and Simon wrote up their work on the Logic Theorist in 1955 (which found a novel proof for one of the theorems in Russell's Principia Mathematica), they listed the AI as a co-author.
Also nothing new under the sun: the paper was rejected for being insufficiently novel.

RT @KiaNobre
Mark Stokes (@StokesNeuro), RIP
You enriched us with your fortitude and gentleness.
You changed our scientific views with your brilliant mind.
Thank you. Now brighten the stars.

RT @TheBrunoCortex
VISUAL NEURO JOB OPENING:
The University of Oxford's Dept. of Physiology, Anatomy & Genetics and Pembroke College are recruiting an Associate Professor of Neuroscience specializing in vision. πŸ‘€πŸ§ More details at my.corehr.com/pls/uoxrecruit/e

@UniofOxford @DPAGAthenaSwan @OxNeuro @PembrokeOxford

happy to share this new preprint - we asked how goals warp the representations of allocentric space in human BOLD signals
biorxiv.org/content/10.1101/20

with a great team including @hugospiers, @nicoschuck, and lead authors Paul Muhle-Karbe and Hannah Sheahan

I am grateful to Paul Middlebrooks (not here yet) for inviting me to talk about my book on Brain Inspired today! looking forward to the discussion

We are excited to announce that Cognitive Computational Neuroscience (CCN) 2023 will take place this year in Oxford from August 24 to 27, 2023. The conference will be held at the Examination Schools; more information can be found here:
www.venues.ox.ac.uk/our-venues/examination-schools/.

–
Confirmed speakers for this year's CCN include Stan Dehaene, Helen Barron, Cate Hartley, Jay McClelland and Tim Kietzmann (@TimKietzmann@neuromatch.social)

–

We also want to note that the submission period will be earlier this year than in previous years: abstract submissions will open at the end of January and close on March 31.

β€”

For the most up-to-date information about CCN 2023, including reminders about deadlines, join our mailing list (mail.securecms.com/mailman/lis) and also follow us here on Twitter (twitter.com/CogCompNeuro) or Mastodon (mastodon.social/@CogCompNeuro@)

Please boost!!

OK, finally giving up on twitter. looking forward to the discussion here!
