#SpikeSorting : where would be the best place (or handle / hashtag) to ask questions about Phy? 🙏
And if you use it, can you let me know, maybe we can help each other?
(This Phy: https://phy.readthedocs.io/en/latest/)
We are hiring a new postdoc in cerebellar imaging! Come join the imaging group that is the most serious about, and obsessed with, the most irrelevant (according to some) part of the brain :-)
@NicoleCRust
I think some of the most exciting ideas in neuroscience start out as half-baked ideas that were 'not even wrong'.
But I think we should have a time-limit on them. 5 years I'd say. If a new idea still doesn't make any testable prediction after 5 years of being written about and discussed, maybe it wasn't such a good idea after all.
My prime example: "The cerebellum is a forward model for motor control and cognition". A really cool idea, influential, and one that has motivated tons of experiments. However, it has also become clear that without additional specifying assumptions the idea in itself does not make actual predictions - or rather, it can 'predict' anything.
So we need to stop pretending that "cerebellum is a forward model" is a theory - it's not. It's a hazy make-me-feel-good notion that may become testable with additional assumptions - and it is those assumptions that form the real theory.
Mind blown yesterday by George Sugihara, who explained that variables can causally influence one another but also be uncorrelated. It happens with a Lorenz attractor, where variables flip between correlated and anticorrelated (so no net correlation). Video here:
https://www.youtube.com/watch?v=6i57udsPKms
These types of mirage correlations that come and go also happen in the wild - such as in the factors that combine to form the red tide, and in gene expression networks.
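If you want to see this for yourself, here is a minimal numpy sketch (my own toy example, not Sugihara's causal-inference method): in the standard Lorenz system x drives z, yet their overall correlation is close to zero while short windows swing between strong positive and strong negative correlation.

```python
# Toy illustration: x drives z in the Lorenz system (dz/dt = x*y - beta*z),
# yet their overall correlation is ~0 because windowed correlations flip sign
# between the two lobes of the attractor.
import numpy as np

def simulate_lorenz(n_steps=200_000, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    xyz = np.empty((n_steps, 3))
    x, y, z = 1.0, 1.0, 1.0
    for i in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = x, y, z
    return xyz

xyz = simulate_lorenz()
x, z = xyz[:, 0], xyz[:, 2]

# Global correlation between x and z is close to zero...
print("overall corr(x, z):", np.corrcoef(x, z)[0, 1])

# ...but short windows show strong correlations of either sign.
window = 5_000
window_corrs = [np.corrcoef(x[i:i + window], z[i:i + window])[0, 1]
                for i in range(0, len(x) - window, window)]
print("windowed corr range:", min(window_corrs), "to", max(window_corrs))
```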
@elduvelle @chrisXrodgers I will - I am lucky to be organizing a workshop on my favorite topic, along with Jonathan Michaels, Kiah Hardcastle and Naama Kadmon Harpaz ;-). Our working title: Neural mechanisms of sequence learning and execution.
The recording of this VVTNS talk is now up on youtube: https://www.youtube.com/watch?v=bb6i6gLRdlg. I was lucky to be afforded 50 min to present our results about how brain architecture can support flexible, efficient & robust internally-driven motor control. A big thank you to the organizers of wwtns.online, David Hansel and Ran Darshan!
For those who may be interested, I'm lucky & honored to talk at the Van Vreeswijk Theoretical Neuroscience Seminar
tomorrow 11/01 at 11 AM EST. Instructions to join: https://wwtns.online. Thanks to the longer format, I'll be able to dive deeper into the topic. I'm looking forward to the discussion!
Hi all 😃
Our latest #Review on #SplitterCells is now published in @eLife !!
I will probably write a real thread on it when I get a chance... for now:
link: https://elifesciences.org/articles/82357
why: some neurons in the #Hippocampus (and other brain regions) of #Rats (and other mammals) have the fascinating ability to discriminate not just different presents, but different past or future states or trajectories in the same current situation. They could be related to #EpisodicMemory or #DecisionMaking 🤔 They are called 'trajectory-dependent cells' or Splitter Cells. 🔀 We tried to make sense of them!
what: Hippocampal Splitter cells do a lot of puzzling stuff. For example, there are a lot of them even in tasks that do not require the Hippocampus to be solved. They spread asymmetrically on a linear track leading to a choice point - 'past' splitters around the start and 'future' splitters towards the choice point. #TimeCells can be splitter cells (but they're usually #PlaceCells). Splitter cells evolve with experience, or maybe it is performance, nobody really knows. ⁉️ ... and a lot more weird stuff
conclusion: Two different computational models, the temporal context model and the latent state model, each explain a subset of the properties of splitter cells... so perhaps the Hippocampus implements both! But more experiments are needed to disentangle them 😄
now what: questions or comments? Please let us know!! ✍️
Delighted to write a preview w/@sainsbury_tom in Cell about the latest cool finding from Yang, Kanodia & Silvia Arber about the cortical control of the brainstem during forelimb behaviors https://authors.elsevier.com/a/1gNBfL7PXiqy8
What happens when you give Recurrent Neural Networks brain-inspired constraints of 3D spatial structure & neural communication during learning?
🧠🌐🤖
In our new project we show that typical structural & functional #neuroscience motifs like modularity, small-worldness, functional clusters, mixed selectivity & efficiency emerge in these spatially-embedded RNNs (a toy sketch of such a constraint follows the preprint link)
#Preprint
https://www.biorxiv.org/content/10.1101/2022.11.17.516914v1
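As a rough illustration of what a spatial constraint can mean here, this is a generic distance-weighted wiring cost on the recurrent weights (my own sketch, not the exact regularizer or training setup from the preprint):

```python
# Hypothetical sketch of a spatial constraint on an RNN's recurrent weights:
# units get coordinates in 3D, and a wiring cost proportional to Euclidean
# distance is added to the training loss.
import numpy as np

rng = np.random.default_rng(0)
n_units = 100

# Random 3D positions for the recurrent units.
positions = rng.uniform(0.0, 1.0, size=(n_units, 3))

# Pairwise Euclidean distance matrix between units.
diffs = positions[:, None, :] - positions[None, :, :]
distance = np.linalg.norm(diffs, axis=-1)

# Recurrent weight matrix (would normally be learned on a task).
W = rng.normal(0.0, 1.0 / np.sqrt(n_units), size=(n_units, n_units))

def wiring_cost(W, distance, strength=1e-3):
    """Distance-weighted L1 penalty: long connections cost more than short ones."""
    return strength * np.sum(np.abs(W) * distance)

# Added to the task loss during learning, this term pushes the network toward
# short, sparse connections - the kind of pressure under which motifs like
# modularity and small-worldness can emerge.
print("wiring cost:", wiring_cost(W, distance))
```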
🚨 Our story on an AI-inspired model of cerebro-cerebellar networks is now out in @NatureComms with a few (useful) updates after peer review:
https://doi.org/10.1038/s41467-022-35658-8
---
RT @somnirons
New preprint by @boven_ellen @JoePemberton9 with Paul Chadderton and Richard Apps @BristolNeuroscience! Inspired by DL algorithms @maxjaderberg @DeepMind we propose that the cerebellum provides the cerebrum with task-specific feedback pred…
https://twitter.com/somnirons/status/1493881849055227906
@elduvelle @tiago @networkscience @academicchatter It's a good point that the disruption index (DI) chosen in Park et al. is not perfect - though it does correlate with human-labeled novelty, see https://direct.mit.edu/qss/article/1/3/1242/96102/Are-disruption-index-indicators-convergently-valid . A positive DI in Park et al. requires citations of the focal paper but none or few of its refs. This is indeed harder to achieve with larger reference lists, in particular if people cite older works not necessarily because they are still the canon but possibly for discussion.

That said, I think there are measures beyond the DI that also point to an increased difficulty in producing, and getting recognition for, 'disruptive' work - both in the Park et al. paper (cf. their fig. 6 about the diversity of scientific knowledge used, or fig. 3 about the vocabulary of papers) and elsewhere. For instance, Chu and Evans (https://www.pnas.org/doi/10.1073/pnas.2021636118) focused on the Gini coefficient of citations, the duration of dominance of papers, or the probability that a paper gradually becomes highly cited. In the context of comparing the success of innovative papers among different types of researchers, Hofstra et al. used ML to quantify the presence of new conceptual linkages within papers (https://www.pnas.org/doi/10.1073/pnas.1915378117), and showed that the categories of people who recently started to join academia (under-represented minorities) produced more 'innovative' work that had a hard time getting cited.

I think this converging evidence from many different measures and analysis methods supports the underlying hypothesis that disruptive/novel contributions have a hard time being seen and valued, perhaps more today than in the past, for an ensemble of reasons - even if the effect size is probably much smaller than suggested by the DI in Park et al. ;-).
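To make the DI concrete, here is a small sketch of the commonly used disruption/CD-index formulation; the toy citation sets are made up, and this ignores the refinements Park et al. apply:

```python
# Disruption index sketch: DI = (n_i - n_j) / (n_i + n_j + n_k), where, among
# later papers, n_i cite the focal paper but none of its references, n_j cite
# both, and n_k cite only the references. Example citation sets are invented.
def disruption_index(citers_of_focal, citers_of_refs):
    """citers_of_focal / citers_of_refs: sets of papers citing the focal paper
    or at least one of its references, respectively."""
    n_i = len(citers_of_focal - citers_of_refs)   # cite focal only
    n_j = len(citers_of_focal & citers_of_refs)   # cite focal and its refs
    n_k = len(citers_of_refs - citers_of_focal)   # cite refs only
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# A paper whose citers ignore its references scores positive ("disruptive")...
print(disruption_index({"a", "b", "c"}, {"d"}))          # 0.75
# ...while one cited alongside its references scores negative ("consolidating").
print(disruption_index({"a", "b"}, {"a", "b", "c", "d"}))  # -0.5
```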
🧠🐒🐐🦘🦙🦌🐷🐻❄️🦫🦁🐑🐇🐈🦔🐕🦇🦭🦥🦓
We’ve been diving into the mesmerising anatomical diversity and evolution of cerebellar folding across 56 mammalian species with @r3rt0 Nicolas Traut @AleAliSousa @sofievalk
https://www.biorxiv.org/content/10.1101/2022.12.30.522292v1
Check it out in a short tooting thread 🔽
2022 saw a whirlwind of #neuroAI research. Brain Dall-E. Neurons in a dish playing pong. GPT predicts how brains process language. I read through a bunch of papers so you don't have to. Read my review of 2022. Featuring the work of @kordinglab , @tyrell_turing , @TimKietzmann and many others.
https://xcorr.net/2023/01/01/2022-in-review-neuroai-comes-of-age/
@elduvelle Thank you! Seeing you there really helped me decide to engage! Always looking forward to hearing from you on all possible communication channels ;-)
Is there yet a mastodon alternative for #tweeprint?
Anyway, here goes a #mastoprint 🪩
"Evaluating the statistical similarity of neural network activity and connectivity via eigenvector angles"
https://doi.org/10.1016/j.biosystems.2022.104813
#NeuralNetworks #Neuroscience #Statistics #CompNeuro
I'm very excited to finally see this published. Let me tell you about it:
🧵 1/5
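Before the thread proper, a minimal, hypothetical illustration of the core quantity (angles between corresponding eigenvectors of two matrices), not the paper's actual statistical test or code:

```python
# Compare two connectivity or activity-covariance matrices via the angles
# between their corresponding leading eigenvectors; small angles mean similar
# dominant structure. Helper names are my own.
import numpy as np

rng = np.random.default_rng(1)

def leading_eigvecs(M, k=3):
    """Eigenvectors of the symmetrized matrix, ordered by eigenvalue magnitude."""
    vals, vecs = np.linalg.eigh((M + M.T) / 2.0)
    order = np.argsort(np.abs(vals))[::-1]
    return vecs[:, order[:k]]

def eigvec_angles(A, B, k=3):
    """Angles (degrees) between corresponding eigenvectors of A and B, sign-invariant."""
    Va, Vb = leading_eigvecs(A, k), leading_eigvecs(B, k)
    cosines = np.abs(np.sum(Va * Vb, axis=0))  # |v_a . v_b| per column
    return np.degrees(np.arccos(np.clip(cosines, 0.0, 1.0)))

n = 50
A = rng.normal(size=(n, n))
B = A + 0.05 * rng.normal(size=(n, n))  # a slightly perturbed "network"
C = rng.normal(size=(n, n))             # an unrelated one

print("A vs perturbed A:", eigvec_angles(A, B))  # noticeably smaller angles
print("A vs unrelated C:", eigvec_angles(A, C))  # angles close to 90 degrees
```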
#Introduction
Hi all, I'm finally introducing myself in this new year of 2023! I am a postdoctoral researcher doing #neuroscience, often with a theoretical angle. My own work tends to focus on neural network dynamics, network architecture and (pre)motor control, but my interests are broad and I am trying to use Mastodon to broaden them even more! I also have a special interest in sociology, especially the sociology of #academia. I'm looking forward to learning from the other users, and to communicating about my own research!
No wonder I always feel like the end gets further away the more I walk towards it. 🤔
Hippocampal spatial representations exhibit a hyperbolic geometry that expands with experience https://www.nature.com/articles/s41593-022-01212-4
#Introduction
Hi all, I'm a spatial cognition postdoctoral researcher and will post about rats (like these), #PlaceCells (yes it's a new hashtag) and other cool things that the #Hippocampus does, as well as random stuff.
If I have bad habits from "that other site", please help me correct them, and as a Mastodon newbie, do send me your tips or tutorials!
PS: do rats need a content warning?
Academics in psychology, neuroscience, cognitive science, etc.: I've created a repository for our accounts to help people find and bulk-follow each other on mastodon:
https://kaitclark.github.io/mastodon-psychology/
Please let me know if you want to be added!
(based on the repository by https://social.tchncs.de/@perspektivbrocken in Sociology)
Theoretical neuroscientist trying to connect with other scientists, listen and learn.