Hello world, it's time to do an #introduction! I'm a neuroscientist (postdoc) currently based in Göttingen, Germany. I have broad interests in #neuroscience, including sensory coding, neural circuits, and sensorimotor transformations.
So far, I have studied how the retina processes natural scenes using a combination of large-scale electrophysiology and computational modeling.
I'll soon be leaving the tidy entryway to the visual system to venture into understanding cortical processing during decision-making.
Huge congrats to @karyna-mi.bsky.social (she's not on here) for her paper published today in Science! She found that the hippocampus is really important for a key strategy we use to make decisions: hidden state inference! 🧪 🧠 1/7 https://www.science.org/doi/10.1126/science.adq5874 #neuroscience
We wrote a review on analysis methods for large-scale neural recordings https://www.science.org/stoken/author-tokens/ST-2239/full @marius10p #neuroscience 🧠
Anything we missed? Reply w/ your fav method!
What does #neuroscience tell us about AI, and vice versa?
In this new PNAS paper, we find that real neurons' activation functions (f-I curves) share features with frequently used AI activation functions.
We measure many neurons w/ 2p holographic stim.
Work led by Paul LaFosse... 1/3 🧠📈 🧪
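For intuition, here's a minimal sketch of the kind of similarity we mean (my own toy comparison, not the paper's analysis; all parameter values are made up): a standard AI activation like ReLU next to a rectified power law, a form often used to describe cortical f-I curves.

```python
import numpy as np

def relu(x):
    """Common AI activation: zero below threshold, linear above."""
    return np.maximum(0.0, x)

def fi_power_law(x, gain=1.0, threshold=0.2, power=1.5):
    """Rectified power law, a classic description of cortical f-I curves."""
    return gain * np.maximum(0.0, x - threshold) ** power

x = np.linspace(-1.0, 2.0, 7)
print(relu(x))          # both curves are zero below threshold...
print(fi_power_law(x))  # ...and rise monotonically above it
```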
Functional diversity in the output of the primate retina https://www.biorxiv.org/content/10.1101/2024.10.31.621339v1?med=mas
New programme by Dutch Research Council NWO: academic journals that wish to 'flip' from a subscription model to diamond open access (no fees for readers - no fees for authors) can apply for funding to transition: https://www.nwo.nl/en/news/funding-for-flipping-journals-to-diamond-open-access
#OpenAccessWeek
2/ More recent patch recordings from Large Bistratified GCs show a consistent ON-OFF response to the L+M stimulus, quite distinct from that of the small BS GCs. These results also suggest that the input from L and M cone circuitry to the LBGCs might be distinct (Kim, Packer & Dacey, 2024). How?
https://www.pnas.org/doi/10.1073/pnas.2405138121
📢 #Rastermap paper out now: easily explore and visualize your neural recordings 🐭🐒🐟 and ANNs 🤖 https://nature.com/articles/s41593-024-01783-4. Updates from the preprint include analyses of primate data and IBL task data.
Video tutorial: https://youtu.be/oQHq7yUWn2k #Neuroscience #MachineLearning
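If you want to try it, usage is roughly like this (a sketch based on my reading of the package docs; check the repo for the exact, current interface):

```python
import numpy as np
from rastermap import Rastermap  # pip install rastermap

spks = np.load("spks.npy")           # hypothetical neurons-x-time matrix
model = Rastermap(n_clusters=100).fit(spks)
isort = model.isort                  # neuron ordering found by the model
sorted_raster = spks[isort]          # raster sorted for visualization
```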
Yay! John Hopfield and Geoffrey Hinton won the physics Nobel for their work on neural networks... work that ultimately led to modern-day machine learning.
Some of you are wondering why they got a *physics* Nobel.
In the 1980s, Hopfield invented the 'Hopfield network' based on how atomic spins interact in a chunk of solid matter. Each atom's spin makes it into a tiny magnet. Depending on the material, these spins may tend to line up, or point opposite to their neighbors, or interact in even more complicated ways. No matter what they do, at very low temperatures they tend to minimize energy.
A Hopfield network is a simulation of such a system that's been cleverly set up so that the spins store data, like images. When the Hopfield network is fed a distorted or incomplete image, it keeps updating the spins so that the energy decreases... and works its way toward the saved image that's most like the imperfect one it was fed with.
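Here's a tiny toy version in code (my sketch, with made-up sizes, not the details of Hopfield's original formulation): patterns are stored with a Hebbian outer-product rule, and recall repeatedly flips spins so the energy never increases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store three random +1/-1 patterns via the Hebbian outer-product rule.
patterns = rng.choice([-1, 1], size=(3, 64))
W = sum(np.outer(p, p) for p in patterns) / 64.0
np.fill_diagonal(W, 0.0)  # no self-connections

def energy(s):
    """Spin-glass style energy; asynchronous updates never increase it."""
    return -0.5 * s @ W @ s

def recall(s, steps=2000):
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1  # flip spin i downhill in energy
    return s

# Corrupt a stored pattern (flip ~20% of spins), then let it settle.
noisy = patterns[0] * rng.choice([1, -1], size=64, p=[0.8, 0.2])
recovered = recall(noisy)
print(energy(noisy), energy(recovered))   # energy decreases
print(np.mean(recovered == patterns[0]))  # ~1.0: stored pattern retrieved
```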
Later, Geoffrey Hinton and others generalized Hopfield's ideas and developed 'Boltzmann machines'. These exploit a discovery by the famous physicist Boltzmann!
Boltzmann realized that the probability that a chunk of matter in equilibrium has energy E is proportional to
exp(-E/kT)
where T is its temperature and k is a number now called Boltzmann's constant. When the temperature is low, this makes it overwhelmingly probable that the stuff will have low energy. But at higher temperatures this is less true. By exploiting this formula and cleverly adjusting the temperature, we can make neural networks do very useful things.
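A quick toy illustration of that formula (my example, with made-up energy levels, not from the original post):

```python
import numpy as np

def boltzmann(E, kT):
    """Probabilities proportional to exp(-E/kT), normalized to sum to 1."""
    w = np.exp(-np.asarray(E) / kT)
    return w / w.sum()

E = [0.0, 1.0, 2.0]            # three energy levels, arbitrary units
print(boltzmann(E, kT=5.0))    # high temperature: close to uniform
print(boltzmann(E, kT=0.1))    # low temperature: lowest level dominates
```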
There's been a vast amount of work since then, refining these ideas. But it started with physics.
We’re excited to welcome @markdhumphries, author of “The Spike,” as our new columnist. Check out his first piece, in which he explores how averaging is a convenient fiction of neuroscience.
Researchers at SWC have revealed how sensory input is transformed into motor action across multiple brain regions in mice.
“We found that when mice don’t know what the visual stimulus means, they only represent the information in the visual system and a few midbrain regions. After they have learned the task, cells integrate the evidence all over the brain,” said Dr Michael Lohse.
Read the story: https://www.sainsburywellcome.org/web/research-news/brain-wide-decision-making-dynamics-discovered
Check out the full paper in Nature: https://www.nature.com/articles/s41586-024-07908-w
Live mouse (skin) #tissueclearing: Turns out, the yellow food dye tartrazine reduces scattering in the red/NIR range and makes live tissues (e.g. mouse skin) temporarily transparent
Achieving optical transparency in live animals with absorbing molecules
Ou et al., Science 2024
https://doi.org/10.1126/science.adm6869
Write-up: https://doi.org/10.1126/science.adr7935
A new mystery from the land of automatic publication.
https://pubpeer.com/publications/27EFCC358271321ED3680598F2BFCF#2
Finally had a chance to listen to this Mindscape podcast w/ Doris Tsao. SO GREAT! It works whether you’ve been studying vision forever or know zero about it (a rare sweet spot to hit).
“That's what's great about being a vision scientist: you just open your eyes and right there is the miracle you're trying to explain.” - Doris Tsao
Really exciting to hear them talk about the big C (onsciousness) - the reason so many of us opted in (and, for various reasons, later abandoned). To see Doris take it on explicitly: yes!!! 👏👏👏 I can’t wait to see what she & her team make of it.
After all, this book was written 30 years ago. It was all about leveraging vision to study what consciousness is all about. Things have happened but haven’t quite gelled. Maybe - just maybe - we’re ready …???
Too many people review previous work as if it were complete instead of a set of data points. For that matter, too many people think of their own scientific results as if they were the only experiment evah and a complete answer to the question.
In my lab, we never "show", nor do we ever say someone else "showed". We say we "found", and that others "found"... they found that in that experiment, under these conditions, on that day, something happened.
The "replication crisis" mostly (mostly) disappears once you move away from "paper as discovery" to "paper as one small piece of a large puzzle."
IMO, there is no such thing as a perfect study design. It is really rare that a single study design can actually answer a question. Instead, questions get answered by integrating and triangulating over many studies.
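One simple way to make "integrating over many studies" concrete (a toy sketch with made-up numbers, not a claim about how any particular question should be settled) is fixed-effect meta-analysis with inverse-variance weighting:

```python
import numpy as np

effects = np.array([0.42, 0.10, 0.31, 0.25])  # hypothetical per-study estimates
ses     = np.array([0.20, 0.15, 0.10, 0.25])  # their standard errors

w = 1.0 / ses**2                              # precision weights
pooled = np.sum(w * effects) / np.sum(w)      # each study is one data point
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled effect = {pooled:.2f} ± {pooled_se:.2f}")
```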
Come work at @SWC_Neuro with us (@neuroinformatics) and the scientific computing team to help us manage and share lots and lots of cool neuroscience data.
https://www.sainsburywellcome.org/web/content/current-vacancies
Something big happened this weekend. Everyone is talking about it, wondering what the implications will be.
That's right: I finished writing my new book "What is Entropy?" It's just 120 pages long. It has lots of short sections, mostly one page each, each based on a tweet. This is just a draft, and I'm still fixing lots of typos and other mistakes. So grab a copy - and if you catch errors, please let me know, either here or on my blog!
It is not a pop book: it's an introduction that assumes you know calculus. But it's about a lot of big, bold concepts, and I try to really get to the bottom of them:
• information
• Shannon entropy and Gibbs entropy
• the principle of maximum entropy
• the Boltzmann distribution
• temperature and coolness
• the relation between entropy, expected energy and temperature
• the equipartition theorem
• the partition function
• the relation between entropy, free energy and expected energy
• the entropy of a classical harmonic oscillator
• the entropy of a classical particle in a box
• the entropy of a classical ideal gas
I learned a lot by trying to explain in words what people often say only in equations.
https://johncarlosbaez.wordpress.com/2024/07/20/what-is-entropy/
Could we decide whether a simulated spiking neural network uses spike timing or not, given that we have full access to the state of the network and can simulate perturbations? Ideas for how we could decide? Would everyone agree? #neuroscience #SpikingNeuralNetworks #computationalneuroscience #compneuro
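One candidate test, as a rough sketch (a starting point I'd propose, not a settled method; `run_readout` is a hypothetical stand-in for whatever maps spike trains to the network's output): jitter spike times while preserving each neuron's spike count, then ask whether the output changes.

```python
import numpy as np

rng = np.random.default_rng(1)

def jitter(spike_times, sigma):
    """Gaussian-jitter every spike; per-neuron counts (rates) are unchanged."""
    return {n: np.sort(t + rng.normal(0.0, sigma, t.size))
            for n, t in spike_times.items()}

def timing_sensitivity(run_readout, spike_times, sigmas=(0.001, 0.005, 0.02)):
    """Readout change as a function of jitter scale (in seconds)."""
    baseline = run_readout(spike_times)
    return {s: float(np.linalg.norm(run_readout(jitter(spike_times, s)) - baseline))
            for s in sigmas}
```

If the output stays flat until the jitter approaches typical interspike intervals, rates suffice to explain it; sharp degradation at millisecond jitter would point to spike timing. Whether everyone would accept that criterion is exactly the question.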
Are you at #FENS2024 and wondering how social cues influence decision-making? Check out our posters PS06-28PM-094 and -095 this Friday afternoon! Learn how freely moving mice adapt their decisions when uncertain and how we are trying to uncover the neural correlates of social choices.
At #Fens2024 tomorrow (Wednesday 26/June) - Check out our poster comparing how representations change in #hippocampus, dorsolateral #striatum, and #prefrontal cortex in rats making decisions as environmental complexity changes.
U. Mugan et al. Poster number 211. In poster session 02: prefrontal decision-making
Some really cool results looking at interactions between decision-making systems!
How are events segmented and organized in time? And how might this impact our perception and memory of time?
Check out our work here on how neural trajectories in the lateral entorhinal cortex inherently drift over time, but abruptly shift at event boundaries to discretize a continuous experience.
https://www.biorxiv.org/content/10.1101/2024.06.17.599402v1
https://x.com/EdvardMoser/status/1802967173196808557
#Events #Time #Memory #Dynamics #Experience #Circuits #EntorhinalCortex #Hippocampus #AnimalBehavior #Preprint
Neuroscientist (postdoc). Interested in ethologically relevant neural coding, vision, and decision-making. Currently based at the Department of Basic Neurosciences (UNIGE).