
Yay! John Hopfield and Geoffrey Hinton won the physics Nobel for their work on neural networks... work that ultimately led to modern-day machine learning.

Some of you are wondering why they got a *physics* Nobel.

In the 1980s, Hopfield invented the 'Hopfield network' based on how atomic spins interact in a chunk of solid matter. Each atom's spin makes it into a tiny magnet. Depending on the material, these spins may tend to line up, or point opposite to their neighbors, or interact in even more complicated ways. No matter what they do, at very low temperatures they tend to minimize energy.

A Hopfield network is a simulation of such a system that's been cleverly set up so that the spins store data, like images. When the Hopfield network is fed a distorted or incomplete image, it keeps updating the spins so that the energy decreases... and works its way toward the saved image that's most like the imperfect one it was fed with.
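For anyone who wants to see the idea in code, here's a toy sketch (my own illustration, not anything from Hopfield's papers): spins are ±1, a pattern is stored with a Hebbian outer-product rule, and asynchronous updates only ever keep or lower the energy, so a corrupted pattern relaxes back toward the stored one.

```python
import numpy as np

def store(patterns):
    """Hebbian (outer-product) weights for an array of +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)              # no self-connections
    return W / len(patterns)

def energy(W, s):
    return -0.5 * s @ W @ s

def recall(W, s, steps=2000, seed=0):
    """Asynchronous spin updates; each flip keeps or lowers the energy."""
    s, rng = s.copy(), np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# store one 8-spin pattern, flip two spins, and let the network relax back
pattern = np.array([1, 1, -1, -1, 1, -1, 1, -1])
W = store(pattern[None, :])
noisy = pattern.copy(); noisy[:2] *= -1
restored = recall(W, noisy)
print(energy(W, noisy), energy(W, restored), np.array_equal(restored, pattern))
```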

Later, Geoffrey Hinton and others generalized Hopfield's ideas and developed 'Boltzmann machines'. These exploit a discovery by the famous physicist Boltzmann!

Boltzmann realized that the probability that a chunk of matter in equilibrium has energy E is proportional to

exp(-E/kT)

where E is its energy, T is its temperature and k is a number now called Boltzmann's constant. When the temperature is low, this makes it overwhelmingly probable that the stuff will have low energy. But at higher temperatures this is less true. By exploiting this formula and cleverly adjusting the temperature, we can make neural networks do very useful things.
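As a tiny numerical illustration (my own toy example, not Hinton's code): the Boltzmann rule for a system with two states of energies 0 and 1. At low temperature the low-energy state is overwhelmingly likely; at high temperature the two approach 50/50.

```python
import numpy as np

def boltzmann_prob(energies, T, k=1.0):
    """Probability of each state, proportional to exp(-E / kT)."""
    w = np.exp(-np.asarray(energies) / (k * T))
    return w / w.sum()

# two states with energies 0 and 1 (arbitrary units, k = 1)
for T in (0.1, 1.0, 10.0):
    print(T, boltzmann_prob([0.0, 1.0], T))
# T = 0.1 -> the low-energy state is essentially certain
# T = 10  -> the two states are nearly equally likely
```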

There's been a vast amount of work since then, refining these ideas. But it started with physics.

arstechnica.com/ai/2024/10/in-

Researchers at SWC have revealed how sensory input is transformed into motor action across multiple brain regions in mice.

“We found that when mice don’t know what the visual stimulus means, they only represent the information in the visual system in the brain and a few midbrain regions. After they have learned the task, cells integrate the evidence all over the brain,” said Dr Michael Lohse.

Read the story: sainsburywellcome.org/web/rese

Check out the full paper in Nature: nature.com/articles/s41586-024

Live mouse (skin) #tissueclearing: Turns out, the yellow food dye tartrazine reduces scattering in the red/NIR range and allows temporary transparency in live tissue (e.g. mouse skin).
Achieving optical transparency in live animals with absorbing molecules
Ou et al., Science 2024
doi.org/10.1126/science.adm686
Write-up: doi.org/10.1126/science.adr793

#invivo #imaging #microscopy

Finally had a chance to listen to this Mindscape podcast w/ Doris Tsao. SO GREAT! It works whether you’ve been studying vision forever or know zero about it (a rare place to hit).

preposterousuniverse.com/podca

“That's what's great about being a vision scientist: you just open your eyes and right there is the miracle you're trying to explain.” - Doris Tsao

Really exciting to hear them talk about the big C (onsciousness) - the reason so many of us opted in (and, for various reasons, later abandoned it). To see Doris take it on explicitly: yes!!! 👏👏👏 I can’t wait to see what she & her team make of it.

After all, this book was written 30 years ago. It was all about leveraging vision to study what consciousness is all about. Things have happened but haven’t quite gelled. Maybe - just maybe - we’re ready …???

en.m.wikipedia.org/wiki/The_As

@alexh

Too many people review previous work as if it were complete instead of a set of data points. For that matter, too many people think of their own scientific results as if they were the only experiment evah and complete answers to their questions.

In my lab, we never "show", nor do we ever say someone else "showed". We say we "found", and that others "found"... they found that in that experiment, under these conditions, on that day, something happened.

The "replication crisis" mostly (mostly) disappears once you move away from "paper as discovery" to "paper as one small piece of a large puzzle."

IMO, there is no such thing as a perfect study design. It is really rare that a study design can actually answer a question. Instead, those questions get answered by integrating and triangulating over many studies.

@elduvelle_neuro
Yes, ROI definition is manual, but I'm not sure "automated" solutions that aren't based on neural networks generalize well across diverse filming conditions. You can train a DLC model on sessions representative of all those conditions, and the network should generalize well as long as the pupil is visible.
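Roughly, the standard DeepLabCut workflow would look something like this (a sketch only: the project name, labeller, and video paths are placeholders, and the exact options are in the DLC docs):

```python
import deeplabcut

# placeholder paths: use videos drawn from sessions that span all the
# lighting / camera conditions you care about
videos = ["session_A/eye_cam.mp4", "session_B/eye_cam.mp4"]
config = deeplabcut.create_new_project("pupil-tracking", "me", videos, copy_videos=False)

deeplabcut.extract_frames(config)             # pick frames covering the conditions
deeplabcut.label_frames(config)               # label pupil-edge points in the GUI
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)
deeplabcut.analyze_videos(config, ["session_C/eye_cam.mp4"])  # held-out condition
```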

Come work at @SWC_Neuro with us (@neuroinformatics) and scientific computing to help us manage and share lots and lots of cool neuroscience data.

sainsburywellcome.org/web/cont

@elduvelle_neuro Why not DeepLabCut? Otherwise, I would start by defining an ROI that includes the pupil and a bit of the sclera, then fit a 2D Gaussian to the negative of the brightness signal within the ROI, and take the pupil diameter as the diameter of the Gaussian's 2-sigma contour.
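Something like this, as a rough sketch (a toy synthetic frame stands in for real video; in practice you'd run the fit frame by frame, and the Gaussian-vs-disk approximation introduces some bias):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    x, y = coords
    return offset + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def pupil_diameter(roi):
    """Fit an isotropic 2D Gaussian to the *negative* brightness in the ROI
    and report the diameter of the 2-sigma contour (i.e. 4 * sigma)."""
    signal = roi.max() - roi.astype(float)           # dark pupil -> bright blob
    ys, xs = np.mgrid[:roi.shape[0], :roi.shape[1]]
    p0 = (signal.max(), roi.shape[1] / 2, roi.shape[0] / 2, roi.shape[0] / 4, 0.0)
    popt, _ = curve_fit(gauss2d, (xs.ravel(), ys.ravel()), signal.ravel(), p0=p0)
    return 4 * abs(popt[3])                          # 2-sigma contour diameter

# synthetic test frame: bright sclera (200) with a dark circular pupil (30)
ys, xs = np.mgrid[:80, :80]
frame = np.where((xs - 40) ** 2 + (ys - 40) ** 2 < 15 ** 2, 30, 200).astype(float)
print(pupil_diameter(frame))   # in the ballpark of the 30-pixel pupil, with Gaussian-fit bias
```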

Something big happened this weekend. Everyone is talking about it, wondering what the implications will be.

That's right: I finished writing my new book "What is Entropy?" It's just 120 pages long. It has lots of short sections, mostly one page each, each based on a tweet. This is just a draft, and I'm still fixing lots of typos and other mistakes. So grab a copy - and if you catch errors, please let me know, either here or on my blog!

It is not a pop book: it's an introduction that assumes you know calculus. But it's about a lot of big, bold concepts, and I try to really get to the bottom of them:

• information
• Shannon entropy and Gibbs entropy
• the principle of maximum entropy
• the Boltzmann distribution
• temperature and coolness
• the relation between entropy, expected energy and temperature
• the equipartition theorem
• the partition function
• the relation between entropy, free energy and expected energy
• the entropy of a classical harmonic oscillator
• the entropy of a classical particle in a box
• the entropy of a classical ideal gas

I learned a lot by trying to explain in words what people often say only in equations.
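For instance, a few of the bullets above boil down to standard textbook relations (quoted from memory, not from the draft):

```latex
\[
S = -k \sum_i p_i \ln p_i, \qquad
p_i = \frac{e^{-E_i/kT}}{Z}, \qquad
Z = \sum_i e^{-E_i/kT}, \qquad
F = -kT \ln Z, \qquad
S = \frac{\langle E \rangle - F}{T}.
\]
```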

johncarlosbaez.wordpress.com/2

Could we decide whether a simulated spiking neural network uses spike timing or not, given that we have full access to the state of the network and can simulate perturbations? Ideas for how we could decide? Would everyone agree? #neuroscience #SpikingNeuralNetworks #computationalneuroscience #compneuro
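One possible test, just to make it concrete (a toy sketch with fake Poisson spikes and a made-up exponential-filter readout standing in for the rest of the network): jitter every spike time while preserving each neuron's spike count, and ask whether the downstream output only degrades once the jitter exceeds the relevant time constants.

```python
import numpy as np

rng = np.random.default_rng(1)

def jitter(spike_times, sigma):
    """Perturb spike times (seconds) while preserving each neuron's spike count."""
    return [np.sort(t + rng.normal(0, sigma, t.size)) for t in spike_times]

def readout(spike_times, tau=0.02, t_eval=1.0):
    """Stand-in downstream computation: exponentially filtered spike count,
    summed over neurons, evaluated at t_eval."""
    return sum(np.exp(-(t_eval - t[t <= t_eval]) / tau).sum() for t in spike_times)

# fake data: 50 neurons firing Poisson spikes over 1 s
spikes = [np.sort(rng.uniform(0, 1.0, rng.poisson(20))) for _ in range(50)]

base = readout(spikes)
for sigma in (0.001, 0.005, 0.02, 0.1):
    shifts = [abs(readout(jitter(spikes, sigma)) - base) for _ in range(20)]
    print(f"jitter {sigma * 1000:5.1f} ms -> mean output change {np.mean(shifts):.2f}")
# if the output only degrades for jitter much larger than the membrane/synaptic
# time constants, that's evidence the computation is rate-based rather than timing-based
```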

Are you at and wondering how social cues influence decision-making? Check out our posters PS06-28PM-094 and -095 this Friday afternoon! Learn how freely moving mice adapt their decisions when uncertain and how we are trying to uncover the neural correlates of social choices.

At #Fens2024 tomorrow (Wednesday 26/June) - Check out our poster comparing how representations change in #hippocampus, dorsolateral #striatum, and #prefrontal cortex in rats making decisions as environmental complexity changes.

U. Mugan et al. Poster number 211. In poster session 02: prefrontal decision-making

Some really cool results looking at interactions between decision-making systems!

How are events segmented and organized in time? And how might this impact our perception and memory of time?

Check out our work here on how neural trajectories in the lateral entorhinal cortex inherently drift over time, but abruptly shift at event boundaries to discretize a continuous experience.

biorxiv.org/content/10.1101/20

x.com/EdvardMoser/status/18029

#Events #Time #Memory #Dynamics #Experience #Circuits #EntorhinalCortex #Hippocampus #AnimalBehavior #Preprint

Meet our Neuroinformatics Unit! Driven by an open science ethos, the team of research software engineers work closely with SWC and @GatsbyUCL researchers to build tools that improve data organisation, refine analysis pipelines, and more: sainsburywellcome.org/web/blog

@BorisBarbour As I said, I don't have any objection to people using hypothesis testing as one small part of the process; I object to making it central to the process. As you say, it's about drawing a conclusion. My assertion would be that 99.9% of the time or more it would be inappropriate to draw a conclusion from any given experiment, but that this doesn't make those experiments bad, failed or not useful. Designing everything around an approach to analysis that is rarely the most appropriate is inefficient and distorting.

And secondarily, and less contentiously I guess, applying this framework to modelling work is just flat out wrong.

How do neural circuits generate flexible, cognitive behaviours? The Duan and @jerlich labs are looking for 2️⃣ excellent postdoctoral research fellows to join the team. Check out the vacancies and apply by 25 May: sainsburywellcome.org/web/cont
