
Jonny Lovelace, Jingrui Ma, ... and Vinny Augustine discovered the vagal pathway underlying fainting. Really exciting work, summarized here: nature.com/articles/d41586-023, full paper here: nature.com/articles/s41586-023 (I helped a little w/ neural analyses)

When the mouse faints, its eyes roll back and most neurons across the brain *shut off completely* (at the yellow line in the first figure, which shows one example #neuropixels recording). But neurons in the hypothalamic PVZ increased their firing during this period (first group in the second figure). These neurons were causally implicated: inhibiting them increased fainting duration, while exciting them increased arousal.
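For intuition, here's a minimal sketch (with made-up spike times, not the paper's data or analysis) of how such a brain-wide shutdown would show up in a binned population firing rate:

```python
import numpy as np

# Hypothetical spike times (in seconds) for N units from a Neuropixels-style recording.
rng = np.random.default_rng(0)
n_units, t_max = 100, 60.0
spikes = [np.sort(rng.uniform(0, t_max, rng.poisson(5 * t_max)))
          for _ in range(n_units)]

# Bin each unit's spikes and stack into a (units x bins) rate matrix.
bin_size = 0.1
edges = np.arange(0, t_max + bin_size, bin_size)
rates = np.stack([np.histogram(s, edges)[0] / bin_size for s in spikes])

# Mean population rate over time; a syncope-like event would appear as a
# deep trough here, while a PVZ-like subpopulation would rise at the same time.
pop_rate = rates.mean(axis=0)
print(pop_rate.shape, pop_rate.mean())
```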

#neuroscience #syncope #fainting #vagal #discovery

New preprint! "Tracking neurons across days with high-density probes", by Enny Van Beest, Celian Bimbard and team.

Chronic #Neuropixels probes can record from the same neurons for days, but tracking those neurons across recordings requires new approaches.

Enny and Celian developed UnitMatch, which operates after spike sorting and relies only on the neurons' average spike waveforms.

They then validated the results with functional responses – which were remarkably stable!

biorxiv.org/content/10.1101/20 (1/2)
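To make the idea concrete, here's a minimal sketch of cross-day matching by waveform similarity. This is illustrative only, not the actual UnitMatch algorithm; the function name, the cosine-similarity score, and the threshold are all my own assumptions:

```python
import numpy as np

def match_units(waveforms_day1, waveforms_day2, threshold=0.95):
    """Greedy cross-day matching by cosine similarity of mean waveforms.

    waveforms_day*: arrays of shape (n_units, n_samples), each row one
    unit's average spike waveform. Returns (i, j) index pairs of putative
    matches. Illustrative sketch only, not the UnitMatch algorithm.
    """
    a = waveforms_day1 / np.linalg.norm(waveforms_day1, axis=1, keepdims=True)
    b = waveforms_day2 / np.linalg.norm(waveforms_day2, axis=1, keepdims=True)
    sim = a @ b.T                       # (n1, n2) cosine similarities
    pairs = []
    while sim.size and sim.max() > threshold:
        i, j = np.unravel_index(sim.argmax(), sim.shape)
        pairs.append((int(i), int(j)))
        sim[i, :], sim[:, j] = -np.inf, -np.inf   # enforce one-to-one matching
    return pairs
```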

Cosyne abstract submissions are OPEN and will close Nov 19th!

This is the 21st Cosyne 😱 and the 20th anniversary 🤓 so make sure to submit and join us in Lisbon!!

🥳🧠 🥳🧠🥳🧠🥳🧠🥳
cosyne.org/abstracts-submissio

I'm happy to announce the start of a new free and open online course on neuroscience for people with a machine learning or similar background, co-developed by @marcusghosh. YouTube videos and Jupyter-based exercises will be released weekly. There is a Discord for discussions.

For more details about the structure of the course, and to watch the first video "Why neuroscience?" go straight to the course website:

neuro4ml.github.io

Currently available are videos for "week 0" and exercises for "week 1", but more coming soon.

Why did I create this course? Well, I think both neuroscience and ML can be enriched by knowing about each other, and my feeling is that a general-purpose intro to neuro or comp-neuro isn't the right way to inspire people in ML to be interested in neuro.

I hear a lot about neuroscience inspiring AI, but I think there's understandable scepticism about that from ML people. I don't want people to take neuro ideas and apply them directly to ML; I just think we get a richer picture of what both fields are doing if we think more widely.

In other words, we should be thinking that we are somehow studying the same problem in different ways. You see that in the early history of the field, and it's very inspiring. (Yes, this is pretty much just saying that cognitive science is cool, but my scope is a bit narrower.)

The focus then is not on how neuroscientists think the brain works, but on the mechanisms the brain uses. These are strange, inspiring, and often their contribution to intelligent behaviour is still deeply mysterious.

The first video of the main part, on the structure of neurons, finishes with recent research (from @ilennaj and @kordinglab among others) on what the function of dendritic structure might be. No answers, just ideas.

And that's going to be another key part of this course. Research-level problems are not hard to find in neuroscience, and the aim of this course is to empower students with the tools to start finding and working on them straight away.

Most of the exercises in the course won't have a single correct answer. They're starting points for further investigation. We'll be downloading and exploring open neuroscience datasets using methods from computational neuroscience and ML.
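To give a flavour of that, here's a minimal sketch of downloading a spike-time dataset and computing a peristimulus time histogram with numpy; the URL and column names are placeholders, not a real dataset from the course:

```python
import io
import urllib.request

import numpy as np

# Hypothetical open dataset: a CSV with columns unit_id, spike_time_s.
URL = "https://example.org/open-dataset/spike_times.csv"  # placeholder

with urllib.request.urlopen(URL) as f:
    data = np.genfromtxt(io.TextIOWrapper(f), delimiter=",", names=True)

# Peristimulus time histogram around a (hypothetical) stimulus at t = 0 s.
bin_size = 0.02
edges = np.arange(-0.5, 1.0, bin_size)
counts, _ = np.histogram(data["spike_time_s"], bins=edges)

# Average firing rate in Hz per unit, per time bin.
n_units = len(np.unique(data["unit_id"]))
print(counts / bin_size / n_units)
```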

The course is not supposed to be comprehensive. It's a short course and the aim is more to get inspired and start on a longer road. I'd expect everyone to get something different out of it, and I'm happy if for some people their take home is "neuroscience is not for me"!

In some ways, it's the course I would have liked to get me into neuroscience and for my incoming PhD students from non-neuro backgrounds to be able to take. It's personal, and full of the sort of stuff that inspires me to be interested in neuroscience.

Well, I hope that some of you might be interested to follow along over the next few weeks, and since this is the first time I'm giving this course, please do give feedback by email, Discord, or however you like. Also, please feel free to re-use the materials in any way that suits you.

#neuroscience #compneuro #machinelearning #ML

Our Perspective on reconstructing computational system dynamics from neural data is finally out in Nature Rev Neurosci!
nature.com/articles/s41583-023

We survey generative models that can be trained on time series data to mimic the behavior of the underlying neural substrate.
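As a toy illustration of that general idea (not any specific model from the Perspective), one can train a small RNN to predict population activity one step ahead, then let it run freely to generate dynamics:

```python
import torch
import torch.nn as nn

# Placeholder for recorded neural activity: (batch, time steps, neurons).
T, N, H = 500, 20, 64
x = torch.randn(1, T, N)

# Generative RNN trained for one-step-ahead prediction of the time series.
rnn = nn.RNN(N, H, batch_first=True)
readout = nn.Linear(H, N)
opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()),
                       lr=1e-3)

for step in range(200):
    h, _ = rnn(x[:, :-1])           # hidden states for time steps 0..T-2
    pred = readout(h)               # predicted activity at steps 1..T-1
    loss = ((pred - x[:, 1:]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```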

Leaving aside the specific data issues (improbable duplications) in this Südhof paper, the data have been shared, and it may be an eye-opener for some people to see how the sausage is made in an electrophysiology paper. Yet another reason to require data sharing as standard policy.

pubpeer.com/publications/DAF32

A primer on deep-brain optical recording of neural dynamics during behavior. authors.elsevier.com/c/1htVZ3B

Detailed considerations and tradeoffs regarding deep-brain fluorescence recording techniques: a comprehensive guide to all major steps involved, from project planning to data analysis.

On a different note, a cool paper: science.org/doi/10.1126/sciimm

The hygiene hypothesis has been a major idea floating around for a long time. A new study shows that mice exposed to a wilder microbiome still develop allergies just the same.

What does having a diverse #Microbiome 🦠 actually do? Maybe a reason to focus more on which microbes are present (even in low abundance) and less on diversity indices. #Immunology

A warm welcome to @biorxivpreprint who now have several accounts on biologists.social. You can follow them per subject category.

Find out more in their latest news post: connect.biorxiv.org/news/2023/

The reason p-values are still used is that the only coherent criticism of p-values is "that is not the question you should be asking", and most scientists simply disagree with that proposition.

How does experience affect representational drift in the visual system? We provide new insights from the brilliant Joel Bauer, Uwe Lewin and @_eherbert as they study V1 through extensive longitudinal recording & modeling, highlighting the impact of experience with oriented contours:

Tweeprint here:

x.com/Neuro_Joel/status/170620

Very happy to be part of a productive team effort of
Joel Bauer, Uwe Lewin, Julijana Gjorgjieva, Carl Schoonover, Andrew Fink, Tobias Bonhoeffer and Mark Hübener!

Sharing new preprint from our lab!
@biorxivpreprint

Led by Paul LaFosse, we show that neurons in the awake 🧠 can filter out inputs: attenuation-by-suppression.

Also: real neurons' activation functions share features w/ #ai systems (eg ChatGPT).

Comments welcome!

Thread. #neuroscience #ai #NeuroAI #gpt #chatgpt 1/15
Preprint: biorxiv.org/content/10.1101/20
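For intuition only: one classic way to model attenuation of inputs by a suppressive signal is divisive normalization. A sketch along those lines, not necessarily the mechanism described in the preprint:

```python
import numpy as np

def divisive_suppression(drive, suppression, k=1.0, n=2.0, sigma=1.0):
    """Toy divisive-normalization-style activation function.

    Output rises with the feedforward drive but is attenuated by a
    suppressive signal. Illustrative only; not necessarily the
    attenuation-by-suppression mechanism in the preprint.
    """
    return k * drive**n / (sigma**n + drive**n + suppression**n)

drive = np.linspace(0, 10, 101)
for s in (0.0, 2.0, 5.0):
    out = divisive_suppression(drive, s)
    print(f"suppression={s}: max response {out.max():.2f}")
```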

Wow! 124 brain researchers call out what journalists call the "leading theory of consciousness" (integrated information theory, IIT) as pseudoscience.

psyarxiv.com/zsr78/

💯: We need testable theories about the brain to move forward. Every theory starts as a proto-theory (and that's fine). But when theories are not even wrong (en.wikipedia.org/wiki/Not_even), we must acknowledge that.

Especially when the stakes are as high as they are here, with big ethical implications (eg for organoids and coma patients, as the authors describe).

Introducing Neuropixels Ultra, a new probe with >10x site density: an implantable voltage camera capturing complete planar images of neurons' electrical fields in vivo! ⬆️ spike sorting yield, ⬆️ detection of small fields, and ⬆️ cell type identification.🧵
biorxiv.org/content/10.1101/20
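To illustrate the "voltage camera" idea, here's a minimal sketch of turning per-channel mean waveforms into a planar image of a unit's extracellular field; the array geometry and data here are made up:

```python
import numpy as np

# Hypothetical dense array: 48 rows x 8 columns of sites, with one unit's
# mean spike waveform on every channel (channels x samples).
rows, cols, samples = 48, 8, 82
waveforms = np.random.randn(rows * cols, samples)  # placeholder data

# Peak-to-peak amplitude per channel, reshaped onto the site grid, gives
# a planar "image" of the unit's extracellular electrical field.
ptp = waveforms.max(axis=1) - waveforms.min(axis=1)
field_image = ptp.reshape(rows, cols)
print(field_image.shape, field_image.argmax())
```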

Excited to share new work from the lab.

Temporal regularities shape perceptual decisions and striatal dopamine signals.

biorxiv.org/content/10.1101/20

Huge effort from Matthias Fritsche and co-authors Antara Majumdar, Lauren Strickland, Samuel Liebana Garcia and Rafal Bogacz. And big thanks to our funders @wellcometrust @hfsp

I don't have domain expertise to directly comment on the potential #LK99 room temperature superconductor, except to say that the experts that I have talked to are currently quite skeptical. But I can draw one analogy with my experience with mathematical research.

A typical math research project consists of months of proposed attacks on a problem, resulting in all sorts of failures or partial successes, until enough experience and intuition is gained to locate the correct approach (or to realize that one needs to modify the problem, or work on a completely different project). However, when the time comes to write up the work, usually the failed or partial attempts are not mentioned at all, except perhaps as brief motivation for the final successful approach.

This has some sense to it: a reader is likely to be more interested in the approach that worked than the approaches that didn't quite work. But it can give the mistaken impression that good mathematics consists entirely of correct arguments, and that disclosing the failures one had to attempt before locating the correct approach is somehow shameful. Such failures are in fact enormously instructive, and I wish our culture were more open to sharing them.

With LK99, I have seen it reported that the initial announcements were released prematurely, while the research was still in the "partial success at best" stage. As such, the work fares poorly if judged by the usual standard of "successful, completed research", and criticism is due if one or more of the authors were presenting it as such. But as "research in progress, accidentally revealed to the public", I am inclined to be charitable, and wait for the science to play out.

Can you roll a ball with exactly enough energy to reach the top of a dome, and have it reach the top in a finite amount of time?

I'm going to idealize the hell out of this problem so we can easily study it using math. So: no friction, no air resistance... in fact, NONE of the sneaky stuff you're probably thinking about!

The problem is still tricky. For an ordinary dome the answer is *no*. If the ball has just enough energy to make it to the top, it rolls slower and slower as it gets near the top, in such a way that it never reaches the top.

But if the dome has a carefully chosen shape, the ball can reach the top in a finite time! This was pointed out by the philosopher John D. Norton, so it's called "Norton's dome".

For a full explanation go here:

sites.pitt.edu/~jdnorton/Goodi

Thanks to @SylviaFysica for pointing this out!

Norton was mainly interested in another freaky feature of his dome. Say you start with a ball at rest on top of the dome. Then there are many solutions of Newton's law

F = ma

In one, the ball remains at rest on top of the dome. But in others, it starts to roll down the dome in some arbitrary direction! Moreover, it can start rolling at any time.

If you change the shape of the dome ever so slightly, this probably won't work. It needs to be crafted with perfect accuracy. So this is basically just a mathematical curiosity.

Math folks will realize what's going on: not every first-order differential equation has a unique solution given its initial value. But Norton, being a philosopher of physics, manages to make this a lot more exciting than a typical textbook treatment of the Picard–Lindelöf theorem. 🙃
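For the curious, here's the standard calculation, following Norton's write-up (r is arc length from the apex along the surface, and the constants follow his conventions):

```latex
% Norton's dome: height drop below the apex as a function of arc length r.
\[
  h(r) = \frac{2}{3g}\, r^{3/2}
  \qquad\Longrightarrow\qquad
  \ddot{r} = \sqrt{r}.
\]
% With the ball at rest at the apex, r(0) = \dot r(0) = 0, one solution is
% r(t) = 0 for all t; but for every "start time" T \ge 0 there is another:
\[
  r(t) =
  \begin{cases}
    0, & t \le T,\\
    \dfrac{1}{144}\,(t - T)^{4}, & t \ge T.
  \end{cases}
\]
% Check: \ddot r = \tfrac{12}{144}(t-T)^2 = \tfrac{1}{12}(t-T)^2 = \sqrt{r}.
% Uniqueness fails because \sqrt{r} is not Lipschitz at r = 0, so the
% Picard--Lindel\"of theorem does not apply there.
```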

Here's the math:

en.wikipedia.org/wiki/Picard%E

Hi all! I'm a neuroscience postdoc studying information seeking and curiosity *in mice* in Richard Axel's lab at Columbia. I'll be on the academic job market this fall(!!!). Excited to be here.
