Some personal update: I will join the University of Waterloo as Assistant Professor and Vector Institute as Faculty Member in 2024! I am *very* excited to be back in Canada to help grow the Canadian AI ecosystem! Please apply if you are interested in a PhD at the intersection of NLP and ML!

Communication consumes 35 times more energy than computation in the human cortex

The brain is the hungriest organ in the body (using 20% of all energy consumed). But most of it is not used for "computing"; it's used to send messages around.

pnas.org/doi/full/10.1073/pnas

Thanks to all our fantastic contributors and mentors who supported this work at every stage!

Olivier Codol is the first author (not on mastodon), and thanks to Mehrdad Kashefi, @andpru and @paulgribble

Finally, MotorNet provides a framework that can easily be expanded to more complex control scenarios. The only limit is your imagination!

New, complex tasks can be implemented, trained, and visualized quickly, speeding up the research cycle and providing tools that can be used by other researchers in the community

To get started quickly, run 'pip install motornet', check out the many tutorials included in the repo, or even open a tutorial directly in a Colab notebook with a single click

oliviercodol.github.io/MotorNe

In the preprint, we lay out the structure of the toolbox, show a few examples of some classic motor control tasks, and replications of some of our favourite modeling work

MotorNet is an open-source Python toolbox built on TensorFlow that makes training neural networks to control realistic biomechanical models fast and accessible to non-experts, enabling teams to focus on concepts and ideas over implementation.

oliviercodol.github.io/MotorNe

When we set out to study how neural networks interact with biomechanical models, we found that separate platforms are needed for neural and biomechanical modeling, and that existing biomechanical models are not differentiable — making training slow or unreliable

Modeling motor control typically requires stitching together multiple neural and biomechanical modeling frameworks.

So, we created MotorNet — a toolbox to study neural architectures/learning, muscle dynamics, delays, noise, and tasks, all under one roof!

biorxiv.org/content/10.1101/20

MotorNet: a Python toolbox for controlling differentiable biomechanical effectors with artificial neural networks

Artificial neural networks (ANNs) are a powerful class of computational models for unravelling neural mechanisms of brain function. However, for neural control of movement, they currently must be integrated with software simulating biomechanical effectors, leading to limiting impracticalities: (1) researchers must rely on two different platforms and (2) biomechanical effectors are not generally differentiable, constraining researchers to reinforcement learning algorithms despite the existence and potential biological relevance of faster training methods. To address these limitations, we developed MotorNet, an open-source Python toolbox for creating arbitrarily complex, differentiable, and biomechanically realistic effectors that can be trained on user-defined motor tasks using ANNs. MotorNet is designed to meet several goals: ease of installation, ease of use, a high-level user-friendly API, and a modular architecture to allow for flexibility in model building. MotorNet requires no dependencies outside Python, making it easy to get started with. For instance, it allows training ANNs on typically used motor control models such as a two joint, six muscle, planar arm within minutes on a typical desktop computer. MotorNet is built on TensorFlow and therefore can implement any network architecture that is possible using the TensorFlow framework. Consequently, it will immediately benefit from advances in artificial intelligence through TensorFlow updates. Finally, it is open source, enabling users to create and share their own improvements, such as new effector and network architectures or custom task designs. MotorNet's focus on higher order model and task design will alleviate overhead cost to initiate computational projects for new researchers by providing a standalone, ready-to-go framework, and speed up efforts of established computational teams by enabling a focus on concepts and ideas over implementation. 
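The abstract's second point, that differentiable effectors allow gradient-based training instead of reinforcement learning, can be sketched with a toy example: a 1-D point mass whose closed-loop rollout is a smooth function of the policy parameters, so plain gradient descent tunes the controller. The PD-style policy and the finite-difference gradient (standing in for the automatic differentiation TensorFlow provides) are illustrative assumptions, not MotorNet's API:

```python
import numpy as np

def rollout(k, target=1.0, T=40, dt=0.05, m=1.0):
    """Roll out a 1-D point mass under a PD-style policy u = k0*(target - x) - k1*v,
    returning a smooth scalar loss: end near the target, at rest."""
    x, v = 0.0, 0.0
    for _ in range(T):
        u = k[0] * (target - x) - k[1] * v  # policy output (force)
        v += dt * u / m                     # Euler-integrate the dynamics
        x += dt * v
    return (x - target) ** 2 + v ** 2

def grad(f, k, eps=1e-5):
    """Central finite differences; an autodiff framework gives this gradient exactly."""
    g = np.zeros_like(k)
    for i in range(len(k)):
        kp, km = k.copy(), k.copy()
        kp[i] += eps
        km[i] -= eps
        g[i] = (f(kp) - f(km)) / (2 * eps)
    return g

k = np.array([0.1, 0.1])          # initial policy parameters
for _ in range(500):
    k -= 0.05 * grad(rollout, k)  # plain gradient descent through the rollout

print(rollout(k))                 # far below the untrained loss
```

Because the whole simulation is differentiable, the loss gradient flows through the effector's dynamics; with a non-differentiable biomechanics engine this shortcut is unavailable, which is the impracticality the toolbox removes.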
Hello! I just migrated my account to the neuromatch server, time to reintroduce! #introduction

I'm Crystal, a neuroscientist interested in visual development. I work for the NIH BRAIN Initiative. I enjoy exploring science and art through quilting, crafting, and 3D printing. Once I 3D printed my own brain and its white matter.

I can also be found in the woods with my two dogs, foraging for mushrooms.

I've always found the poor overall quality of research produced by honest actors to be a bigger problem than outright academic fraud. Somehow the latter never seems interesting or surprising to me, whereas the former points to serious systemic problems in how scientists are trained. How do we reinstitute rigorous methodological training, genuine curiosity, deep theoretical thinking, programmatic and systematic effort, and careful execution in scientific practice? That seems to be the harder problem to solve.

More measurement thoughts: control over whether and how "ability data" gets used is some real power. For example, there are big gender effects in how hiring managers look at grades. Grades aren't just taken as equal across all people; they're tangled up in stereotypes. I'll never forget the conversation I had with an engineering manager who said he wanted a "B-plus guy" over an "A-plus girl" because "it's not sexism, I just want people who deal with the real world." Classic bias WITH measurements.

Consciousness: Matter or EMF?
frontiersin.org/articles/10.33

After all, you can read information from the brain's electric fields:
doi.org/10.1016/j.neuroimage.2

And ephaptic coupling can drive spiking:
doi.org/10.1101/2023.01.17.524

#neuroscience #consciousness

Consciousness: Matter or EMF?

Conventional theories of consciousness (ToCs) that assume that the substrate of consciousness is the brain's neuronal matter fail to account for fundamental features of consciousness, such as the binding problem. Field ToCs propose that the substrate of consciousness is best accounted for by some kind of field in the brain. Electromagnetic (EM) ToCs propose that the conscious field is the brain's well-known EM field. EM-ToCs were first proposed only around 20 years ago, primarily to account for the experimental discovery that synchronous neuronal firing was the strongest neural correlate of consciousness (NCC). Although EM-ToCs are gaining increasing support, they remain controversial, are often ignored by neurobiologists and philosophers, and are passed over in most published reviews of consciousness. In this review I examine EM-ToCs against established criteria for distinguishing between ToCs and demonstrate that they outperform all conventional ToCs and provide novel insights into the nature of consciousness as well as a feasible route toward building artificial consciousnesses.

Hebbian deep learning!

"an algorithm that trains deep neural networks, without any feedback, target, or error signals. As a result, it achieves efficiency by avoiding weight transport, non-local plasticity, time-locking of layer updates, iterative equilibria, and (self-) supervisory or other feedback signals (...) Its increased efficiency and biological compatibility do not trade off accuracy compared to state-of-the-art bio-plausible learning, but rather improve it."

arxiv.org/abs/2209.11883

Hebbian Deep Learning Without Feedback

Recent approximations to backpropagation (BP) have mitigated many of BP's computational inefficiencies and incompatibilities with biology, but important limitations still remain. Moreover, the approximations significantly decrease accuracy in benchmarks, suggesting that an entirely different approach may be more fruitful. Here, grounded on recent theory for Hebbian learning in soft winner-take-all networks, we present multilayer SoftHebb, i.e. an algorithm that trains deep neural networks, without any feedback, target, or error signals. As a result, it achieves efficiency by avoiding weight transport, non-local plasticity, time-locking of layer updates, iterative equilibria, and (self-) supervisory or other feedback signals -- which were necessary in other approaches. Its increased efficiency and biological compatibility do not trade off accuracy compared to state-of-the-art bio-plausible learning, but rather improve it. With up to five hidden layers and an added linear classifier, accuracies on MNIST, CIFAR-10, STL-10, and ImageNet, respectively reach 99.4%, 80.3%, 76.2%, and 27.3%. In conclusion, SoftHebb shows with a radically different approach from BP that Deep Learning over few layers may be plausible in the brain and increases the accuracy of bio-plausible machine learning.
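The quoted idea, training with purely local and feedback-free plasticity, can be illustrated with a minimal NumPy sketch of a single soft winner-take-all Hebbian layer. The softmax competition, the Oja-style decay term, and the toy two-cluster data are illustrative assumptions, not the paper's exact SoftHebb rule or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input stream: points scattered around two cluster centres.
centres = np.array([[1.0, 0.0], [0.0, 1.0]])
X = centres[rng.integers(0, 2, size=200)] + 0.1 * rng.standard_normal((200, 2))

W = 0.1 * rng.standard_normal((2, 2))  # 2 neurons x 2 inputs

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

eta = 0.05
for _ in range(10):                 # a few passes over the stream
    for x in X:
        u = W @ x                   # pre-activations
        y = softmax(u)              # soft winner-take-all competition
        # Local Hebbian update with an Oja-style decay term: each neuron
        # moves toward inputs it wins, using no error or feedback signal.
        W += eta * y[:, None] * (x[None, :] - u[:, None] * W)
```

After a few passes the weight rows tend to align with the input clusters: unsupervised feature learning driven by local updates alone, which is the property that lets such layers be stacked without backpropagated error signals.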

Super excited to share our new work showing that recurrent feedback from hippocampal replays to PFC can implement a form of planning that matches human behavior in a sequential decision making task!

biorxiv.org/content/10.1101/20 with Guillaume Hennequin and Marcelo Mattar (sadly not on Mastodon yet!)
