These are obviously *not* state-of-the-art classification accuracy levels. We're not trying to beat the 99%+ benchmark. But we know that coding with time opens up a whole world of possibilities beyond weight learning, and this is just the beginning:

Can we refine the framework with general rules for training times and matching thresholds to optimize performance? Does the generalizability hold when we introduce more input classes? Could this approach be applied in hardware with a suitable material? So much to explore!

It bears repeating that this work was led by Jørgen, the first master's student I've supervised as main supervisor, and I couldn't be prouder. What he accomplished in a year is astonishing. Thanks to co-supervisors @stenichele (my own excellent supervisor) and Ola for being such a great team!

I should also mention that this idea was inspired in part by conversations during my own master's thesis work with Paulo Aguiar at i3S, when I went down the (slightly tangential) rabbit hole of axonal computation during my literature search. Muito obrigada (thank you so much), Paulo :)

Full disclosure: This paper was rejected from a NeurIPS workshop. We got some good feedback (thank you, reviewers & organizers!), and many of the comments were about details we left out due to the page limit. So stay tuned for a longer version! Feedback/ideas welcome :)

Twitter thread here: twitter.com/KrisHeiney/status/

Still working out a proper Twitter/Mastodon crossposting method (feedback on that also welcome!)


As a proof of concept, we show that our network can learn to classify handwritten digits. We used three classes from the MNIST dataset: classes 0 and 1 were used for training and testing, and class 2 was included in the test phase but unseen during training.

Our networks showed the best performance improvement on the two trained digit classes when we applied a stricter 90% output matching threshold. With an 80% threshold, many networks were unable to separate the two classes.

Interestingly, with the lower 80% threshold, many networks could identify a third, untrained digit class with accuracies upwards of 60%. Could we use this approach for generalized learning?
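To make the "output matching threshold" concrete, here's a minimal Python sketch of a thresholded pattern-matching readout. This is my illustration of the idea under assumptions, not the exact decoding method from the paper; all names, patterns, and values are made up.

```python
# Hedged sketch of a thresholded output-matching readout, NOT the paper's
# exact decoding method. Names, target patterns, and values are hypothetical.

def classify(output_pattern, class_targets, threshold=0.8):
    """Return the label whose target pattern best matches the output,
    or None if no match fraction reaches the threshold (rejection)."""
    best_label, best_match = None, 0.0
    for label, target in class_targets.items():
        # Fraction of output entries that agree with this class's target
        match = sum(o == t for o, t in zip(output_pattern, target)) / len(target)
        if match > best_match:
            best_label, best_match = label, match
    return best_label if best_match >= threshold else None

# Example: two classes, each with a 10-unit binary target output pattern
targets = {0: (1, 1, 1, 1, 1, 0, 0, 0, 0, 0),
           1: (0, 0, 0, 0, 0, 1, 1, 1, 1, 1)}
print(classify((1, 1, 1, 1, 0, 0, 0, 0, 0, 0), targets, threshold=0.8))  # -> 0
```

With the stricter 90% threshold, more near-miss outputs get rejected during training, which is one way a higher threshold could drive sharper separation of the trained classes.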


Can we train spiking neural networks with local rules that change the spike transmission speed? 🧵 New preprint: arxiv.org/abs/2211.08397, from the master's thesis work of Jørgen Farner! Proud supervisor here 😊 @stenichele

We developed a new STDP-like delay learning rule for SNNs: presynaptic spikes arriving at a postsynaptic neuron within a certain time window before a postsynaptic spike have their transmission delays adjusted so that future spikes arrive closer to the average arrival time.
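To make the idea concrete, here's a minimal Python sketch of a delay update in this spirit. It's an illustration, not our exact implementation; the window, learning rate, and all names are made up for the example.

```python
# Minimal sketch of the flavor of rule described above, assuming discrete
# spike events. Illustrative only: window, lr, and all names are made up.

def update_delays(delays, arrival_times, t_post, window=10.0, lr=0.1):
    """For presynaptic spikes that arrived within `window` before the
    postsynaptic spike at `t_post`, shift each synapse's delay so that
    future spikes arrive closer to the group's average arrival time."""
    # Synapses whose last spike arrived in the causal window before t_post
    eligible = [i for i, t in enumerate(arrival_times)
                if 0.0 <= t_post - t <= window]
    if not eligible:
        return delays

    # Target arrival time: the mean over the eligible spikes
    t_mean = sum(arrival_times[i] for i in eligible) / len(eligible)

    new_delays = list(delays)
    for i in eligible:
        # Early arrivals get a longer delay, late arrivals a shorter one,
        # pulling arrival times together over repeated updates.
        new_delays[i] = max(0.0, delays[i] + lr * (t_mean - arrival_times[i]))
    return new_delays

# Example: three synapses whose spikes arrived at t = 2, 5, and 8 ms,
# followed by a postsynaptic spike at t = 10 ms
print(update_delays([1.0, 1.0, 1.0], [2.0, 5.0, 8.0], t_post=10.0))
# -> [1.3, 1.0, 0.7]: the early spike is delayed more, the late one less
```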

Is this local delay plasticity rule alone sufficient to train a network to perform a simple computational task? To answer this, we found suitable encoding and decoding methods for our approach and applied the resulting framework to a classification task.

I'm a theoretical neuroscientist at U Mainz Medical Center and co-affiliated with U Bonn Medical Center. Primary focus on cortical circuits, their network activity, synaptic plasticity, and protein dynamics in dendrites. Broadly interested in how circuits learn (e.g., the neuro/AI interface) and want to understand how neural networks compute, both algorithmically and intracellularly. Occasional posts about societal and academic issues.

#introduction time! 🐘

Hi all! I'm Ryan, a postdoc in the Siegert lab at ISTA, Austria.

I'm interested in developing computational approaches to understand the interaction between #microglia and their local environment, be it through morphology, the transcriptome, or the activity of surrounding neurons.

I'm a physicist by training but love working at the interface with biology and applied math. Let's get in touch!

#introduction time! I'm a computational neuroscientist wrapping up my PhD in Comp Sci at OsloMet & NTNU. I'm interested in neural computation at the population level 🧠 and I study this through data analysis and data-driven modeling 📈

Projects include:
1. Relating neural correlations to representational drift 🐁
2. Studying information propagation in vitro using neuronal avalanches and functional connectivity 🏔
3. Developing local delay plasticity models for SNN training ⚡️
