Can we train spiking neural networks with local rules that change the spike transmission speed? 🧵 New preprint: arxiv.org/abs/2211.08397 From the master's thesis work of Jørgen Farner! Proud supervisor here 😊 @stenichele

We developed a new STDP-like delay learning rule for SNNs: pre-synaptic spikes arriving at a post-synaptic neuron within a certain time window before a post-synaptic spike have their transmission delays adjusted so that future spikes arrive closer to the average arrival time of that causal group.
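For intuition, here's a minimal Python sketch of what such an update could look like. All names and parameters here (update_delays, window, lr) are my own illustration, not the exact rule from the paper:

import numpy as np

# Illustrative sketch of a delay plasticity update, assuming times in ms.
def update_delays(delays, arrival_times, post_spike_time, window=10.0, lr=0.1):
    # Causal spikes: pre-synaptic arrivals within `window` ms before the post spike
    causal = (arrival_times > post_spike_time - window) & \
             (arrival_times <= post_spike_time)
    if not causal.any():
        return delays
    target = arrival_times[causal].mean()  # average arrival time of the causal group
    # Nudge each causal delay so future arrivals land closer to that average
    new_delays = delays.copy()
    new_delays[causal] += lr * (target - arrival_times[causal])
    return np.clip(new_delays, 0.0, None)  # delays stay non-negative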

Is this local delay plasticity rule alone sufficient to train a network to perform a simple computational task? To answer this, we found suitable encoding and decoding methods for our approach and applied the resulting framework to a classification task.
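The thread doesn't spell out the encoding, so purely for illustration: a common way to turn images into spike times is latency coding, where brighter pixels fire earlier. A minimal sketch, assuming pixel intensities in [0, 255] and a hypothetical encoding window t_max (this may differ from what the paper actually uses):

import numpy as np

# Latency coding sketch: higher intensity -> earlier spike.
def latency_encode(image, t_max=20.0):
    intensities = image.astype(float).ravel() / 255.0
    spike_times = t_max * (1.0 - intensities)  # intensity 1 -> t=0, intensity 0 -> t=t_max
    spike_times[intensities == 0] = np.inf     # fully dark pixels never spike
    return spike_times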

As a proof-of-concept, we show that our network can learn to classify handwritten digits. We used three classes from the MNIST dataset: classes 0 and 1 were used for training and testing, and class 2 was included in the test phase but unseen during training.
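The split itself is simple. A sketch assuming a standard torchvision MNIST loader (the loader choice is mine, not the paper's):

import numpy as np
from torchvision import datasets

# Digits 0 and 1 are used for training and testing;
# digit 2 is held out entirely until the test phase.
mnist = datasets.MNIST(root="data", train=True, download=True)
labels = mnist.targets.numpy()
trained_mask = np.isin(labels, [0, 1])  # seen during training
unseen_mask = labels == 2               # only presented at test time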

Our networks showed the best performance improvement on the two trained digit classes when we applied the stricter 90% output matching threshold. With an 80% threshold, many networks were unable to separate the two classes.

With the lower 80% threshold, many networks could identify a third, untrained digit class with accuracies upwards of 60%. Could we use this approach for generalized learning?
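To make the "output matching threshold" idea concrete, here's one plausible reading in code (my interpretation, not necessarily the paper's exact decoding): a sample is assigned to a class only if its binary output pattern agrees with that class's stored template on at least the threshold fraction of output neurons.

import numpy as np

# Hypothetical decoding sketch: `output` and each template are binary
# arrays over the output neurons; `templates` maps class label -> template.
def classify(output, templates, threshold=0.9):
    for label, template in templates.items():
        match_fraction = np.mean(output == template)
        if match_fraction >= threshold:
            return label
    return None  # no class matched above the threshold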

These are obviously *not* state-of-the-art classification accuracy levels. We're not trying to beat the 99%+ benchmark. But we know that coding with time opens up a whole world of possibilities beyond weight learning, and this is just the beginning:

Can we refine the framework with general rules for training times and matching thresholds to optimize performance? Does the generalizability hold when we introduce more input classes? Could this approach be applied in hardware with a suitable material? So much to explore!

It bears repeating that this work was led by Jørgen, my first master's student as main supervisor, and I couldn't be prouder. What he accomplished in a year is astonishing. Thanks to co-supervisors @stenichele (my excellent supervisor) and Ola for being such a great team!

I should also mention that this idea was inspired in part by conversations during my own master's thesis work with Paulo Aguiar at i3S, when I dove into the (slightly tangential) rabbit hole of axonal computation during my literature search. Thank you so much, Paulo :)

For full disclosure: this paper was rejected from a NeurIPS workshop. We got some good feedback (thank you, reviewers & organizers!), and many of the comments were about details we left out due to the page limit. So stay tuned for a longer version! Feedback/ideas welcome :)

Twitter thread here: twitter.com/KrisHeiney/status/

Still working out a proper Twitter/Mastodon crossposting method (feedback on that also welcome!)
