almost final version before preprint
thesamovar committed Jul 16, 2024
1 parent d4ddd6b commit 3a33574
Showing 5 changed files with 47 additions and 59 deletions.
2 changes: 1 addition & 1 deletion paper/paper.md
@@ -113,7 +113,7 @@ downloads:


+++ {"part": "abstract"}
Neuroscientists are increasingly initiating large-scale collaborations which bring together tens to hundreds of researchers. However, while these projects represent a step-change in scale, they retain a traditional structure with centralised funding, participating laboratories and data sharing on publication. Inspired by an open-source project in pure mathematics, we set out to test the feasibility of an alternative structure by running a grassroots, massively collaborative project in computational neuroscience. To do so, we launched a public Git repository, with code for training spiking neural networks to solve a sound localisation task via surrogate gradient descent. We then invited anyone, anywhere to use this code as a springboard for exploring questions of interest to them, and encouraged participants to share their work both asynchronously through Git and synchronously at monthly online workshops. At a scientific level, our work investigated how a range of biologically-relevant parameters, from time delays to membrane time constants and levels of inhibition, could impact sound localisation in networks of spiking units. At a more macro-level, our project brought together 35 researchers from multiple countries, provided hands-on research experience to early career participants, and opportunities for supervision and teaching to later career participants. Looking ahead, our project provides a glimpse of what open, collaborative science could look like and provides a necessary, tentative step towards it.
Neuroscientists are increasingly initiating large-scale collaborations which bring together tens to hundreds of researchers. However, while these projects represent a step-change in scale, they retain a traditional structure with centralised funding, participating laboratories and data sharing on publication. Inspired by an open-source project in pure mathematics, we set out to test the feasibility of an alternative structure by running a grassroots, massively collaborative project in computational neuroscience. To do so, we launched a public Git repository, with code for training spiking neural networks to solve a sound localisation task via surrogate gradient descent. We then invited anyone, anywhere to use this code as a springboard for exploring questions of interest to them, and encouraged participants to share their work both asynchronously through Git and synchronously at monthly online workshops. At a scientific level, our work investigated how a range of biologically-relevant parameters, from time delays to membrane time constants and levels of inhibition, could impact sound localisation in networks of spiking units. At a more macro-level, our project brought together 31 researchers from multiple countries, provided hands-on research experience to early career participants, and opportunities for supervision and teaching to later career participants. Looking ahead, our project provides a glimpse of what open, collaborative science could look like and provides a necessary, tentative step towards it.
+++
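
As a point of reference for readers new to surrogate gradient descent, here is a minimal sketch of the core trick the abstract refers to — a spiking threshold whose backward pass uses a smooth surrogate — assuming PyTorch; the class name and steepness constant are illustrative, not taken from the project code.

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass."""

    scale = 10.0  # surrogate steepness; an assumed, illustrative value

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()  # spike wherever the membrane potential exceeds threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Replace the Heaviside's zero-almost-everywhere derivative with a smooth approximation
        return grad_output / (SurrGradSpike.scale * torch.abs(v) + 1.0) ** 2

spike_fn = SurrGradSpike.apply  # usable inside an otherwise standard PyTorch training loop
```

Because the forward pass is unchanged, the network still emits binary spikes; only the gradient is approximated, which is what makes end-to-end training of spiking networks possible.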

# Introduction
90 changes: 39 additions & 51 deletions paper/sections/contributor_table.md
@@ -1,5 +1,5 @@
(contributor-table)=
```{list-table} Contributors, ordered by GitHub commits.
```{list-table} Contributors, ordered by GitHub commits as of 2024-07-16.
:header-rows: 1
:label: contributors-table
@@ -9,27 +9,36 @@
* - [Tomas Fiers](https://tomasfiers.net/)
- [\@tfiers](https://github.com/tfiers)
- Built the website infrastructure.
* - [Marcus Ghosh](https://neural-reckoning.org/marcus_ghosh.html)
- [\@ghoshm](https://github.com/ghoshm)
- Managed the project, wrote the paper, conducted research ([](../research/4-Quick_Start.ipynb), [](../research/Dales_law.ipynb)), gave the Cosyne tutorial.
* - [Dan Goodman](http://neural-reckoning.org/)
- [\@thesamovar](https://github.com/thesamovar)
- Conceived the project, wrote the paper, wrote and recorded the Cosyne tutorial. Conducted research ([](../research/3-Starting-Notebook.ipynb), [](../research/time-constant-solutions.ipynb)).
* - [Marcus Ghosh](https://neural-reckoning.org/marcus_ghosh.html)
- [\@ghoshm](https://github.com/ghoshm)
- Managed the project, wrote the paper, conducted research ([](../research/4-Quick_Start.ipynb), [](../research/Dales_law.ipynb)), gave the Cosyne tutorial.
* - Francesco De Santis
- [\@francescodesantis](https://github.com/francescodesantis)
- Conducted research ([](../research/new_inh_model.ipynb)) and wrote the paper ([](#inhib-model)).
* - [Dilay Fidan Erçelik](https://dilayercelik.github.io/)
- [\@dilayercelik](https://github.com/dilayercelik)
- Conducted research ([](../research/4-Quick_Start.ipynb), [](../research/Quick_Start_250HzClassification_CleanVersion.ipynb)).
* - Pietro Monticone
- [\@pitmonticone](https://github.com/pitmonticone)
- Cleaned paper and notebooks
* - Karim Habashy
- [\@KarimHabashy](https://github.com/KarimHabashy)
- Conducted research ([](../research/Learning_delays.ipynb), [](../research/Learning_delays_major_edit2.ipynb), [](../research/Solving_problem_with_delay_learning.ipynb)), wrote the paper ([](#delay-section)), project management ([](../research/4-Quick_Start.ipynb))
* - Balázs Mészáros
- [\@mbalazs98](https://github.com/mbalazs98)
- Wrote the paper (DCLS based delay learning in the appendix). Conducted research ([](../research/Quick_Start_random.ipynb), [](../research/Quick_Start_Delay_DCLS.ipynb)).
* - Mingxuan Hong
- [\@mxhong](https://github.com/mxhong)
- Conducted research ([](../research/Altering_output_neurons.ipynb), [](../research/Dynamic_threshold.ipynb)).
* - [Dilay Fidan Erçelik](https://dilayercelik.github.io/)
- [\@dilayercelik](https://github.com/dilayercelik)
- Conducted research ([](../research/4-Quick_Start.ipynb), [](../research/Quick_Start_250HzClassification_CleanVersion.ipynb)).
* - [Rory Byrne](https://rory.bio/)
- [\@rorybyrne](https://github.com/rorybyrne)
- Organised the source code structure, conducted research ([](../research/Optimizing-Membrane-Time-Constant.ipynb)).
* - Sara Evers
- [\@saraevers](https://github.com/saraevers)
- Conducted research ([](../research/IE-neuron-distribution.ipynb)).
* - [Zach Friedenberger](https://zachfriedenberger.github.io/)
- [\@ZachFriedenberger](https://github.com/ZachFriedenberger)
- Conducted research ([](../research/Optimizing-Membrane-Time-Constant.ipynb)).
@@ -42,73 +51,52 @@
* - (Unknown)
- [\@a-dtk](https://github.com/a-dtk)
- Conducted research ([](../research/Noise_robustness.ipynb)).
* - Sara Evers
- [\@saraevers](https://github.com/saraevers)
- Conducted research ([](../research/IE-neuron-distribution.ipynb)).
* - Ido Aizenbud
- [\@ido4848](https://github.com/ido4848)
- Conducted research ([](../research/Alt-Filter-and-Fire_Neuron_Model_SNN.ipynb)).
* - Balázs Mészáros
- [\@mbalazs98](https://github.com/mbalazs98)
- Wrote the paper (DCLS based delay learning in the appendix). Conducted research ([](../research/Quick_Start_random.ipynb), [](../research/Quick_Start_Delay_DCLS.ipynb)).
* - Sebastian Schmitt
- [\@schmitts](https://github.com/schmitts)
- Conducted research (background on neuromorphic hardware in [](../research/Background.md)).
- Conducted research (background on neuromorphic hardware in [](../research/1-Background.md)).
* - [Rowan Cockett](http://row1.ca/)
- [\@rowanc1](https://github.com/rowanc1)
- MyST technical support
* - [Jakub Smékal](https://jakubsmekal.com/)
- [\@smejak](https://github.com/smejak)
- (TODO)
* - [Alberto Antonietti](https://www.deib.polimi.it/eng/people/details/669646)
- [\@alberto-antonietti](https://github.com/alberto-antonietti)
- Supervised Francesco De Santis, wrote the paper ([](#inhib-model)).
* - Lavínia Takarabe
- [\@laviniamitiko](https://github.com/laviniamitiko)
- (TODO)
* - Danish Shaikh
- [\@danishbizkit](https://github.com/danishbizkit)
- (TODO)
* - ???
* - Juan Luis Riquelme
- [\@luis-rr](https://github.com/luis-rr)
- (TODO)
* - Pietro Monticone
- [\@pitmonticone](https://github.com/pitmonticone)
- Cleaned paper and notebooks
- Conducted research ([](../research/Excitatory-only-localisation.ipynb))
* - [Adam Haber](http://adamhaber.github.io/)
- [\@adamhaber](https://github.com/adamhaber)
- (TODO)
- Conducted research ([](../research/Compute-hessians-jax-version.ipynb))
* - [Gabriel Béna](https://neural-reckoning.org/gabriel_bena.html)
- [\@GabrielBena](https://github.com/GabrielBena)
- Conducted research ([](../research/Analysing-Trained-Networks-Part2.ipynb), [](../research/Dales_law.ipynb)).
* - Divyansh Gupta
- [\@guptadivyansh](https://github.com/guptadivyansh)
- (TODO)
* - Gabryel Mason-Williams (UK undergrad)
- ???
- Conducted research ([](../research/Analysing-Trained-Networks-Part2.ipynb)).
* - Josh Bourne (UK MSc student)
- ???
- Conducted research ([](../research/Analysing-Trained-Networks-Part2.ipynb)).
* - Zekai Xu (UK MSc student)
- ???
- Conducted research ([](../research/Analysing-Trained-Networks-Part2.ipynb)).
* - Leonidas Richter (Germany, PhD)
- ???
- Conducted research ([](../research/Learning_delays.ipynb)).
* - Chen Li (UK MSc)
- ???
- Conducted research ([](../research/Optimizing-Membrane-Time-Constant.ipynb)).
* - Peter Crowe (Germany, Undergraduate)
* - Peter Crowe
- [\@pfcrowe](https://github.com/pfcrowe)
- Conducted research ([](../research/Optimizing-Membrane-Time-Constant.ipynb)).
* - Umar Abubacar
- [\@UmarAbubacar](https://github.com/UmarAbubacar)
- Conducted research ([](../research/TCA-analysis.ipynb)) and wrote the paper ([](#tca-section)).
* - Gabryel Mason-Williams
- None/unknown
- Conducted research ([](../research/Analysing-Trained-Networks-Part2.ipynb)).
* - Josh Bourne
- None/unknown
- Conducted research ([](../research/Analysing-Trained-Networks-Part2.ipynb)).
* - Zekai Xu
- None/unknown
- Conducted research ([](../research/Analysing-Trained-Networks-Part2.ipynb)).
* - Leonidas Richter
- None/unknown
- Conducted research ([](../research/Learning_delays.ipynb)).
* - Chen Li
- None/unknown
- Conducted research ([](../research/Optimizing-Membrane-Time-Constant.ipynb)).
* - Brendan Bicknell
- ???
- None/unknown
- Supervised Dilay Fidan Erçelik.
* - [Volker Bormuth](https://www.labojeanperrin.fr/?people40)
- ???
- None/unknown
- Developed teaching materials and used the project to teach two university courses. Supervised Marcus Ghosh & students at Sorbonne University.
```
2 changes: 1 addition & 1 deletion paper/sections/discussion.md
@@ -17,4 +17,4 @@ Ultimately, while the project explored many interesting directions, which will f

## Conclusions

This paper does not present a scientific breakthrough. However, it does demonstrate the feasibility of open research projects: which bring together large number of participants across countries and career stages to work together collaboratively on scientific projects. Moreover, several follow-up research projects are planned based on pilot data from our work, and we plan to launch a second COMOB project soon.
This paper does not present a scientific breakthrough. However, it does demonstrate the feasibility of open research projects which bring together large numbers of participants across countries and career stages to collaborate on scientific projects. Moreover, several follow-up research projects are planned based on pilot data from our work, and we plan to launch a second COMOB project soon.
8 changes: 4 additions & 4 deletions paper/sections/meta_science.md
@@ -26,11 +26,11 @@ For those interested in pursuing a similar project our repository can easily be

(teaching-section)=
## Teaching with this framework
As our code uses spiking neurons to transform sensory inputs into behavioural outputs, it forms an excellent basis for teaching, as concepts from across neuroscience can be introduced and then implemented in class. With this in mind we integrated our project into a physics MSc course on biophysics and neural circuits. Working individually or in pairs, students actively engaged by adjusting network parameters and modifying the provided code to test their own hypotheses. Later, brief progress report presentations stimulated dynamic discussions in class, as all students, while working on the same project and code, pursued different hypotheses. This setup naturally piqued interest in their peers’ presentations, enhanced their understanding of various project applications, and facilitated collaborative learning.
This project emerged from a tutorial, and the code remains well suited for teaching several concepts from across neuroscience. We integrated our project into a Physics MSc course on Biophysics and Neural Circuits. Working individually or in pairs, students actively engaged by adjusting network parameters and modifying the provided code to test their own hypotheses. Later, brief progress report presentations stimulated dynamic discussions in class, as all students, while working on the same project and code, pursued different hypotheses. We found that this setup naturally piqued interest in their peers’ presentations, enhanced their understanding of various project applications, and facilitated collaborative learning. It allowed for engagement from students at a range of skill levels and with diverse interests, and helped bridge the gap between teaching and research.

The project’s stochastic outcomes necessitated substantial statistical analysis, adding an experimental dimension that made the project outcome less deterministic and, thus, more engaging than standard step-wise exercises. However, the project does not demand complex programming nor deep mathematical understandings of neural networks, and so allows practical exploration of neural network applications appropriate for various student levels. This adaptability allowed students of varying skill levels to progress at their own pace. Moreover, the open-ended nature of the project allowed the use of generative AI tools, enabling students to overcome coding challenges and deepen their understanding of the provided code and underlying machine learning concepts, thereby enhancing their learning curve and engagement.
% The project’s stochastic outcomes necessitated substantial statistical analysis, adding an experimental dimension that made the project outcome less deterministic and, thus, more engaging than standard step-wise exercises. However, the project does not demand complex programming nor deep mathematical understandings of neural networks, and so allows practical exploration of neural network applications appropriate for various student levels. This adaptability allowed students of varying skill levels to progress at their own pace. Moreover, the open-ended nature of the project allowed the use of generative AI tools, enabling students to overcome coding challenges and deepen their understanding of the provided code and underlying machine learning concepts, thereby enhancing their learning curve and engagement.

Working on a real research project not only sustained interest and demonstrated real-world impact but also provided additional inspiration through the accessible contributions of all project participants. This educational initiative thus successfully bridged the gap between teaching and research, with student feedback highlighting its effectiveness in enhancing both theoretical and practical knowledge. The desire for more time to delve deeper into the projects indicated its strength in engaging students and sparking their interest.
% Working on a real research project not only sustained interest and demonstrated real-world impact but also provided additional inspiration through the accessible contributions of all project participants. This educational initiative thus successfully bridged the gap between teaching and research, with student feedback highlighting its effectiveness in enhancing both theoretical and practical knowledge. The desire for more time to delve deeper into the projects indicated its strength in engaging students and sparking their interest.

In sum, this framework's multidisciplinary nature makes it versatile in various teaching contexts, and suited to discussing both machine learning concepts and open challenges in neuroscience, such as how to decipher brain circuits with recording tools and experimental manipulations like optogenetics.
% In sum, this framework's multidisciplinary nature makes it versatile in various teaching contexts, and suited to discussing both machine learning concepts and open challenges in neuroscience, such as how to decipher brain circuits with recording tools and experimental manipulations like optogenetics.
% For those interested in teaching with this framework, we have provided slides and a highly annotated introductory Python notebook [here]().
4 changes: 2 additions & 2 deletions paper/sections/science.md
@@ -1,6 +1,6 @@
## Introduction

In the [Cosyne tutorial](https://neural-reckoning.github.io/cosyne-tutorial-2022/) {cite:p}`10.5281/zenodo.7044500` on spiking neural networks (SNNs) that launched this project, we used a sound localisation task. Reasoning that sound localisation requires the precise temporal processing of spikes at which these networks would excel.
In the [Cosyne tutorial](https://neural-reckoning.github.io/cosyne-tutorial-2022/) {cite:p}`10.5281/zenodo.7044500` on spiking neural networks (SNNs) that launched this project, we used a sound localisation task. We reasoned that sound localisation requires the precise temporal processing of spikes at which these networks would excel.

Animals localise sounds by detecting location- or direction-specific cues in the signals that arrive at their ears. Some of the most important cues (although not the only ones) come from differences between the signals at the two ears, including both level and timing differences, termed the interaural level difference (ILD) and interaural time difference (ITD) respectively. In some cases, humans can detect arrival time differences as small as 20 $\mu$s.
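
To put that 20 $\mu$s figure in context, a rough sketch using Woodworth's spherical-head approximation (with textbook values for head radius and the speed of sound, not project parameters) shows that the largest ITD a human head produces is only around 650 $\mu$s:

```python
import math

# Woodworth's spherical-head approximation for the interaural time difference (ITD).
head_radius = 0.0875    # metres; typical textbook value
speed_of_sound = 343.0  # metres per second

def itd(azimuth_rad: float) -> float:
    """ITD in seconds for a source at the given azimuth (0 = straight ahead)."""
    return (head_radius / speed_of_sound) * (azimuth_rad + math.sin(azimuth_rad))

# Maximum ITD, with the source directly to one side:
print(f"{itd(math.pi / 2) * 1e6:.0f} microseconds")  # ~656 microseconds
```

A 20 $\mu$s threshold is therefore about 3% of the full physiological range.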

@@ -24,7 +24,7 @@ The input neurons are all-to-all connected to the layer of hidden neurons via a

Using this setup, we successfully trained SNNs on this task, and found that accuracy increased as we reduced the membrane time constant of the units in the hidden layer ([](../research/Optimizing-Membrane-Time-Constant.ipynb)). This initially suggested that coincidence detection played an important role. However, further analysis in [](../../research/time-constant-solutions.ipynb) (described in more detail in [](#basic-model)) showed that in fact, the network was not using a coincidence detection strategy, or indeed a spike timing strategy. Rather, it appears to be using an approach similar to the equalisation-cancellation theory {cite:p}`durlach_equalization_1963;culling_equalization_cancellation_2020` by subtracting various pairs of signals to find the point where they approximately cancel. Careful analysis of the trained model showed that it could be extremely well approximated by a 6-parameter model that is quite easy to describe, but does not obviously correspond to any known features of the auditory system.
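
As an illustration of the equalisation-cancellation idea the network appears to have rediscovered — a sketch of the concept, not the trained model's code — the snippet below shifts one ear's signal over candidate delays and looks for the delay at which subtraction cancels best; all signal parameters are invented for the example.

```python
import numpy as np

fs = 20_000                       # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)
true_itd = 300e-6                 # 300 microsecond delay between the ears
freq = 500.0                      # pure-tone stimulus frequency in Hz
left = np.sin(2 * np.pi * freq * t)
right = np.sin(2 * np.pi * freq * (t - true_itd))

candidate_delays = np.arange(-700e-6, 700e-6, 50e-6)
residuals = []
for d in candidate_delays:
    equalised = np.sin(2 * np.pi * freq * (t - d))       # "equalise": delay the left signal
    residuals.append(np.mean((equalised - right) ** 2))  # "cancel": how well they subtract away

best = candidate_delays[int(np.argmin(residuals))]
print(f"estimated ITD: {best * 1e6:.0f} microseconds")   # ~300
```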

As an alternative approach, we also used Tensor Component Analysis (TCA) {cite:p}`Williams2018` to explore the spiking activity of this model, and to compare it across multiple trained networks [](#tca-section).
As an alternative approach, we also used Tensor Component Analysis (TCA) {cite:p}`Williams2018` to explore the spiking activity of this model, and to compare it across multiple trained networks ([](#tca-section)).
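
For readers unfamiliar with TCA, the following minimal sketch (assuming the `tensorly` package and random stand-in data, not the project's recorded activity) shows the shape of the analysis: a rank-3 CP decomposition of a neurons × time × trials tensor into per-component neuron, time, and trial factors.

```python
import numpy as np
from tensorly.decomposition import parafac  # assumes the tensorly package is installed

# Hypothetical spike-count tensor: neurons x time bins x trials (random stand-in data).
rng = np.random.default_rng(0)
activity = rng.poisson(lam=2.0, size=(100, 50, 30)).astype(float)

# Rank-3 CP decomposition: one neuron, time, and trial factor per component.
weights, (neuron_factors, time_factors, trial_factors) = parafac(activity, rank=3)
print(neuron_factors.shape, time_factors.shape, trial_factors.shape)  # (100, 3) (50, 3) (30, 3)
```

Comparing the recovered factors across independently trained networks is what allows solutions to be matched at the population level rather than neuron by neuron.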

Building on this base model, we explored two main questions: how changing the neuron model alters the network's behaviour and how the phase delays (within each ear) can be learned.

