Delay learning section changes (#141)
mbalazs98 authored Jul 10, 2024
1 parent 4a601d6 commit cdde916
Showing 1 changed file with 4 additions and 4 deletions.
paper/sections/science.md (8 changes: 4 additions & 4 deletions)
@@ -46,11 +46,11 @@ Unlike most point neuron models, in which pairs are connected by a single weight

(overview-delays)=
## Learning delays
- As in our base model, many studies which incorporate axonal and/or dendritic delays include them as non-learnable parameters. Here we explore how these phase delays can be learned through two approaches.
+ As in our base model, many studies incorporate axonal and/or dendritic delays as non-learnable parameters. Here, we explore how these phase delays, as well as synaptic delays, can be learned through two approaches.

- The first method was to use dilated convolutions with learnable spacings (DCLS) [@hassani2023dilated;@hammouamri2024learning]. This method uses a 1D convolution through time to simulate delays between consecutive layers. The kernels include a single non-zero weight per-synapse, which corresponds to the desired delay. This method can learn both weights and delays.
+ The first method was to develop a differentiable delay layer (DDL). This method uses a combination of translation and interpolation, where the interpolation allows the delays to be differentiable even though time steps are discrete. This can be placed between any two layers in a spiking neural network, and is capable of solving the task without weight training. This work is described in more detail in [](#delay-section).
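
For illustration, a minimal PyTorch sketch of such an interpolation-based delay layer might look as follows. This is a sketch under our own assumptions (class name, tensor shapes, per-channel rather than per-synapse delays, zero initialisation), not the implementation described in [](#delay-section):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableDelayLayer(nn.Module):
    """Shift each channel of a spike train by a learnable, real-valued delay.

    Fractional delays are obtained by linearly interpolating between the two
    neighbouring integer shifts, so the delay parameters stay differentiable
    even though the simulation runs in discrete time steps.
    """

    def __init__(self, n_channels: int, max_delay: int):
        super().__init__()
        self.max_delay = max_delay
        # One learnable delay per channel, in units of time steps.
        self.delay = nn.Parameter(torch.zeros(n_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels), e.g. a spike raster
        batch, n_steps, _ = x.shape
        d = self.delay.clamp(0.0, float(self.max_delay))
        lo = d.floor().long()   # integer part of each delay
        frac = d - lo           # fractional part, in [0, 1)
        # Pad the start of the time axis so shifted indices stay in range.
        x_pad = F.pad(x, (0, 0, self.max_delay + 1, 0))
        t = torch.arange(n_steps, device=x.device) + self.max_delay + 1
        idx_lo = t.unsqueeze(1) - lo.unsqueeze(0)   # (time, channels)
        idx_hi = idx_lo - 1

        def shift(idx):
            # Gather x_pad[b, idx[t, c], c] for every batch element.
            return torch.gather(x_pad, 1, idx.unsqueeze(0).expand(batch, -1, -1))

        # Linear interpolation between the two integer shifts.
        return (1 - frac) * shift(idx_lo) + frac * shift(idx_hi)
```

Placed between two layers, a module like this delays each channel's spikes before they reach the next weight matrix; gradients with respect to `delay` flow through the interpolation coefficients, which is what makes training the delays with or without weight training possible.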

- The second method was to develop a differentiable delay layer (DDL). This method uses a combination of translation and interpolation, where the interpolation allows the delays to be differentiable even though time steps are discrete. This can be placed between any two layers in a spiking neural network, and also allows weights and delays to be trained separately. This work is described in more detail in [](#delay-section).
+ While we were developing our DDL-based method, a paper introducing synaptic delays using dilated convolutions with learnable spacings (DCLS) was published [@hassani2023dilated;@hammouamri2024learning], prompting us to explore this approach as well. This method also relies on interpolation and is very similar to the DDL method, serving as a generalization for synaptic delays. It uses a 1D convolution through time to simulate delays between consecutive layers. The kernels include a single non-zero weight per synapse, which corresponds to the desired delay. This method co-trains weights and delays.
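
As a rough sketch of the idea (not the DCLS package's actual API; the layer name, shapes, and the Gaussian relaxation of the single non-zero tap are our simplifications), a delayed fully connected layer along these lines could be written as:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DelayedDenseLayer(nn.Module):
    """Fully connected layer with a learnable weight and delay per synapse.

    Each delay is a position in a temporal kernel of length `max_delay + 1`.
    The single non-zero tap is relaxed to a narrow Gaussian over kernel
    positions so that the position (i.e. the delay) remains differentiable.
    """

    def __init__(self, n_in: int, n_out: int, max_delay: int, sigma: float = 0.5):
        super().__init__()
        self.max_delay = max_delay
        self.sigma = sigma
        self.weight = nn.Parameter(torch.randn(n_out, n_in) * n_in ** -0.5)
        # Learnable delay per synapse, in units of time steps.
        self.position = nn.Parameter(torch.rand(n_out, n_in) * max_delay)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_in, time)
        taps = torch.arange(self.max_delay + 1, device=x.device, dtype=x.dtype)
        pos = self.position.clamp(0.0, float(self.max_delay)).unsqueeze(-1)
        # Gaussian bump centred on each synapse's delay: (n_out, n_in, K).
        kernel = torch.exp(-0.5 * ((taps - pos) / self.sigma) ** 2)
        kernel = self.weight.unsqueeze(-1) * kernel / kernel.sum(-1, keepdim=True)
        # Causal padding, then a 1D convolution through time; flipping the
        # kernel makes tap k read the input delayed by k time steps.
        x = F.pad(x, (self.max_delay, 0))
        return F.conv1d(x, kernel.flip(-1))
```

Shrinking `sigma` over training concentrates each kernel back towards the single non-zero weight per synapse described above, so that weights and delays are effectively co-trained.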

We found that both methods performed well and eliminated the artificial phase delays introduced in the base model.

@@ -60,4 +60,4 @@ Finally, we developed a more detailed model in which we used over 170,000 units,

In short, input spectrograms representing sounds at azimuth angles from -90° to +90° were converted into spikes, then passed forward to populations representing the globular and spherical bushy cells (GBCs and SBCs), and subsequently the lateral and medial superior olivary nuclei (LSO and MSO), from which we read out sound source angle predictions. Note that, unlike the work with our base model, we used no learnable parameters in this model, and instead based parameters on neurophysiological data. For example, the MSO units had excitatory inputs from both the ipsi- and contralateral SBCs and dominant inhibition from contralateral GBCs.
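
To make the wiring explicit, here is a toy sketch of the MSO stage written with Brian2. Everything in it is a placeholder assumption (the choice of simulator, population sizes, neuron model, connection probabilities, and weights), intended only to illustrate the connectivity pattern just described, not the detailed model itself:

```python
from brian2 import NeuronGroup, Synapses, ms, mV

# Placeholder leaky integrate-and-fire populations; sizes, time constants and
# weights are illustrative, not the neurophysiologically derived values.
lif = 'dv/dt = -v/(10*ms) : volt'
sbc_ipsi   = NeuronGroup(200, lif, threshold='v > 10*mV', reset='v = 0*mV', method='exact')
sbc_contra = NeuronGroup(200, lif, threshold='v > 10*mV', reset='v = 0*mV', method='exact')
gbc_contra = NeuronGroup(200, lif, threshold='v > 10*mV', reset='v = 0*mV', method='exact')
mso        = NeuronGroup(100, lif, threshold='v > 10*mV', reset='v = 0*mV', method='exact')

# MSO units receive excitation from both ipsi- and contralateral SBCs ...
exc_ipsi = Synapses(sbc_ipsi, mso, on_pre='v_post += 1*mV')
exc_ipsi.connect(p=0.05)
exc_contra = Synapses(sbc_contra, mso, on_pre='v_post += 1*mV')
exc_contra.connect(p=0.05)
# ... and dominant inhibition from contralateral GBCs.
inh_contra = Synapses(gbc_contra, mso, on_pre='v_post -= 3*mV')
inh_contra.connect(p=0.05)
```

An analogous LSO stage, plus a readout mapping LSO/MSO activity to an azimuth estimate, would complete the pipeline sketched above.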

The model generated realistic tuning curves for lateral and medial superior olive (LSO and MSO) neurons. Moreover, removing inhibition shifted ITD sensitivity to the midline, as in [@Brand2002;@Pecka2008].
