
Supervised spike time learning with an adaptive learning rate in spiking neural networks


dc.contributor.advisor Ramaswamy, Venkatakrishnan
dc.contributor.author V, VAISHNAVI
dc.date.accessioned 2023-05-19T10:56:50Z
dc.date.available 2023-05-19T10:56:50Z
dc.date.issued 2023-05
dc.identifier.citation 49 en_US
dc.identifier.uri http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/7944
dc.description.abstract Reliable communication of neuronal information by neurons of the central nervous system to their downstream neurons involves the transformation of input spike trains into specific output spike trains. The spike-train-to-spike-train transformation problem has been addressed by numerous studies in the past, but we focus on the synaptic weight update rule proposed in Banerjee (2016), which aligns two spike trains using only spike time disparities. We implement this update rule on a single neuron receiving multiple synaptic inputs and re-evaluate the results of Banerjee (2016). We identify the problems faced during implementation of the rule and suggest methods to address them. Specifically, we found that learning slows down due to silent synapses (synapses whose weights change very little) or quiescent neurons, and due to the manual tuning of two hyperparameters: the learning rate and the cap on the update vector. The first problem is difficult to solve, but we suggest a potential solution in the Discussion section. The problem caused by a fixed learning rate and update-vector cap is addressed by using gradient descent with momentum and other adaptive gradient-based optimisers: AdaGrad, RMSProp and Adam. The choice of optimiser is particularly important for sparse-gradient tasks and large spiking neural networks, because adaptive optimisers take the characteristics of the data into account, assign a per-parameter learning rate and accelerate learning. Of gradient descent with momentum and the three adaptive optimisers used, Adam performed remarkably well in converging the weights of the learning neuron towards the target weights, which we use as a measure of the effectiveness of the learning rule. en_US
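For illustration only, the kind of adaptive update referred to in the abstract can be sketched as below. This is a minimal sketch of a standard Adam step applied to a vector of synaptic weights; the function `spike_time_disparity_gradient` is a hypothetical placeholder for the update vector computed from spike time disparities as in Banerjee (2016), and none of this reproduces the thesis code.

```python
import numpy as np

def adam_update(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam step: a per-parameter adaptive learning rate replaces a fixed one."""
    m = beta1 * m + (1 - beta1) * grad           # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment (uncentred variance) estimate
    m_hat = m / (1 - beta1 ** t)                 # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                 # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # adaptive per-parameter weight update
    return w, m, v

# Hypothetical usage (spike_time_disparity_gradient is assumed, not shown):
# w = np.zeros(n_synapses); m = np.zeros_like(w); v = np.zeros_like(w)
# for t in range(1, n_epochs + 1):
#     grad = spike_time_disparity_gradient(input_trains, output_train, target_train, w)
#     w, m, v = adam_update(w, grad, m, v, t)
```

Because the second-moment estimate shrinks the step for parameters with consistently large updates and enlarges it for rarely updated (sparse-gradient) ones, this kind of optimiser removes the need to hand-tune a single global learning rate and update-vector cap.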
dc.language.iso en en_US
dc.subject Computational neuroscience en_US
dc.subject Spiking Neural Networks en_US
dc.title Supervised spike time learning with an adaptive learning rate in spiking neural networks en_US
dc.type Thesis en_US
dc.description.embargo One Year en_US
dc.type.degree BS-MS en_US
dc.contributor.department Dept. of Biology en_US
dc.contributor.registration 20181194 en_US



This item appears in the following Collection(s)

  • MS THESES [1705]
    Thesis submitted to IISER Pune in partial fulfilment of the requirements for the BS-MS Dual Degree Programme/MSc. Programme/MS-Exit Programme
