Please use this identifier to cite or link to this item: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/7944
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Ramaswamy, Venkatakrishnan | -
dc.contributor.author | V, VAISHNAVI | -
dc.date.accessioned | 2023-05-19T10:56:50Z | -
dc.date.available | 2023-05-19T10:56:50Z | -
dc.date.issued | 2023-05 | -
dc.identifier.citation | 49 | en_US
dc.identifier.uri | http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/7944 | -
dc.description.abstract | Reliable communication of neuronal information by a neuron of the central nervous system to its downstream neurons involves the transformation of input spike trains into specific output spike trains. The spike-train-to-spike-train transformation problem has been addressed by numerous studies in the past, but we focus our attention on the synaptic weight update rule proposed in Banerjee (2016), which aligns two spike trains using only the spike time disparities. We implement the synaptic weight update rule on a single neuron receiving multiple synaptic inputs and re-evaluate the results of Banerjee (2016). We identify the problems faced during implementation of the rule and suggest methods to address them. During implementation, we found that learning slows down because of silent synapses (synapses whose weights do not change much) or quiescent neurons, and because of the manual tuning of hyperparameters, namely the learning rate and the cap on the update vector. The first problem is difficult to solve, but we suggest a potential solution in the Discussion section. The problem caused by a fixed learning rate and update vector cap is solved by using gradient descent with momentum and other adaptive gradient-based optimisers: AdaGrad, RMSProp and Adam. The choice of optimiser is especially important when dealing with sparse-gradient tasks and large spiking neural networks, because adaptive optimisers take the characteristics of the data into account, assign a per-parameter learning rate, and thereby accelerate learning (a minimal sketch of such a per-parameter update is given after this record). Of gradient descent with momentum and the three other optimisers used, Adam performed remarkably well in converging the weights of the learning neuron towards the target weights, which is used as the measure of the effectiveness of the learning rule. | en_US
dc.language.iso | en | en_US
dc.subject | Computational neuroscience | en_US
dc.subject | Spiking Neural Networks | en_US
dc.title | Supervised spike time learning with an adaptive learning rate in spiking neural networks | en_US
dc.type | Thesis | en_US
dc.description.embargo | One Year | en_US
dc.type.degree | BS-MS | en_US
dc.contributor.department | Dept. of Biology | en_US
dc.contributor.registration | 20181194 | en_US
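
As a rough illustration of the per-parameter adaptive learning rates mentioned in the abstract, the sketch below applies a standard Adam update to a vector of synaptic weights and drives it towards a fixed target weight vector. This is a minimal sketch under stated assumptions, not the thesis's implementation: the names adam_update and w_target and the placeholder gradient g = w - w_target are illustrative only, and the thesis's actual update vector is computed from spike time disparities, which is not reproduced here.

    import numpy as np

    def adam_update(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam step on synaptic weights w given an update vector g.

        g stands in for the spike-time-disparity-based update vector of the
        learning rule (its computation is not shown); lr, beta1, beta2 and
        eps are the usual Adam hyperparameters.
        """
        m = beta1 * m + (1 - beta1) * g              # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g ** 2         # second-moment estimate
        m_hat = m / (1 - beta1 ** t)                 # bias correction
        v_hat = v / (1 - beta2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter effective step size
        return w, m, v

    # Toy usage: drive a learning neuron's weights towards fixed target weights.
    rng = np.random.default_rng(0)
    n_synapses = 50
    w = rng.normal(0.0, 0.1, n_synapses)             # learning neuron's weights
    w_target = rng.normal(0.0, 0.1, n_synapses)      # target weights (effectiveness measure)
    m = np.zeros(n_synapses)
    v = np.zeros(n_synapses)
    for t in range(1, 2001):
        g = w - w_target                             # placeholder "gradient"; the thesis
                                                     # uses spike time disparities instead
        w, m, v = adam_update(w, g, m, v, t)
    print("mean |w - w_target| =", np.abs(w - w_target).mean())

The division by sqrt(v_hat) is what gives each synapse its own effective step size, which is the per-parameter learning rate property the abstract highlights for sparse-gradient tasks.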
Appears in Collections:MS THESES

Files in This Item:
File | Description | Size | Format
20181194_Vaishnavi_V_MS_Thesis | MS Thesis | 2.29 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.