Please use this identifier to cite or link to this item: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/6040
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: SREEJITH, G. J. [en_US]
dc.contributor.author: SINGH, DAYAL [en_US]
dc.date.accessioned: 2021-07-08T04:46:41Z
dc.date.available: 2021-07-08T04:46:41Z
dc.date.issued: 2021-07
dc.identifier.citation: 74 [en_US]
dc.identifier.uri: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/6040
dc.description: TL;DR: ReLU networks initialized with asymmetric anti-correlated weights learn faster. [en_US]
dc.description.abstract: Despite their successful application across science and technology, deep neural networks remain poorly understood. The ability of overparameterized neural networks to express complex functions (expressivity) is among the major open theoretical questions. Because training relies on local (gradient-based) algorithms, the expressivity of a deep neural network at initialization is especially important; it can be analyzed through signal propagation in infinitely wide networks (the mean-field limit) as a model. This mean-field analysis suggests that deep neural networks have an ordered and a chaotic phase, and that in the chaotic phase their expressivity grows exponentially with depth. In deep ReLU (Rectified Linear Unit) networks with uncorrelated weights, however, no chaotic phase exists and signals become highly correlated, suggesting that deep ReLU networks have low expressive power. Using the mean-field theory of signal propagation, we analyze the evolution of correlations between signals propagating through a ReLU network with correlated weights, and we show that ReLU networks with anti-correlated weights avoid this low-expressivity outcome: they possess a chaotic phase in which the correlations saturate below unity. Consistent with this analysis, we find that networks initialized with anti-correlated weights train faster by exploiting the increased expressivity of the chaotic phase. Combining this with a previously proposed asymmetric initialization that reduces the dead-node probability (the probability that propagated signals land in the low-sensitivity domain of the ReLU activation function), we propose an initialization scheme that trains and learns faster than other initialization schemes on a variety of tasks. [en_US] (An illustrative sketch of these ideas follows the metadata record below.)
dc.description.sponsorship: INSPIRE-SHE program of the Department of Science & Technology, India [en_US]
dc.language.iso: en [en_US]
dc.subject: Signal propagation [en_US]
dc.subject: Deep Neural Networks [en_US]
dc.subject: Mean-field theory [en_US]
dc.title: Signal propagation and Initialization in Deep Neural Networks [en_US]
dc.type: Thesis [en_US]
dc.type.degree: BS-MS [en_US]
dc.contributor.department: Dept. of Physics [en_US]
dc.contributor.registration: 20161127 [en_US]
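
The abstract above rests on two technical ingredients: a mean-field recursion that tracks how the correlation between two propagating signals evolves with depth, and an initialization that combines anti-correlated weights with an asymmetric component. For orientation, the standard recursion for i.i.d. weights from the signal-propagation literature is reproduced below in LaTeX; the thesis generalizes this setting to correlated weights, and the notation here is the literature's, not necessarily the thesis's own:

    q^{l} = \sigma_w^2 \int \mathcal{D}z \, \phi\!\left(\sqrt{q^{l-1}}\, z\right)^{2} + \sigma_b^2
    c^{l} = \frac{1}{q^{l}} \left[ \sigma_w^2 \int \mathcal{D}z_1 \, \mathcal{D}z_2 \, \phi(u_1)\, \phi(u_2) + \sigma_b^2 \right]
    u_1 = \sqrt{q^{l-1}}\, z_1, \qquad u_2 = \sqrt{q^{l-1}} \left( c^{l-1} z_1 + \sqrt{1 - (c^{l-1})^{2}}\, z_2 \right)

Here \phi is the activation function, \mathcal{D}z denotes the standard Gaussian measure, q^{l} is the signal variance at layer l, and c^{l} is the correlation between the two signals. For ReLU with uncorrelated weights, c = 1 is a stable fixed point of this map, which is the high-correlation, low-expressivity outcome the abstract describes; the thesis's claim is that anti-correlated weights move this fixed point below unity.

Below is a minimal sketch of what such an initialization could look like, assuming "anti-correlated" means a negative pairwise correlation rho among the fan-in weights of each unit, and using an absolute-value tweak on one randomly chosen weight per unit as a stand-in for the asymmetric component. The function name, the default rho, and the tweak are illustrative assumptions, not the thesis's prescription:

    import numpy as np

    def anticorrelated_asymmetric_init(n_out, n_in, sigma_w=np.sqrt(2.0),
                                       rho=-0.2, rng=None):
        """Illustrative init: equicorrelated Gaussian fan-in weights, rho < 0."""
        rng = np.random.default_rng() if rng is None else rng
        # The equicorrelated covariance is positive semi-definite only for
        # rho >= -1/(n_in - 1).
        assert -1.0 / (n_in - 1) <= rho < 1.0, "rho outside the valid range"
        var = sigma_w ** 2 / n_in  # per-weight variance (He-style scaling)
        cov = var * ((1.0 - rho) * np.eye(n_in) + rho * np.ones((n_in, n_in)))
        W = rng.multivariate_normal(np.zeros(n_in), cov, size=n_out)
        # Hypothetical asymmetric tweak: make one randomly chosen fan-in weight
        # per unit positive, lowering the chance that the unit starts dead.
        rows = np.arange(n_out)
        cols = rng.integers(n_in, size=n_out)
        W[rows, cols] = np.abs(W[rows, cols])
        return W

    # Example: initialize a 256-unit hidden layer acting on 128 inputs.
    W = anticorrelated_asymmetric_init(256, 128)

With rho = 0 and no tweak this reduces to the usual He initialization for ReLU; a negative rho introduces the anti-correlation that, in the abstract's account, opens up a chaotic phase with correlations saturating below unity.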
Appears in Collections: MS THESES

Files in This Item:
File                           Description  Size     Format
Dayal_thesis_signed_final.pdf               8.28 MB  Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.