Please use this identifier to cite or link to this item:
http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/7503
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Turner, Glenn | |
dc.contributor.author | MOHANTA, RISHIKA | |
dc.date.accessioned | 2022-12-14T03:47:33Z | |
dc.date.available | 2022-12-14T03:47:33Z | |
dc.date.issued | 2022-12 | |
dc.identifier.citation | 171 | en_US |
dc.identifier.uri | http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/7503 | |
dc.description.abstract | Navigating the world requires an animal to make choices in a dynamic and uncertain environment. Animals can therefore benefit from adapting their behavior to past experience, but the exact nature of the computations performed and their neural implementations are currently unclear. Extensive prior knowledge about the fruit fly (D. melanogaster) provides a unique opportunity to explore the mechanistic basis of the cognitive factors underlying decision-making. However, disentangling different candidate mechanisms requires a large number of choice trajectories from single flies. We therefore scale up a Y-maze olfactory choice assay to run 16 flies in parallel, allowing us to build and test better models using behavioral perturbation methods such as choice engineering. We take two complementary approaches to explore the learning rules the fly may use: a model-fitting approach and a novel de novo learning-rule synthesis approach. First, we fit increasingly complex reinforcement learning rules to explain choice. We find that accounting for perseverance/habits explains and predicts individual choice outcomes. Next, we develop a flexible framework that uses small neural networks to infer learning rules and predict choices. We find that small neural networks with fewer than 5 neurons, trained to estimate odor values, predict decisions across flies more accurately than the best reinforcement learning models. We analyze the functioning of these networks to reveal underlying dynamics that reiterate the presence of perseverative behavior. We successfully reproduce most of our observations across different behavioral setups. Our results suggest that habit-forming tendencies beyond naive reward-seeking may influence flies’ choices. | en_US
dc.description.sponsorship | Howard Hughes Medical Institute; Kishore Vaigyanik Protsahan Yojna (KVPY) Fellowship [SB-1712051] | en_US |
dc.language.iso | en_US | en_US |
dc.subject | Decision Making | en_US |
dc.subject | Drosophila | en_US |
dc.subject | Learning Rules | en_US |
dc.subject | Cognition | en_US |
dc.subject | Reinforcement Learning | en_US |
dc.subject | Habitual Behavior | en_US |
dc.title | Deciphering value learning rules in fruit flies using a model-driven approach | en_US |
dc.type | Thesis | en_US |
dc.description.embargo | One Year | en_US |
dc.type.degree | BS-MS | en_US |
dc.contributor.department | Dept. of Biology | en_US |
dc.contributor.registration | 20171096 | en_US |
Appears in Collections: | MS THESES |
Files in This Item:
File | Description | Size | Format
---|---|---|---
20171096_Rishika_Mohanta_MS_Thesis.pdf | MS Thesis | 5.79 MB | Adobe PDF
20171096_RM_figures_only.pdf | High Resolution Figures | 19.42 MB | Adobe PDF
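The thesis itself is under a one-year embargo, so the exact model equations are not available from this record. Purely as an illustration of the kind of learning rule the abstract describes (reward-driven value learning combined with a reward-independent perseverance, or habit, term), the following is a minimal Python sketch. The function name, the parameters (alpha, alpha_h, beta, beta_h), and the specific choice-kernel form are assumptions made for illustration; they are not the models actually fitted in the thesis.

```python
import numpy as np

def simulate_rl_with_perseverance(rewards, alpha=0.2, alpha_h=0.3,
                                  beta=3.0, beta_h=1.0, seed=0):
    """Toy two-odor choice agent (hypothetical, for illustration only).

    Q tracks reward history (value learning); H is a choice kernel that
    tracks recent choices, producing perseverance independent of reward.
    rewards[t, a] is the reward the agent would receive for choosing
    odor a on trial t.
    """
    rng = np.random.default_rng(seed)
    n_trials = rewards.shape[0]
    Q = np.zeros(2)  # learned odor values
    H = np.zeros(2)  # habit strengths (choice kernel)
    choices = np.empty(n_trials, dtype=int)
    for t in range(n_trials):
        # Choice: softmax over a weighted sum of value and habit terms
        logits = beta * Q + beta_h * H
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = rng.choice(2, p=p)
        choices[t] = a
        # Value update: delta rule on the chosen odor only
        Q[a] += alpha * (rewards[t, a] - Q[a])
        # Habit update: chosen odor drifts toward 1, the other toward 0
        H += alpha_h * (np.eye(2)[a] - H)
    return choices

# Example: odor 0 rewarded on 80% of trials, odor 1 on 20%
rewards = (np.random.default_rng(1).random((500, 2))
           < np.array([0.8, 0.2])).astype(float)
print(simulate_rl_with_perseverance(rewards)[:20])
```

Fitting such a model to observed fly choices would amount to maximizing the likelihood of those choices under the softmax probabilities; a fitted beta_h greater than zero would indicate a tendency to repeat recent choices regardless of reward, the perseverance the abstract refers to.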