Please use this identifier to cite or link to this item: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/2914
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Camarasa, Gerardo-Aragon | en_US
dc.contributor.author | PORE, AMEYA | en_US
dc.date.accessioned | 2019-05-06T08:17:17Z | -
dc.date.available | 2019-05-06T08:17:17Z | -
dc.date.issued | 2019-05 | en_US
dc.identifier.uri | http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/2914 | -
dc.description.abstract | Robots have transformed the manufacturing industry and have been used for scientific exploration in environments inaccessible to humans, such as distant planets and oceans. However, a major barrier to their universal adoption is their fragility and lack of robustness in complex, highly diverse environments. This project constitutes the initial steps towards flexible exploration strategies that can be applied to the challenging problem of autonomous grasping of rigid and deformable objects. Here, we employ recent advances in Deep Reinforcement Learning (RL) to generate simple reactive behaviours, such as approaching, manipulating and retracting, in order to pick up an object. Once such simple behaviours are learnt, they can be sequenced in various combinations to give rise to a complex task. RL is a trial-and-error optimisation technique in which an agent takes actions in an environment to maximise some notion of cumulative reward. Current research in RL builds on traditional techniques such as Deep Q-learning and policy-gradient methods. These methods work well when the feedback/reward is dense. In real-life scenarios, however, the feedback is often sparse, and these methods then tend to fail to find optimal solutions and to explore the environment robustly. In this work, we have implemented two different approaches to such sparse-reward problems: Curiosity, and a repertoire of reactive behaviours for long-time-step tasks. Our results show an immense reduction in the training steps required to reach the maximum-reward state in a high-dimensional continuous action space compared to the baselines. | en_US
dc.description.sponsorship | Erasmus+ | en_US
dc.language.iso | en | en_US
dc.subject | 2019 | -
dc.subject | Artificial Intelligence | en_US
dc.subject | Machine learning | en_US
dc.subject | Reinforcement learning | en_US
dc.subject | Robotics | en_US
dc.subject | Computer vision | en_US
dc.subject | Computing Science | en_US
dc.subject | Behaviour based robotics | en_US
dc.title | Reactive Reinforcement learning for Robotic Manipulation | en_US
dc.type | Thesis | en_US
dc.type.degree | BS-MS | en_US
dc.contributor.department | Interdisciplinary | en_US
dc.contributor.registration | 20141119 | en_US
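The sparse-reward failure mode and the curiosity-style remedy described in the abstract can be illustrated with a minimal sketch. This is not code from the thesis: the chain environment, the count-based bonus 0.1/sqrt(N(s)), and all hyperparameters are illustrative assumptions. Without the intrinsic bonus, tabular Q-learning on this chain almost never observes the single rewarding state; adding a novelty bonus for rarely visited states drives the agent along the chain until the extrinsic reward is found.

```python
import random

class ChainEnv:
    """Sparse-reward chain: the agent starts at state 0 and receives
    reward 1 only on reaching state n-1; every other step gives 0."""
    def __init__(self, n=10):
        self.n = n
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):            # action: 0 = left, 1 = right
        delta = 1 if action == 1 else -1
        self.state = max(0, min(self.n - 1, self.state + delta))
        done = self.state == self.n - 1
        reward = 1.0 if done else 0.0  # sparse extrinsic reward
        return self.state, reward, done

def train(curiosity=True, episodes=500, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning with an optional count-based curiosity bonus
    (all hyperparameter values here are illustrative assumptions)."""
    rng = random.Random(seed)
    env = ChainEnv()
    q = [[0.0, 0.0] for _ in range(env.n)]
    visits = [0] * env.n               # state visit counts for the novelty bonus
    for _ in range(episodes):
        s = env.reset()
        for _ in range(50):            # step limit per episode
            greedy = 0 if q[s][0] >= q[s][1] else 1
            a = rng.randrange(2) if rng.random() < eps else greedy
            s2, r, done = env.step(a)
            visits[s2] += 1
            if curiosity:              # intrinsic reward ~ 1 / sqrt(visit count)
                r += 0.1 / visits[s2] ** 0.5
            target = r + gamma * max(q[s2]) * (not done)
            q[s][a] += alpha * (target - q[s][a])
            s = s2
            if done:
                break
    return q

q = train(curiosity=True)
# Greedy policy per non-terminal state: 1 means "move right, towards the goal"
policy = [0 if q[s][0] >= q[s][1] else 1 for s in range(9)]
```

The thesis applies these ideas to high-dimensional continuous control rather than a toy chain, but the exploration dynamic is the same: the intrinsic bonus decays as states become familiar, so the learned values eventually reflect only the extrinsic task reward.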
Appears in Collections:MS THESES

Files in This Item:
File | Description | Size | Format
Final_MS_thesis_after_correction .pdf |  | 916.5 kB | Adobe PDF | View/Open


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.