Please use this identifier to cite or link to this item:
http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/2914
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Camarasa, Gerardo-Aragon | en_US |
dc.contributor.author | PORE, AMEYA | en_US |
dc.date.accessioned | 2019-05-06T08:17:17Z | |
dc.date.available | 2019-05-06T08:17:17Z | |
dc.date.issued | 2019-05 | en_US |
dc.identifier.uri | http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/2914 | - |
dc.description.abstract | Robots have transformed the manufacturing industry and have been used for scientific exploration in environments inaccessible to humans, such as distant planets and oceans. However, a major barrier to their universal adoption is their fragility and lack of robustness in complex, highly diverse environments. This project constitutes the initial steps towards flexible exploration strategies that can be applied to challenging problems in the autonomous grasping of rigid and deformable objects. Here, we employ recent advances in Deep Reinforcement Learning (RL) to generate simple reactive behaviours, such as approaching, manipulating and retracting, in order to pick up an object. Once such simple behaviours are learnt, they can be sequenced in various combinations to give rise to a complex task. RL is a trial-and-error optimisation technique in which an agent takes actions in an environment to maximise some notion of cumulative reward. Current research in RL has been built on techniques such as Deep Q-learning and policy gradient methods, which work well when the feedback/reward is dense. In real-life scenarios, however, the feedback is often sparse, and these methods tend to fail to find optimal solutions and to explore the environment robustly. In this work, we have implemented two approaches to such sparse-reward problems, namely Curiosity and a Reactive Behaviour Repertoire for long-time-step tasks (a minimal sketch of the curiosity mechanism follows this metadata record). Our results show a substantial reduction in the training steps required to reach the maximum-reward state in a high-dimensional continuous action space, compared to the baselines. | en_US |
dc.description.sponsorship | Erasmus+ | en_US |
dc.language.iso | en | en_US |
dc.subject | 2019 | |
dc.subject | Artificial Intelligence | en_US |
dc.subject | Machine learning | en_US |
dc.subject | Reinforcement learning | en_US |
dc.subject | Robotics | en_US |
dc.subject | Computer vision | en_US |
dc.subject | Computing Science | en_US |
dc.subject | Behaviour based robotics | en_US |
dc.title | Reactive Reinforcement learning for Robotic Manipulation | en_US |
dc.type | Thesis | en_US |
dc.type.degree | BS-MS | en_US |
dc.contributor.department | Interdisciplinary | en_US |
dc.contributor.registration | 20141119 | en_US |
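The curiosity approach the abstract refers to is commonly realised as an intrinsic reward proportional to the prediction error of a learned forward-dynamics model (in the spirit of Pathak et al.'s Intrinsic Curiosity Module). This record does not give the thesis's exact formulation, so the following is a minimal illustrative sketch under that assumption; the names `ForwardModel`, `curiosity_reward` and the scaling factor `eta` are hypothetical, not taken from the thesis.

```python
import numpy as np

class ForwardModel:
    """Minimal linear forward-dynamics model: predicts the next state
    from the current state and action, trained by gradient descent on
    the squared prediction error."""

    def __init__(self, state_dim, action_dim, lr=1e-2):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def update(self, state, action, next_state):
        """One gradient step on 0.5 * ||prediction - next_state||^2;
        returns the prediction error measured before the update."""
        x = np.concatenate([state, action])
        err = self.W @ x - next_state
        self.W -= self.lr * np.outer(err, x)
        return err

def curiosity_reward(model, state, action, next_state, eta=0.1):
    """Intrinsic reward: scaled squared prediction error of the forward
    model. Novel, poorly modelled transitions yield a larger bonus."""
    err = model.update(state, action, next_state)
    return eta * float(err @ err)

# Hypothetical use in a training loop: the agent optimises the sum of
# the (often zero) sparse task reward and the curiosity bonus.
model = ForwardModel(state_dim=4, action_dim=2)
s, a, s_next = np.zeros(4), np.ones(2), np.full(4, 0.5)
r_extrinsic = 0.0                      # sparse task reward
r_total = r_extrinsic + curiosity_reward(model, s, a, s_next)
print(r_total)
```

Because the bonus shrinks as the forward model improves, the agent is drawn toward poorly predicted (novel) states even when the extrinsic reward is sparse, which is the failure mode the abstract attributes to plain Deep Q-learning and policy gradient methods.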
Appears in Collections: MS THESES
Files in This Item:
File | Description | Size | Format
---|---|---|---
Final_MS_thesis_after_correction .pdf | | 916.5 kB | Adobe PDF