Please use this identifier to cite or link to this item: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/4877
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Gavenciak, Tomas | en_US
dc.contributor.author | SARKAR, SAYAN | en_US
dc.date.accessioned | 2020-07-13T04:01:18Z | -
dc.date.available | 2020-07-13T04:01:18Z | -
dc.date.issued | 2019-11 | en_US
dc.identifier.uri | http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/4877 | -
dc.description.abstract | How do we teach machines to do something that we can perform reasonably well, but cannot easily express as a utility maximization problem? Can machines learn the underlying utility of a domain from many human demonstrations? The goal of the field of Inverse Reinforcement Learning (IRL) is to infer the underlying goal of a domain from expert (human) demonstrations. This thesis systematically surveys the current IRL literature, with a formal introduction to and motivation for the problem. We discuss the central challenges of the domain and expound on how different algorithms address them. We propose a reformulation of the IRL problem that includes ranked sets of trajectories at different levels of expert capability, and discuss how this might lead to a new family of algorithms, motivated by some recently developed approaches. We conclude by discussing broad advances in the research area and possibilities for further extension. | en_US
dc.language.iso | en | en_US
dc.subject | Mathematics | en_US
dc.subject | 2020 | en_US
dc.title | Frontiers in Inverse Reinforcement Learning | en_US
dc.type | Thesis | en_US
dc.type.degree | BS-MS | en_US
dc.contributor.department | Dept. of Mathematics | en_US
dc.contributor.registration | 20141132 | en_US
Appears in Collections:MS THESES

Files in This Item:
File | Description | Size | Format
thesis.pdf(2).pdf | MS Thesis | 1.68 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.