Please use this identifier to cite or link to this item: http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/6086
Full metadata record
DC Field | Value | Language
dc.contributor.author | Arapostathis, Ari | en_US
dc.contributor.author | BISWAS, ANUP | en_US
dc.contributor.author | PRADHAN, SOMNATH | en_US
dc.date.accessioned | 2021-07-23T10:25:21Z | -
dc.date.available | 2021-07-23T10:25:21Z | -
dc.date.issued | 2021-08 | en_US
dc.identifier.citation | Proceedings of the Royal Society of Edinburgh Section A-Mathematics, 151(4), 1305-1330. | en_US
dc.identifier.issn | 0308-2105 | en_US
dc.identifier.issn | 1473-7124 | en_US
dc.identifier.uri | http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/6086 | -
dc.identifier.uri | https://doi.org/10.1017/prm.2020.61 | en_US
dc.description.abstract | In this article we consider the ergodic risk-sensitive control problem for a large class of multidimensional controlled diffusions on the whole space. We study the minimization and maximization problems under either a blanket stability hypothesis, or a near-monotone assumption on the running cost. We establish the convergence of the policy improvement algorithm for these models. We also present a more general result concerning the region of attraction of the equilibrium of the algorithm. | en_US
dc.language.iso | en | en_US
dc.publisher | Cambridge University Press | en_US
dc.subject | Principal eigenvalue | en_US
dc.subject | Semilinear differential equations | en_US
dc.subject | Stochastic representation | en_US
dc.subject | Policy improvement | en_US
dc.subject | 2021-JUL-WEEK3 | en_US
dc.subject | TOC-JUL-2021 | en_US
dc.subject | 2021 | en_US
dc.title | On the policy improvement algorithm for ergodic risk-sensitive control | en_US
dc.type | Article | en_US
dc.contributor.department | Dept. of Mathematics | en_US
dc.identifier.sourcetitle | Proceedings of the Royal Society of Edinburgh Section A-Mathematics | en_US
dc.publication.originofpublisher | Foreign | en_US
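
Note on the abstract's central object: the policy improvement algorithm (PIA) alternates policy evaluation and pointwise improvement until a fixed point (the "equilibrium of the algorithm" mentioned above) is reached. The sketch below is a minimal finite-state analogue in Python, not the paper's method: the article treats controlled diffusions on the whole space, where evaluation means solving a semilinear equation for a principal eigenvalue, whereas here evaluation reduces to the Perron eigenvalue of a cost-twisted transition matrix. The setup, the function names (perron, risk_sensitive_pia), and the discrete state/action spaces are all illustrative assumptions.

    import numpy as np

    def perron(M):
        """Principal (Perron) eigenvalue and positive eigenvector of a
        nonnegative irreducible matrix, via power iteration."""
        v = np.ones(M.shape[0])
        for _ in range(10_000):
            w = M @ v
            rho = np.linalg.norm(w)
            w = w / rho
            if np.allclose(w, v, atol=1e-12):
                break
            v = w
        return rho, v

    def risk_sensitive_pia(P, c, iters=50):
        """Policy improvement for the risk-sensitive average cost
        lambda(pi) = lim (1/T) log E[exp(sum_t c(X_t, pi(X_t)))].

        P : (A, S, S) transition kernels, one row-stochastic matrix per action.
        c : (S, A) running cost.
        In this finite-state analogue, lambda(pi) is the log of the principal
        eigenvalue of the twisted kernel M[x, y] = exp(c[x, pi(x)]) P[pi(x), x, y].
        """
        A, S, _ = P.shape
        pi = np.zeros(S, dtype=int)              # arbitrary initial stationary policy
        for _ in range(iters):
            # Policy evaluation: multiplicative Poisson equation for pi,
            # solved as a Perron eigenproblem of the cost-twisted kernel.
            M = np.exp(c[np.arange(S), pi])[:, None] * P[pi, np.arange(S), :]
            rho, v = perron(M)
            # Policy improvement: pointwise minimizer over actions.
            Q = np.exp(c.T) * (P @ v)            # Q[a, x] = e^{c(x,a)} sum_y P[a,x,y] v[y]
            new_pi = Q.argmin(axis=0)
            if np.array_equal(new_pi, pi):       # fixed point: equilibrium of the PIA
                break
            pi = new_pi
        return pi, np.log(rho)

    # Usage with random data (hypothetical; for illustration only):
    rng = np.random.default_rng(0)
    A, S = 3, 5
    P = rng.random((A, S, S)); P /= P.sum(axis=2, keepdims=True)
    c = rng.random((S, A))
    pi, lam = risk_sensitive_pia(P, c)

The improvement step minimizes, matching the minimization problem in the abstract; the maximization variant would take argmax instead. The paper's contribution is the convergence of this iteration in the diffusion setting, under either the blanket stability or the near-monotone assumption.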
Appears in Collections: JOURNAL ARTICLES

Files in This Item:
There are no files associated with this item.

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.