DC Field | Value | Language
dc.contributor.author | Arapostathis, Ari | en_US
dc.contributor.author | Biswas, Anup | en_US
dc.contributor.author | Pradhan, Somnath | en_US
dc.date.accessioned | 2021-07-23T10:25:21Z |
dc.date.available | 2021-07-23T10:25:21Z |
dc.date.issued | 2021-08 | en_US
dc.identifier.citation | Proceedings of the Royal Society of Edinburgh Section A: Mathematics, 151(4), 1305-1330. | en_US
dc.identifier.issn | 0308-2105 | en_US
dc.identifier.issn | 1473-7124 | en_US
dc.identifier.uri | http://dr.iiserpune.ac.in:8080/xmlui/handle/123456789/6086 |
dc.identifier.uri | https://doi.org/10.1017/prm.2020.61 | en_US
dc.description.abstract | In this article we consider the ergodic risk-sensitive control problem for a large class of multidimensional controlled diffusions on the whole space. We study the minimization and maximization problems under either a blanket stability hypothesis or a near-monotone assumption on the running cost. We establish the convergence of the policy improvement algorithm for these models. We also present a more general result concerning the region of attraction of the equilibrium of the algorithm. | en_US
dc.language.iso | en | en_US
dc.publisher | Cambridge University Press | en_US
dc.subject | Principal eigenvalue | en_US
dc.subject | Semilinear differential equations | en_US
dc.subject | Stochastic representation | en_US
dc.subject | Policy improvement | en_US
dc.subject | 2021-JUL-WEEK3 | en_US
dc.subject | TOC-JUL-2021 | en_US
dc.subject | 2021 | en_US
dc.title | On the policy improvement algorithm for ergodic risk-sensitive control | en_US
dc.type | Article | en_US
dc.contributor.department | Dept. of Mathematics | en_US
dc.identifier.sourcetitle | Proceedings of the Royal Society of Edinburgh Section A: Mathematics | en_US
dc.publication.originofpublisher | Foreign | en_US
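The abstract above concerns the policy improvement algorithm. As a purely illustrative aside, the following is a minimal sketch of classical policy iteration for a hypothetical two-state, two-action discounted finite MDP; it is not the risk-sensitive controlled-diffusion setting analyzed in the paper, and all states, costs, and transition probabilities below are made up for illustration.

```python
# Illustrative policy iteration on a tiny, hypothetical finite MDP
# (NOT the ergodic risk-sensitive diffusion setting of the paper above).

GAMMA = 0.9  # discount factor; the paper treats the ergodic (long-run) case

# P[a][s] = transition probability row from state s under action a;
# c[a][s] = running cost in state s under action a. All values hypothetical.
P = {0: [[0.8, 0.2], [0.3, 0.7]],
     1: [[0.5, 0.5], [0.9, 0.1]]}
c = {0: [1.0, 2.0],
     1: [1.5, 0.5]}
states = [0, 1]
actions = [0, 1]

def evaluate(policy, tol=1e-10):
    """Iteratively solve the policy's Bellman equation V = c + GAMMA * P V."""
    V = [0.0, 0.0]
    while True:
        Vn = [c[policy[s]][s]
              + GAMMA * sum(P[policy[s]][s][t] * V[t] for t in states)
              for s in states]
        if max(abs(Vn[s] - V[s]) for s in states) < tol:
            return Vn
        V = Vn

def improve(V):
    """Greedy (cost-minimizing) improvement step against the current value V."""
    return [min(actions,
                key=lambda a: c[a][s]
                + GAMMA * sum(P[a][s][t] * V[t] for t in states))
            for s in states]

# Policy iteration: evaluate, improve, stop when the policy is a fixed point.
policy = [0, 0]
while True:
    V = evaluate(policy)
    new_policy = improve(V)
    if new_policy == policy:
        break
    policy = new_policy

print("optimal policy:", policy)
print("value:", [round(v, 3) for v in V])
```

The stopping rule mirrors the fixed-point view taken in the paper: iteration halts once the improvement step no longer changes the policy, i.e. the policy is an equilibrium of the improvement map.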