Abstract:
A Markovian queueing model is considered in which servers of various types work in parallel to process jobs from a number of classes at rates $\mu_{ij}$ that depend on the class, $i$, and the type, $j$. The problem of dynamic resource allocation so as to minimize a risk-sensitive criterion is studied in a law-of-large-numbers scaling. Letting $X_i(t)$ denote the number of class-$i$ jobs in the system at time $t$, the cost is given by $E\exp\{n[\int_0^T h(\bar X(t))\,dt+g(\bar X(T))]\}$, where $T>0$, $h$ and $g$ are given functions satisfying regularity and growth conditions, and $\bar X=\bar X^n=n^{-1}X(n\cdot)$. It is well known in the analogous context of controlled diffusions, and has been shown for some classes of stochastic networks, that the limit behavior, as $n\to\infty$, is governed by a differential game (DG) in which the state dynamics is given by a fluid equation for the formal limit $\varphi$ of $\bar X$, while the cost consists of $\int_0^T h(\varphi(t))\,dt+g(\varphi(T))$ and an additional term that originates from the underlying large-deviation rate function. We prove that a DG of this type indeed governs the asymptotic behavior, that the game has value, and that the value can be characterized by the corresponding Hamilton--Jacobi--Isaacs (HJI) equation. The framework allows the number of servers $N$ to be either fixed or to grow, $N\to\infty$, provided $N=o(n)$.
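Schematically, the limiting differential game described above pairs fluid dynamics with a rate-function cost. The sketch below is illustrative only: the drift $b$, the local rate function $L$, the control $u$, and the Hamiltonian $H$ are placeholder notation not fixed by the abstract, and the precise admissible controls and rate function are as defined in the body of the paper.

```latex
% Fluid dynamics for the formal limit \varphi of \bar X,
% driven by a resource-allocation control u:
\[
\dot\varphi(t) = b\bigl(\varphi(t),u(t)\bigr), \qquad \varphi(0)=x.
\]
% Game cost: the running and terminal costs h, g from the
% risk-sensitive criterion, plus the large-deviation term,
% written here via a local rate function L:
\[
J = \int_0^T \Bigl[\, h\bigl(\varphi(t)\bigr)
      + L\bigl(\varphi(t),\dot\varphi(t)\bigr) \Bigr]\,dt
      + g\bigl(\varphi(T)\bigr).
\]
% The value V(t,x) of the game is then characterized as a
% solution of the HJI terminal-value problem, with Hamiltonian
% H obtained from b and L by inf/sup over the two players:
\[
\partial_t V(t,x) + H\bigl(x,\nabla_x V(t,x)\bigr) + h(x) = 0,
\qquad V(T,x)=g(x).
\]
```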