We investigate, via the dynamic programming approach, infinite horizon optimal control problems with state constraint, where the state X_t is given as the solution of a controlled stochastic differential equation and the state constraint is described either by the condition that X_t ∈ Ḡ for all t > 0 or by the condition that X_t ∈ G for all t > 0, where G is a given open subset of R^N. We assume that for each z ∈ ∂G there exists a_z ∈ A, where A denotes the control set, such that the diffusion matrix σ(x, a) vanishes for a = a_z and for x ∈ ∂G in a neighborhood of z, and the drift vector b(x, a) points into G at z for a = a_z and x = z; under these and some other mild assumptions, we establish the existence and uniqueness of a continuous viscosity solution of the state constraint problem for the associated Hamilton-Jacobi-Bellman equation, prove that the value function V associated with the constraint Ḡ, the value function V_r of the relaxed problem associated with the constraint Ḡ, and the value function V_0 associated with the constraint G all satisfy the state constraint problem in the viscosity sense, and establish Hölder regularity results for the viscosity solution of the state constraint problem.
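For orientation, a typical infinite horizon discounted problem of this kind leads to a Hamilton-Jacobi-Bellman equation of the following standard form; the discount rate λ > 0 and the running cost f are illustrative assumptions of this sketch and are not specified in the abstract:

```latex
% Standard infinite-horizon HJB equation for a value function V
% (lambda = discount rate, f = running cost: illustrative assumptions,
% not taken from the abstract itself).
\[
  \lambda V(x) + \sup_{a \in A} \Big\{
    -\tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma(x,a)\,\sigma(x,a)^{\top} D^{2} V(x)\big)
    - b(x,a)\cdot DV(x) - f(x,a)
  \Big\} = 0, \qquad x \in G.
\]
```

In the state constraint (or "constrained viscosity solution") formulation, the solution is required to be a viscosity subsolution of this equation in the open set G and a viscosity supersolution on the closure Ḡ, so that no boundary data need be prescribed on ∂G.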
Journal: Indiana University Mathematics Journal
Publication status: Published - 2002