Raúl Montes-de-Oca
Research Professor, Department of Mathematics, UAM-Iztapalapa
Verified email at xanum.uam.mx - Homepage
Title
Cited by
Year
Recurrence conditions for Markov decision processes with Borel state space: a survey
O Hernández-Lerma, R Montes-de-Oca, R Cavazos-Cadena
Annals of Operations Research 28 (1), 29-46, 1991
76 · 1991
Conditions for the uniqueness of optimal policies of discounted Markov decision processes
D Cruz-Suárez, R Montes-de-Oca, F Salem-Silva
Mathematical Methods of Operations Research 60, 415-436, 2004
44 · 2004
The value iteration algorithm in risk-sensitive average Markov decision chains with finite state space
R Cavazos-Cadena, R Montes-de-Oca
Mathematics of Operations Research 28 (4), 752-776, 2003
38 · 2003
Discounted Markov control processes induced by deterministic systems
H Cruz-Suárez, R Montes-de-Oca
Kybernetika 42 (6), 647-664, 2006
22 · 2006
An envelope theorem and some applications to discounted Markov decision processes
H Cruz-Suárez, R Montes-de-Oca
Mathematical Methods of Operations Research 67 (2), 299-321, 2008
21 · 2008
Application of average dynamic programming to inventory systems
O Vega-Amaya, R Montes-de-Oca
Mathematical Methods of Operations Research 47 (3), 451-471, 1998
20 · 1998
Value iteration in average cost Markov control processes on Borel spaces
R Montes-de-Oca, O Hernández-Lerma
Acta Applicandae Mathematica 42 (2), 203-222, 1996
18 · 1996
Conditions for average optimality in Markov control processes with unbounded costs and controls
R Montes-de-Oca, O Hernández-Lerma
Centro de Investigacion y de Estudios Avanzados del IPN. Departamento de …, 1992
18 · 1992
Nearly optimal policies in risk-sensitive positive dynamic programming on discrete spaces
R Cavazos-Cadena, R Montes-de-Oca
Mathematical Methods of Operations Research 52, 133-167, 2000
17 · 2000
Optimal stationary policies in risk-sensitive dynamic programs with finite state space and nonnegative rewards
R Cavazos-Cadena, R Montes-de-Oca
Applicationes Mathematicae 27 (2), 167-185, 2000
17 · 2000
A note on the existence of optimal policies in total reward dynamic programs with compact action sets
R Cavazos-Cadena, EA Feinberg, R Montes-De-Oca
Mathematics of Operations Research 25 (4), 657-666, 2000
15 · 2000
The average cost optimality equation for Markov control processes on Borel spaces
R Montes-de-Oca
Systems & Control Letters 22 (5), 351-357, 1994
15 · 1994
Markov decision processes on Borel spaces with total cost and random horizon
H Cruz-Suárez, R Ilhuicatzi-Roldán, R Montes-de-Oca
Journal of Optimization Theory and Applications 162, 329-346, 2014
14 · 2014
Discounted cost optimality problem: stability with respect to weak metrics
E Gordienko, E Lemus-Rodríguez, R Montes-de-Oca
Mathematical Methods of Operations Research 68, 77-96, 2008
12 · 2008
A consumption-investment problem modelled as a discounted Markov decision process
H Cruz-Suárez, R Montes-de-Oca, G Zacarías
Kybernetika 47 (6), 909-929, 2011
11 · 2011
Uniform convergence of value iteration policies for discounted Markov decision processes
D Cruz-Suárez, R Montes-de-Oca
Boletín de la Sociedad Matemática Mexicana: Tercera Serie 12 (1), 133-148, 2006
11 · 2006
Average cost Markov control processes: stability with respect to the Kantorovich metric
E Gordienko, E Lemus-Rodríguez, R Montes-de-Oca
Mathematical Methods of Operations Research 70, 13-33, 2009
9 · 2009
Nonstationary value iteration in controlled Markov chains with risk-sensitive average criterion
R Cavazos-Cadena, R Montes-De-Oca
Journal of Applied Probability 42 (4), 905-918, 2005
9 · 2005
Estimates for perturbations of average Markov decision processes with a minimal state and upper bounded by stochastically ordered Markov chains
R Montes-de-Oca, F Salem-Silva
Kybernetika 41 (6), 757-772, 2005
9 · 2005
A counterexample on sample-path optimality in stable Markov decision chains with the average reward criterion
R Cavazos-Cadena, R Montes-de-Oca, K Sladký
Journal of Optimization Theory and Applications 163, 674-684, 2014
8 · 2014
Articles 1–20