Markov decision processes

Markov decision processes (MDPs) offer an elegant mathematical framework for representing planning and decision problems under uncertainty. However, the simple textbook MDP assumes discrete states, discrete time, and unstructured process dynamics. Such a representation is too limited for many real-world domains, which are often factored, involve continuous quantities (such as temperature, speed, or position), and/or rely on imperfect observations. The aim of our research is (1) to devise MDP models that represent complex real-world decision problems more naturally, and (2) to develop algorithms that solve these problems more efficiently.
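
For concreteness, the sketch below shows the textbook setting described above: a small discrete-state, discrete-time MDP solved by value iteration. The transition and reward tables are hypothetical illustrative data, not taken from our work.

import numpy as np

n_states, n_actions, gamma = 3, 2, 0.95

# P[a][s][s'] = probability of moving from state s to s' under action a
# (hypothetical numbers; each row sums to 1)
P = np.array([
    [[0.8, 0.2, 0.0], [0.0, 0.9, 0.1], [0.1, 0.0, 0.9]],   # action 0
    [[0.1, 0.9, 0.0], [0.5, 0.0, 0.5], [0.0, 0.2, 0.8]],   # action 1
])
# R[a][s] = expected immediate reward for taking action a in state s
R = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 2.0]])

V = np.zeros(n_states)
for _ in range(1000):
    # Bellman backup: Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] * V[s']
    Q = R + gamma * P @ V
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:   # converged to the fixed point
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)   # greedy policy w.r.t. the converged values
print("V* =", V, "policy =", policy)

Note that this tabular approach stores one value per state, which is exactly what breaks down in the factored, continuous, or partially observed settings our research targets.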

Research projects:

Our most recent MDP research has focused on developing Approximate Linear Programming (ALP) methods for solving large factored MDPs with continuous or hybrid state and action spaces. We showed experimentally that these methods can solve large temporal optimization problems with high-dimensional state spaces, outperforming existing discretization approaches in both solution quality and computational efficiency.
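
To illustrate the ALP idea, the sketch below approximates the value function as a weighted sum of basis functions, V(s) ~= sum_i w_i * phi_i(s), and optimizes the weights with a linear program, so the number of variables depends on the number of basis functions rather than the number of states. The tiny discrete MDP, the basis functions, and the uniform state-relevance weights are all hypothetical; our actual work extends this formulation to continuous and hybrid factored MDPs.

import numpy as np
from scipy.optimize import linprog

n_states, n_actions, gamma = 5, 2, 0.9
rng = np.random.default_rng(0)

# Hypothetical transition kernel P[a, s, :] and rewards R[a, s]
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)           # normalize rows to distributions
R = rng.random((n_actions, n_states))

# Basis functions phi_i(s): a constant feature plus two indicator features
Phi = np.column_stack([np.ones(n_states), np.eye(n_states)[:, :2]])  # (|S|, k)
k = Phi.shape[1]

# ALP: minimize sum_s c(s) * (Phi @ w)(s)
#      s.t.  (Phi @ w)(s) >= R[a, s] + gamma * P[a, s, :] @ (Phi @ w)  for all s, a
c = Phi.sum(axis=0)                          # uniform state-relevance weights
A_ub, b_ub = [], []
for a in range(n_actions):
    # Rewritten for linprog's A_ub @ w <= b_ub convention:
    # -(Phi - gamma * P[a] @ Phi) @ w <= -R[a]
    A_ub.append(-(Phi - gamma * P[a] @ Phi))
    b_ub.append(-R[a])

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
              bounds=[(None, None)] * k, method="highs")
w = res.x
print("ALP weights:", w, "=> approximate values:", Phi @ w)

In the factored case, the key algorithmic challenge is that the constraint set above grows with the state-action space; exploiting the factored structure to represent and prune these constraints compactly is what makes the approach scale.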

CS team members:

Publications:
