Markov decision processes
Markov decision processes (MDPs) offer an elegant mathematical framework
for representing planning and decision problems in the presence of
uncertainty. However, the simple textbook MDP assumes discrete states and
discrete time, and it does not exploit structure when modeling the process dynamics.
Such a representation is too limited for many real-world domains, which
are often factorized, include continuous quantities (such as
temperature, speed, or position), and/or involve imperfect observations.
The aim of our research is (1) to devise MDP models
that offer more natural representations of complex real-world decision problems, and
(2) to develop algorithmic solutions that let us solve these problems more efficiently.
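As a concrete illustration of the basic framework, the sketch below runs value iteration on a hypothetical two-state "maintenance" MDP. The model, rewards, and the function name `value_iteration` are our own toy assumptions for exposition, not an example taken from the work listed below.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Tabular value iteration.
    P: (actions, states, states) transition probabilities P[a][s][s'].
    R: (states, actions) immediate rewards R[s][a].
    Returns the optimal values V and a greedy policy."""
    V = np.zeros(R.shape[0])
    while True:
        # Bellman backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical model: state 0 = machine broken, state 1 = machine working;
# action 0 = wait, action 1 = repair (in state 0) / maintain (in state 1).
P = np.array([
    [[1.0, 0.0], [0.3, 0.7]],   # action 0: broken stays broken, working may degrade
    [[0.2, 0.8], [0.0, 1.0]],   # action 1: repair usually succeeds, maintenance keeps it working
])
R = np.array([
    [0.0, -1.0],                # broken: waiting is free, repairing costs 1
    [1.0,  1.0],                # working machine yields +1 either way
])

V, policy = value_iteration(P, R)
# The optimal policy repairs in state 0 and maintains in state 1.
```

With discounting at 0.9, the working state is worth 1/(1-0.9) = 10 under the maintain action, which makes paying the repair cost in the broken state worthwhile.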
- Approximate Linear Programming (ALP) for solving large factored MDPs with continuous and hybrid state components.
- Partially observable Markov decision processes and their approximations.
- Hierarchical decomposition methods.
- Applications of MDPs to medicine, investments, agent navigation, and traffic flow optimization.
Our most recent MDP research has focused on the development of Approximate Linear Programming (ALP) methods
for solving large factored MDPs with continuous or hybrid state and action spaces.
We showed experimentally that these methods can solve large temporal optimization problems with
high-dimensional state spaces and outperform existing discretization
approaches, both in the quality of the results and in the efficiency of the
computation.
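To convey the ALP idea in its simplest (discrete-state) form: the value function is approximated as a linear combination of basis functions, V_w(s) = sum_k w_k f_k(s), and the weights are found by a linear program whose constraints are the Bellman inequalities. The sketch below is our own toy illustration of this principle, not the hybrid-state formulation from the papers listed here; the function name `alp_weights` and the SciPy solver choice are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def alp_weights(P, R, F, gamma=0.9, alpha=None):
    """Approximate linear programming for a discrete MDP.
    P: (actions, states, states) transitions, R: (states, actions) rewards,
    F: (states, K) basis-function values f_k(s),
    alpha: state-relevance weights (uniform by default)."""
    S, K = F.shape
    A = P.shape[0]
    if alpha is None:
        alpha = np.full(S, 1.0 / S)
    c = alpha @ F                          # objective: sum_s alpha(s) * V_w(s)
    EF = np.einsum('ast,tk->ask', P, F)    # expected next-state basis values
    # Bellman inequalities V_w(s) >= R(s,a) + gamma * E[V_w(s')],
    # rewritten as A_ub @ w <= b_ub for linprog:
    A_ub = -(F[None, :, :] - gamma * EF).reshape(A * S, K)
    b_ub = -R.T.reshape(A * S)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * K)
    return res.x

# Toy 2-state machine-maintenance MDP (a hypothetical example):
P = np.array([
    [[1.0, 0.0], [0.3, 0.7]],
    [[0.2, 0.8], [0.0, 1.0]],
])
R = np.array([
    [0.0, -1.0],
    [1.0,  1.0],
])
# With indicator basis functions (one per state), ALP reduces to the
# exact LP formulation, so the weights equal the optimal values V*.
w = alp_weights(P, R, np.eye(2))
```

The power of the approach comes from using far fewer basis functions than states, so the LP has only K variables even when the state space is huge; choosing good basis functions is then the central modeling question.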

Continuous and hybrid-state MDPs
- Branislav Kveton, Milos Hauskrecht. Partitioned Linear Programming Approximations for MDPs. In Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, Helsinki, Finland, July 2008.
- B. Kveton, M. Hauskrecht, C. Guestrin. Solving Factored MDPs with Hybrid State and Action Variables. Journal of Artificial Intelligence Research, accepted for publication, 2006.
- B. Kveton and M. Hauskrecht. Learning Basis Functions in Hybrid Domains. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-06), Boston, MA, July 2006.
- B. Kveton and M. Hauskrecht. Solving Factored MDPs with Exponential-Family Transition Models. In Proceedings of the 16th International Conference on Automated Planning and Scheduling, UK, June 2006.
- M. Hauskrecht and B. Kveton. Approximate Linear Programming for Solving Hybrid Factored MDPs. In Proceedings of the 9th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, Florida, January 2006.
- B. Kveton and M. Hauskrecht. An MCMC Approach to Solving Hybrid Factored MDPs. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, August 2005.
- C. Guestrin, M. Hauskrecht, B. Kveton. Solving Factored MDPs with Continuous and Discrete Variables. In Proceedings of the AAAI Workshop on Learning and Planning in Markov Processes: Advances and Challenges, pages 19-24, August 2004.
- C. Guestrin, M. Hauskrecht, B. Kveton. Solving Factored MDPs with Continuous and Discrete Variables. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 235-242, July 2004.
- B. Kveton, M. Hauskrecht. Heuristic Refinements of Approximate Linear Programming for Factored Continuous-State Markov Decision Processes. In Proceedings of the 14th International Conference on Automated Planning and Scheduling, pages 306-314, June 2004.
- M. Hauskrecht, B. Kveton. Linear program approximations for factored continuous-state Markov decision processes. Advances in Neural Information Processing Systems 16, pages 895-902, December 2003.

Partially observable MDPs
- M. Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, vol. 13, pp. 33-94, 2000.
- M. Hauskrecht, H. Fraser. Planning treatment of ischemic heart disease with partially observable Markov decision processes. Artificial Intelligence in Medicine, vol. 18, pp. 221-244, 2000.
- M. Hauskrecht. Planning and control in stochastic domains with imperfect information. PhD dissertation, MIT-LCS-TR-738, 1997.
- M. Hauskrecht. Incremental methods for computing bounds in partially observable Markov decision processes. In Proceedings of the 14th National Conference on Artificial Intelligence, Providence, RI, pp. 734-739, 1997.

Hierarchical MDPs and decomposition methods
- M. Hauskrecht, N. Meuleau, C. Boutilier, L. Pack Kaelbling, T. Dean. Hierarchical solution of Markov decision processes using macro-actions. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, pp. 220-229, 1998.
- N. Meuleau, M. Hauskrecht, K. Kim, L. Peshkin, L. Pack Kaelbling, T. Dean, C. Boutilier. Solving very large weakly-coupled Markov decision processes. In Proceedings of the 15th National Conference on Artificial Intelligence, Madison, WI, pp. 165-172, 1998.
- M. Hauskrecht. Planning with macro-actions: Effect of initial value function estimate on the convergence rate of value iteration. Working paper, 1998.

Applications to medicine and investments
- M. Hauskrecht, L. Ortiz, I. Tsochantaridis, E. Upfal. Efficient methods for computing investment strategies for multi-market commodity trading. Applied Artificial Intelligence, vol. 15, pp. 429-452, 2001.
- M. Hauskrecht. Evaluation and optimization of management plans in stochastic domains with imperfect information. In Proceedings of the Twelfth International Workshop on Principles of Diagnosis, pp. 71-78, 2001.
- M. Hauskrecht, G. Pandurangan, E. Upfal. Computing near-optimal strategies for stochastic investment planning problems. In Proceedings of the 16th International Joint Conference on Artificial Intelligence, pp. 1310-1315, 1999.
- M. Hauskrecht, H. Fraser. Modeling Treatment of Ischemic Heart Disease with Partially Observable Markov Decision Processes. In Proceedings of the American Medical Informatics Association Annual Symposium on Computer Applications in Health Care, Orlando, Florida, pp. 538-542, 1998.

Semi-Markov models for vehicle routing optimization
This web page is maintained by Milos.