%0 Journal Article
%T Planning in Markov Stochastic Task Domains
%A Yong Lin
%A Fillia Makedon
%J International Journal of Artificial Intelligence and Expert Systems
%D 2010
%I Computer Science Journals
%X In decision theoretic planning, a challenge for Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs) is that many problem domains contain large state spaces and complex tasks, which result in poor solution performance. We develop a task analysis and modeling (TAM) approach, in which the (PO)MDP model is separated into a task view and an action view. In the task view, TAM models the problem domain using a task equivalence model, with task-dependent abstract states and observations. We provide a learning algorithm to obtain the parameter values of task equivalence models, and we present three typical examples to explain the TAM approach. Experimental results indicate that our approach can greatly improve the computational capacity of task planning in Markov stochastic domains.
%K Markov decision processes
%K POMDP
%K task planning
%K uncertainty
%K decision-making
%U http://www.cscjournals.org/csc/manuscriptinfo.php?ManuscriptCode=71.72.63.67.43.47.51.100&JCode=IJAE&EJCode=64.65.56.60.107&Volume=48.99&Issue=46.103