Authors
木村 元, 山村 雅幸, 小林 重信
Publisher
社団法人人工知能学会 (The Japanese Society for Artificial Intelligence)
Journal
人工知能学会誌 (Journal of the Japanese Society for Artificial Intelligence) (ISSN:09128085)
Volume, issue, pages, and publication date
vol.11, no.5, pp.761-768, 1996-09-01
Citation count
60

Many conventional studies of reinforcement learning are limited to Markov decision processes (MDPs). However, real-world decision tasks are essentially non-Markovian. In this paper, we consider reinforcement learning in partially observable MDPs (POMDPs), a class of non-Markovian decision problems. Under the POMDP assumption, the environment is an MDP, but the agent has only restricted access to state information; instead, it receives observations that carry partial information about the states of the underlying MDP. We focus on a learning algorithm for memory-less stochastic policies, which map the agent's immediate observation directly to actions: memory-less approaches are suited to on-line, real-time adaptive systems with limited memory and computational resources. We then obtain the following mathematical results. First, an agent can improve its policy so as to maximize the immediate reward by stochastic gradient ascent, without estimating any states or immediate rewards. Second, it can improve the policy so as to maximize the discounted reward from an initial state by stochastic gradient ascent, without estimating any states, immediate rewards, or discounted rewards. These properties are particularly valuable in POMDPs, because no explicit estimation of states, immediate rewards, or discounted rewards is required. Making use of these results, we present an incremental policy-improvement algorithm that maximizes the average reward in POMDPs, and we confirm the rational behavior of the proposed algorithm in a simple experiment.
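To illustrate the first result, here is a minimal sketch (not the authors' exact algorithm) of a memory-less stochastic policy pi(a | o), parameterized as a softmax over observation-action weights, improved by stochastic gradient ascent on the sampled immediate reward. The class name, the observation/action space sizes, and the learning rate are illustrative assumptions; no state estimate or reward model is maintained.

```python
import numpy as np

class MemorylessSoftmaxPolicy:
    def __init__(self, n_observations, n_actions, learning_rate=0.01, seed=0):
        self.theta = np.zeros((n_observations, n_actions))  # policy parameters
        self.alpha = learning_rate
        self.rng = np.random.default_rng(seed)

    def action_probs(self, obs):
        # The policy depends only on the immediate observation:
        # no state estimation and no memory of past observations.
        prefs = self.theta[obs] - self.theta[obs].max()  # numerical stability
        e = np.exp(prefs)
        return e / e.sum()

    def act(self, obs):
        probs = self.action_probs(obs)
        return self.rng.choice(len(probs), p=probs)

    def update(self, obs, action, reward):
        # Stochastic gradient ascent on E[r]:
        #     grad_theta E[r] = E[ r * grad_theta log pi(a | o) ].
        # For a softmax policy, grad log pi(a | o) w.r.t. theta[obs] equals
        # one_hot(action) - pi(. | obs), so the sampled reward alone drives
        # the update without estimating states or expected rewards.
        probs = self.action_probs(obs)
        grad_log_pi = -probs
        grad_log_pi[action] += 1.0
        self.theta[obs] += self.alpha * reward * grad_log_pi
```

The paper's second result extends this kind of update to the discounted-reward case by accumulating the grad-log-policy terms along a trajectory; that extension is omitted from this sketch for brevity.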
