Reinforcement learning with temperature distribution based on likelihood function

Norimasa Kobori, Kenji Suzuki, Pitoyo Hartono, Shuji Hashimoto

    Research output: peer-review

    2 Citations (Scopus)

    Abstract

    In existing reinforcement learning, finding appropriate meta-parameters such as the learning rate, eligibility trace, and exploration temperature is difficult and time-consuming; in particular, in complicated, large-scale problems, rewards are often delayed, which makes the problem harder to solve. In this paper, we propose a novel method that introduces a temperature distribution into reinforcement learning. In addition to acquiring a policy based on Profit Sharing, a temperature is assigned to each state and trained by a hill-climbing method using a likelihood function based on the success or failure of the task. The proposed method reduces the parameter setting required for a given problem. We demonstrate its performance on a grid-world problem and on the control of an Acrobot.
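
    The abstract describes the method only at a high level: actions are selected by a softmax (Boltzmann) policy whose temperature is kept per state rather than globally, and the temperatures are tuned by hill-climbing on a likelihood built from episode successes and failures. The following minimal Python sketch illustrates that idea under stated assumptions; the value table Q, the function names, and the simple cool-on-success / heat-on-failure update are illustrative stand-ins, not the paper's actual likelihood formulation.

    import numpy as np

    rng = np.random.default_rng(0)

    N_STATES, N_ACTIONS = 25, 4              # e.g. a small grid world (assumed sizes)
    Q = np.zeros((N_STATES, N_ACTIONS))      # action values (learned, e.g., by Profit Sharing)
    T = np.ones(N_STATES)                    # hypothetical per-state temperature distribution

    def softmax_policy(q_row, temperature):
        """Boltzmann action selection; a higher temperature means more exploration."""
        prefs = q_row / temperature
        prefs -= prefs.max()                 # subtract the max for numerical stability
        probs = np.exp(prefs)
        return probs / probs.sum()

    def choose_action(state):
        return rng.choice(N_ACTIONS, p=softmax_policy(Q[state], T[state]))

    def update_temperatures(T, visited_states, succeeded, step=0.05, t_min=0.05):
        """Crude hill-climbing stand-in for the paper's likelihood-based update:
        cool the states visited in a successful episode (exploit more) and
        heat those visited in a failed one (explore more)."""
        for s in set(visited_states):
            T[s] = max(t_min, T[s] - step) if succeeded else T[s] + step
        return T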

    Original language: English
    Pages (from-to): 297-305
    Number of pages: 9
    Journal: Transactions of the Japanese Society for Artificial Intelligence
    Volume: 20
    Issue: 4
    DOI
    Publication status: Published - 2005

    ASJC Scopus subject areas

    • Artificial Intelligence
