Intrinsic Motivation in Model-based Reinforcement Learning: A Brief Review

Authors

Latyshev A., Panov A.

Abstract

Reinforcement learning research offers a wide range of methods for controlling intelligent agents. Despite the progress that has been made, creating a highly autonomous agent remains a significant challenge. One potential solution to this problem is intrinsic motivation, a concept derived from developmental psychology. This review considers existing methods for defining intrinsic motivation on the basis of the world model learned by the agent. We propose a systematization of current research in this field into three categories of methods, distinguished by how they use the world model in the agent's components: complementary intrinsic reward, exploration policy, and intrinsically motivated goals. The proposed unified framework describes the architecture of agents that use a world model and intrinsic motivation to improve learning. We also examine the potential for developing new techniques in this area of research.
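To illustrate the first category above (complementary intrinsic reward), here is a minimal, hypothetical sketch of curiosity-style intrinsic motivation: the agent's world model predicts the next state, and its prediction error serves as an intrinsic bonus added to the task reward. The linear model, the `beta` weighting coefficient, and all names are illustrative assumptions, not the specific formulation of any method surveyed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class WorldModel:
    """Hypothetical linear world model: predicts next state from (state, action)."""

    def __init__(self, state_dim, action_dim):
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def update(self, state, action, next_state, lr=0.01):
        # One gradient step on the squared prediction error.
        x = np.concatenate([state, action])
        error = self.predict(state, action) - next_state
        self.W -= lr * np.outer(error, x)

def intrinsic_reward(model, state, action, next_state):
    # Curiosity as world-model prediction error: poorly modeled
    # (novel) transitions yield a large bonus.
    return float(np.sum((model.predict(state, action) - next_state) ** 2))

def total_reward(extrinsic, intrinsic, beta=0.1):
    # The agent optimizes the task reward plus a scaled intrinsic bonus.
    return extrinsic + beta * intrinsic
```

As the world model is trained on a transition, its prediction error on that transition shrinks, so the intrinsic bonus decays and exploration pressure shifts toward transitions the model still predicts poorly.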

External links

DOI: 10.48550/arXiv.2301.10067

Download PDF from arXiv: https://arxiv.org/pdf/2301.10067

ResearchGate: https://www.researchgate.net/publication/367389155_Intrinsic_Motivation_in_Model-based_Reinforcement_Learning_A_Brief_Review

Reference link

Latyshev, A., & Panov, A. I. (2023). Intrinsic Motivation in Model-based Reinforcement Learning: A Brief Review. arXiv preprint arXiv:2301.10067.