Long-Term Exploration in Persistent MDPs


Panov A. I., Skrynnik A. A., Ugadiarov L. A.


Exploration is an essential part of reinforcement learning and often limits the quality of the learned policy. Hard-exploration environments are characterized by a huge state space and sparse rewards. Under such conditions, exhaustive exploration of the environment is often impossible, and successfully training an agent requires many interaction steps. In this paper, we propose an exploration method called Rollback-Explore (RbExplore), which utilizes the concept of the persistent Markov decision process, in which agents can roll back to previously visited states during training. We test our algorithm in the hard-exploration game Prince of Persia, without rewards and without domain knowledge. On all levels of the game used in our experiments, our agent outperforms or shows results comparable to state-of-the-art curiosity methods with knowledge-based intrinsic motivation: ICM and RND. An implementation of RbExplore can be found at https://github.com/cds-mipt/RbExplore.
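The abstract's core idea, exploration in a persistent MDP where the agent may roll back to previously visited states, can be illustrated with a minimal sketch. This is not the paper's algorithm; the toy chain environment, the archive structure, and all names here are hypothetical stand-ins for illustration only.

```python
import random


class ToyPersistentEnv:
    """Toy 1-D chain environment supporting save/restore of its state,
    standing in for a persistent MDP (the paper uses Prince of Persia)."""

    def __init__(self, size=50):
        self.size = size
        self.pos = 0

    def get_state(self):
        return self.pos

    def set_state(self, state):
        # Persistence: the environment can be reset to any saved state.
        self.pos = state

    def step(self, action):
        # action is -1 or +1; position is clamped to the chain.
        self.pos = max(0, min(self.size - 1, self.pos + action))
        return self.pos


def rollback_explore(env, iterations=200, rollout_len=5, seed=0):
    """Hypothetical rollback-exploration loop: maintain an archive of
    visited states, repeatedly roll back to a stored state, and run a
    short random rollout from it, recording any newly reached states."""
    rng = random.Random(seed)
    archive = {env.get_state()}
    for _ in range(iterations):
        start = rng.choice(sorted(archive))  # pick a previously visited state
        env.set_state(start)                 # roll back to it (persistence)
        for _ in range(rollout_len):
            state = env.step(rng.choice((-1, 1)))
            archive.add(state)               # record the visited state
    return archive
```

Because rollbacks let exploration restart from the frontier of visited states rather than from the initial state, coverage of the chain grows far faster than with plain random walks from the start, which is the motivation behind using a persistent MDP for hard-exploration tasks.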

External links

DOI: 10.1007/978-3-030-89817-5_8

Download the PDF from arXiv.org (in English): https://arxiv.org/pdf/2109.10173.pdf

RbExplore implementation on GitHub: https://github.com/cds-mipt/RbExplore

ResearchGate: https://www.researchgate.net/publication/355472808_Long-Term_Exploration_in_Persistent_MDPs

How to cite

Ugadiarov L., Skrynnik A., Panov A. I. Long-Term Exploration in Persistent MDPs // Advances in Soft Computing. MICAI 2021. Part I. Lecture Notes in Computer Science, Vol. 13067, 2021, pp. 108-120.