Exploration is an essential part of reinforcement learning, and its effectiveness limits the quality of the learned policy. Hard-exploration environments are characterized by a huge state space and sparse rewards. In such conditions, an exhaustive exploration of the environment is often impossible, and successful training of an agent requires a large number of interaction steps. In this paper, we propose an exploration method called Rollback-Explore (RbExplore), which utilizes the concept of the persistent Markov decision process, in which agents can roll back to previously visited states during training. We test our algorithm in the hard-exploration Prince of Persia game, without rewards and domain knowledge. On all levels of the game used in our experiments, our agent outperforms or shows results comparable to state-of-the-art curiosity methods with knowledge-based intrinsic motivation: ICM and RND. An implementation of RbExplore can be found at https://github.com/cds-mipt/RbExplore
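For intuition, below is a minimal sketch of the rollback idea on a toy environment: an archive of visited states is maintained, a rollback point is picked, the environment is restored to that state, and exploration continues from there. The ChainEnv class, the visit-count novelty heuristic, and all names are illustrative assumptions, not the paper's implementation; see the GitHub repository above for the actual RbExplore code.

```python
# Minimal sketch of rollback-based exploration in a persistent MDP.
# ChainEnv and the visit-count heuristic are illustrative assumptions,
# not the RbExplore implementation from the paper.
import random

class ChainEnv:
    """Toy environment whose full state can be saved and restored."""
    def __init__(self, length=50):
        self.length = length
        self.pos = 0

    def get_state(self):
        return self.pos                      # persistent MDP: state is serializable

    def set_state(self, state):
        self.pos = state                     # roll back to a previously visited state

    def step(self, action):
        self.pos = max(0, min(self.length - 1, self.pos + (1 if action else -1)))
        return self.pos


def rollback_explore(env, iterations=200, rollout_len=10):
    """Grow an archive of visited states by repeatedly rolling back and exploring."""
    archive = {env.get_state(): 0}           # state -> visit count
    for _ in range(iterations):
        # Prefer rarely visited states as rollback points (simple novelty proxy).
        start = min(archive, key=archive.get)
        env.set_state(start)                 # roll back instead of resetting to the start
        for _ in range(rollout_len):
            obs = env.step(random.randint(0, 1))
            archive[obs] = archive.get(obs, 0) + 1
    return archive


if __name__ == "__main__":
    explored = rollback_explore(ChainEnv())
    print(f"visited {len(explored)} distinct states")
```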
DOI: 10.1007/978-3-030-89817-5_8
Download PDF at arXiv.org: https://arxiv.org/pdf/2109.10173.pdf
RbExplore implementation on GitHub: https://github.com/cds-mipt/RbExplore
ResearchGate: https://www.researchgate.net/publication/355472808_Long-Term_Exploration_in_Persistent_MDPs
Ugadiarov, L., Skrynnik, A., Panov, A. I. Long-Term Exploration in Persistent MDPs // Advances in Soft Computing. MICAI 2021. Part I. Lecture Notes in Computer Science, Vol. 13067, 2021, pp. 108–120.