Many challenging reinforcement learning (RL) problems require designing a distribution of tasks on which effective policies can be trained. Such a distribution can be specified by a curriculum, which is intended to improve learning outcomes and accelerate training. We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning in which the task sequence is built from the success rate of each task. In this setting, each task is an algorithmically generated environment instance with a unique configuration. The algorithm selects the order of tasks that yields the fastest learning for agents: the probability of selecting a task for the next stage of training is determined by its performance score in previous stages. Experiments were carried out on the Partially Observable Grid Environment for Multiple Agents (POGEMA) and the Procgen benchmark. We demonstrate that SITP matches or surpasses the results of other curriculum design methods. Our method can be implemented with a handful of minor modifications to any standard RL framework and provides useful prioritization with minimal computational overhead.
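As a rough illustration of the idea, the Python sketch below shows a success-rate-driven task sampler. The class name, the moving-average update, and the r·(1−r) weighting rule are illustrative assumptions only, not the paper's reference implementation; see the paper for the actual SITP scoring scheme.

```python
import random

class SuccessInducedTaskSampler:
    """Hypothetical sketch of success-rate-based task prioritization.

    Tracks a per-task success rate and samples the next training task
    with a probability derived from that rate. Details (EMA smoothing,
    the weighting rule) are assumptions, not the authors' method.
    """

    def __init__(self, num_tasks, smoothing=0.1):
        self.num_tasks = num_tasks
        self.smoothing = smoothing  # EMA factor for per-task success rates
        self.success_rate = [0.0] * num_tasks

    def update(self, task_id, succeeded):
        # Exponential moving average of episode outcomes (1.0 = success).
        rate = self.success_rate[task_id]
        self.success_rate[task_id] = (
            (1.0 - self.smoothing) * rate + self.smoothing * float(succeeded)
        )

    def sample(self):
        # Assumed weighting: prioritize tasks with intermediate success
        # rates, on the premise that tasks which are neither solved
        # (rate near 1) nor hopeless (rate near 0) drive learning fastest.
        weights = [r * (1.0 - r) + 1e-3 for r in self.success_rate]
        return random.choices(range(self.num_tasks), weights=weights)[0]


# Usage: interleave sampling and updating in the training loop.
sampler = SuccessInducedTaskSampler(num_tasks=8)
task_id = sampler.sample()
# ... run an episode on task_id with any standard RL framework ...
sampler.update(task_id, succeeded=True)
```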
DOI: 10.1007/978-3-031-19493-1_8
Download PDF from arXiv: https://arxiv.org/abs/2301.00691
Download PDF or read online at ResearchGate: https://www.researchgate.net/publication/364628530_Reinforcement_Learning_with_Success_Induced_Task_Prioritization
Nesterova, M., Skrynnik, A., Panov, A. (2022). Reinforcement Learning with Success Induced Task Prioritization. In: Advances in Computational Intelligence. MICAI 2022. Lecture Notes in Computer Science, vol. 13612, pp. 97–107.