In this study, we address the problem of enabling an artificial intelligence agent to execute complex language instructions within virtual environments. In our framework, we assume that these instructions involve intricate linguistic structures and multiple interdependent tasks that must be navigated successfully to achieve the desired outcomes. To manage these complexities effectively, we propose a hierarchical framework that combines the deep language comprehension of large language models with the adaptive action-execution capabilities of reinforcement learning agents. The language module, built on a large language model (LLM), translates the language instruction into a high-level action plan, which is then executed by a pre-trained reinforcement learning agent. We demonstrate the effectiveness of our approach in two different environments: IGLU, where agents are instructed to build structures, and Crafter, where agents perform tasks and interact with objects in the surrounding environment according to language commands.
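The abstract describes a two-level architecture: an LLM-based planner that decomposes an instruction into subgoals, and a goal-conditioned RL policy that executes them. The following minimal Python sketch illustrates how such a pipeline could fit together; every name in it (plan_with_llm, GoalConditionedPolicy, the gym-style env interface, and the "achieved" info key) is a hypothetical stand-in for illustration, not the paper's actual implementation.

```python
# A minimal sketch of the hierarchical instruction-following pipeline.
# All interfaces here are illustrative assumptions, not the authors' code.

from dataclasses import dataclass, field


@dataclass
class Subgoal:
    name: str                      # e.g. "collect_wood"
    params: dict = field(default_factory=dict)


def plan_with_llm(instruction: str) -> list[Subgoal]:
    """Stand-in for the LLM planner: maps a language instruction to an
    ordered list of high-level subgoals. A real system would prompt an
    LLM and parse its structured output."""
    # Hard-coded example decomposition, for illustration only.
    return [Subgoal("collect_wood", {"amount": 1}),
            Subgoal("craft_table")]


class GoalConditionedPolicy:
    """Stand-in for the pre-trained RL agent: selects low-level actions
    conditioned on the current observation and the active subgoal."""

    def act(self, observation, subgoal: Subgoal):
        return "noop"  # a trained policy would return a real action


def run_episode(env, policy: GoalConditionedPolicy, instruction: str,
                max_steps_per_goal: int = 100) -> bool:
    """Plan once, then pursue each subgoal with the goal-conditioned
    policy until the environment signals that it has been achieved."""
    obs = env.reset()
    for subgoal in plan_with_llm(instruction):
        for _ in range(max_steps_per_goal):
            action = policy.act(obs, subgoal)
            # Assumes a classic gym-style 4-tuple step interface.
            obs, reward, done, info = env.step(action)
            if info.get("achieved") == subgoal.name:
                break  # subgoal reached, move on to the next one
        else:
            return False  # subgoal timed out; instruction failed
    return True
```

Note that this sketch plans once up front; a practical system might replan when a subgoal fails or the environment changes.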
DOI: 10.3233/FAIA240545
Download the article (PDF) from the IOS Press publisher: https://ebooks.iospress.nl/volumearticle/69640
Download the conference proceedings (PDF) from the IOS Press publisher: https://ebooks.iospress.nl/doi/10.3233/FAIA392
Download the preprint (PDF) from arXiv.org: https://arxiv.org/abs/2407.09287
Zoya Volovikova, Alexey Skrynnik, Petr Kuderov, Aleksandr I. Panov. Instruction Following with Goal-Conditioned Reinforcement Learning in Virtual Environments // Proceedings of ECAI 2024, the 27th European Conference on Artificial Intelligence, 19–24 October 2024, Santiago de Compostela, Spain — Including 13th Conference on Prestigious Applications of Intelligent Systems (PAIS 2024). Volume 392. IOS Press, 2024. pp. 650–657.