In this paper, we consider the problem of multi-agent navigation in partially observable grid environments. This problem is challenging for centralized planning approaches, as they typically rely on full knowledge of the environment. To this end, we suggest a reinforcement learning approach in which the agents first learn policies that map observations to actions and then follow these policies to reach their goals. To tackle the challenge of learning cooperative behavior — in many cases agents need to yield to each other to accomplish a mission — we use a mixing Q-network that complements the learning of individual policies. In the experimental evaluation, we show that such an approach leads to plausible results and scales well to a large number of agents.
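The core idea of a QMIX-style mixing network is to combine per-agent Q-values into a joint Q_tot through a function that is monotone in each agent's Q-value, so that each agent can greedily maximize its own Q-value while jointly maximizing Q_tot. A minimal sketch of this monotonic mixing (the layer sizes, weights, and function names are illustrative; in the actual architecture the mixing weights are produced by state-conditioned hypernetworks):

```python
import math

def mix_q_values(agent_qs, w1, b1, w2, b2):
    """Monotonically mix per-agent Q-values into Q_tot.

    Monotonicity in each agent_qs[j] is enforced by taking absolute
    values of the mixing weights, as in QMIX. All parameters here are
    plain Python lists for illustration.
    """
    # Hidden layer: ELU(|W1| @ q + b1), one unit per row of w1.
    hidden = []
    for i in range(len(w1)):
        s = b1[i]
        for j, q in enumerate(agent_qs):
            s += abs(w1[i][j]) * q  # non-negative weight => monotone
        hidden.append(s if s > 0 else math.exp(s) - 1.0)  # ELU
    # Output layer: |w2| @ hidden + b2 gives the scalar Q_tot.
    q_tot = b2
    for i, h in enumerate(hidden):
        q_tot += abs(w2[i]) * h
    return q_tot
```

Because every weight applied to an agent's Q-value is non-negative, raising any individual Q-value can never decrease Q_tot, which is what makes decentralized greedy action selection consistent with the centralized objective.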
PDF on arXiv.org (English): https://arxiv.org/pdf/2108.06148
Read on Google Books (English): https://books.google.ru/books?id=_SJGEAAAQBAJ&pg=PA169
Microsoft Academic: https://academic.microsoft.com/paper/3203944359/
Semantic Scholar: https://api.semanticscholar.org/CorpusID:237048119
Davydov V., Skrynnik A., Yakovlev K., Panov A. (2021) Q-Mixing Network for Multi-agent Pathfinding in Partially Observable Grid Environments. In: Kovalev S.M., Kuznetsov S.O., Panov A.I. (eds) Artificial Intelligence. RCAI 2021. Lecture Notes in Computer Science, vol 12948. Springer, Cham, pp. 169-179.