Q-Mixing Network for Multi-agent Pathfinding in Partially Observable Grid Environments

Authors

Davydov V., Skrynnik A., Yakovlev K., Panov A.

Annotation

In this paper, we consider the problem of multi-agent navigation in partially observable grid environments. This problem is challenging for centralized planning approaches, as they typically rely on full knowledge of the environment. To this end, we suggest utilizing a reinforcement learning approach in which the agents first learn policies that map observations to actions and then follow these policies to reach their goals. To tackle the challenge of learning cooperative behavior, i.e., that in many cases agents need to yield to each other to accomplish the mission, we use a mixing Q-network that complements the learning of individual policies. In the experimental evaluation, we show that such an approach leads to plausible results and scales well to a large number of agents.
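
For illustration, below is a minimal sketch of a monotonic mixing Q-network in the spirit of QMIX, on which this kind of approach is based. It assumes a PyTorch implementation; the class name QMixer, the layer sizes, and the embedding dimension are illustrative choices, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QMixer(nn.Module):
    """Combines per-agent Q-values into a joint Q_tot. Mixing weights are
    produced from the global state by hypernetworks and kept non-negative,
    so Q_tot is monotonic in each individual Q-value."""

    def __init__(self, n_agents, state_dim, embed_dim=32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks: generate mixing weights and biases from the state.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1))

    def forward(self, agent_qs, state):
        # agent_qs: (batch, n_agents), chosen Q-values of individual agents
        # state:    (batch, state_dim), global state available during training
        bs = agent_qs.size(0)
        agent_qs = agent_qs.view(bs, 1, self.n_agents)
        # Absolute values enforce dQ_tot/dQ_i >= 0 (monotonicity constraint).
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(agent_qs, w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2      # (batch, 1, 1)
        return q_tot.view(bs, 1)

Usage, for example: mixer = QMixer(n_agents=4, state_dim=64); q_tot = mixer(torch.randn(8, 4), torch.randn(8, 64)) yields a (8, 1) tensor of joint Q-values that can be trained end-to-end with a standard temporal-difference loss.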

External links

DOI: 10.1007/978-3-030-86855-0_12

PDF at arXiv.org: https://arxiv.org/pdf/2108.06148

Read at Google Books: https://books.google.ru/books?id=_SJGEAAAQBAJ&pg=PA169

Microsoft Academic: https://academic.microsoft.com/paper/3203944359/

ResearchGate: https://www.researchgate.net/publication/355051699_Q-Mixing_Network_for_Multi-agent_Pathfinding_in_Partially_Observable_Grid_Environments

Semantic Scholar: https://api.semanticscholar.org/CorpusID:237048119

Reference link

Davydov V., Skrynnik A., Yakovlev K., Panov A. (2021) Q-Mixing Network for Multi-agent Pathfinding in Partially Observable Grid Environments. In: Kovalev S.M., Kuznetsov S.O., Panov A.I. (eds) Artificial Intelligence. RCAI 2021. Lecture Notes in Computer Science, vol 12948. Springer, Cham, pp. 169-179