Skill Fusion in Hybrid Robotic Framework for Visual Object Goal Navigation

Authors

Panov A. I., Yakovlev K. S., Staroverov A. V., Muravyev K. F.

Abstract

In recent years, Embodied AI has become one of the main topics in robotics. To operate in human-centric environments, an agent needs the ability to explore previously unseen areas and to navigate to objects that humans want it to interact with. This task, which can be formulated as ObjectGoal Navigation (ObjectNav), is the main focus of this work. To solve this challenging problem, we suggest a hybrid framework, SkillFusion, consisting of both non-learnable and learnable modules and a switcher between them. The former are more accurate, while the latter are more robust to sensor noise. To mitigate the sim-to-real gap, which often arises with learnable methods, we suggest training them in such a way that they are less environment-dependent. As a result, our method showed top results both in the Habitat simulator and during evaluations on a real robot.
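The core idea of the framework described in the abstract — running a non-learnable skill and a learnable skill in parallel and switching to whichever is currently more reliable — can be illustrated with a minimal sketch. All names and the scoring interface below are illustrative assumptions, not the paper's actual API: each "skill" is modeled as a function returning a proposed action together with a self-assessed quality score, and the switcher simply picks the highest-scoring proposal.

```python
# Hedged sketch of a skill-switching policy. Assumed interface (not from
# the paper): a skill maps an observation dict to (action, score), and the
# fused policy executes the action of the highest-scoring skill.

def classical_skill(obs):
    # Placeholder for a non-learnable (e.g. map- and planner-based) skill:
    # accurate, but its usefulness degrades with sensor noise.
    return "follow_planned_path", 1.0 - obs["sensor_noise"]

def learned_skill(obs):
    # Placeholder for an RL-trained skill: more robust to noise,
    # scored here by the policy's own confidence estimate.
    return "policy_action", obs["policy_confidence"]

def skill_fusion(obs, skills):
    """Return the action proposed by the currently highest-scoring skill."""
    action, _score = max((skill(obs) for skill in skills),
                         key=lambda proposal: proposal[1])
    return action

if __name__ == "__main__":
    # With high sensor noise, the switcher falls back to the learned skill.
    obs = {"sensor_noise": 0.6, "policy_confidence": 0.7}
    print(skill_fusion(obs, [classical_skill, learned_skill]))
```

With noisy observations (classical score 0.4 vs. learned score 0.7), the switcher selects the learned skill's action; with clean sensors the classical skill would win instead.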

External links

DOI: 10.3390/robotics12040104

Read online at the MDPI publisher's site (in English): https://www.mdpi.com/2218-6581/12/4/104

Watch the presentation on the channel of the MIPT Center for Cognitive Modeling (in English):

How to cite

Staroverov, A.; Muravyev, K.; Yakovlev, K.; Panov, A. I. Skill Fusion in Hybrid Robotic Framework for Visual Object Goal Navigation. Robotics 2023, 12, 104. https://doi.org/10.3390/robotics12040104