M3PO: Massively Multi-Task Model-Based Policy Optimization

Authors

Makarov D. A., Panov A. I.

Abstract

We introduce Massively Multi-Task Model-Based Policy Optimization (M3PO), a scalable model-based reinforcement learning (MBRL) framework designed to address sample inefficiency in single-task settings and poor generalization in multi-task domains. Existing model-based approaches like DreamerV3 rely on pixel-level generative models that neglect control-centric representations, while model-free methods such as PPO suffer from high sample complexity and weak exploration. M3PO integrates an implicit world model, trained to predict task outcomes without observation reconstruction, with a hybrid exploration strategy that combines model-based planning and model-free uncertainty-driven bonuses. This eliminates the bias-variance trade-off in prior methods by using discrepancies between model-based and model-free value estimates to guide exploration, while maintaining stable policy updates through a trust-region optimizer. M3PO provides an efficient and robust alternative to existing model-based policy optimization approaches and achieves state-of-the-art performance across multiple benchmarks.
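As a rough illustration of the discrepancy-driven exploration bonus described in the abstract, the Python sketch below shapes rewards with the absolute gap between model-based and model-free value estimates. This is not code from the paper: the function name, scale coefficient, and toy numbers are all hypothetical, chosen only to make the idea concrete.

```python
import numpy as np

def exploration_bonus(v_model_based: np.ndarray,
                      v_model_free: np.ndarray,
                      scale: float = 0.1) -> np.ndarray:
    """Hypothetical bonus: the absolute discrepancy between model-based
    and model-free value estimates, scaled by a coefficient."""
    return scale * np.abs(v_model_based - v_model_free)

# Toy usage: states where the two estimates disagree receive a larger bonus,
# steering exploration toward regions where the world model is uncertain.
v_mb = np.array([1.0, 0.5, 2.0])   # values from planning in the implicit world model
v_mf = np.array([1.1, 0.5, 0.8])   # values from the model-free critic
rewards = np.array([0.0, 1.0, 0.0])
shaped_rewards = rewards + exploration_bonus(v_mb, v_mf)
print(shaped_rewards)              # [0.01, 1.0, 0.12]
```

In this reading, a large gap signals that the world model and the model-free critic disagree about a state's value, which the abstract uses as a cue for exploration while a trust-region optimizer keeps the policy updates stable.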

External links

DOI: 10.48550/arXiv.2506.21782

Download the paper PDF from arXiv.org (in English): https://arxiv.org/abs/2506.21782

How to cite

Aditya Narendra, Dmitry Makarov, Aleksandr Panov. M3PO: Massively Multi-Task Model-Based Policy Optimization // arXiv:2506.21782v1 [cs.LG], 26 Jun 2025.