Probabilistic Subgoal Representations for Hierarchical Reinforcement Learning
Wang, Vivienne Huiling; Wang, Tinghuai; Yang, Wenyan; Kämäräinen, Joni Kristian; Pajarinen, Joni (2024)
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:tuni-202502052008
Description
Peer reviewed
Abstract
In goal-conditioned hierarchical reinforcement learning (HRL), a high-level policy specifies a subgoal for the low-level policy to reach. Effective HRL hinges on a suitable subgoal representation function, which abstracts the state space into a latent subgoal space and induces varied low-level behaviors. Existing methods adopt a subgoal representation that provides a deterministic mapping from the state space to the latent subgoal space. Instead, this paper utilizes Gaussian Processes (GPs) for the first probabilistic subgoal representation. Our method employs a GP prior on the latent subgoal space to learn a posterior distribution over subgoal representation functions while exploiting long-range correlations in the state space through learnable kernels. This enables an adaptive memory that integrates long-range subgoal information from prior planning steps, allowing the agent to cope with stochastic uncertainties. Furthermore, we propose a novel learning objective to facilitate the simultaneous learning of probabilistic subgoal representations and policies within a unified framework. In experiments, our approach outperforms state-of-the-art baselines not only in standard benchmarks but also in environments with stochastic elements and under diverse reward conditions. Additionally, our model shows promising capabilities in transferring low-level policies across different tasks.
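The abstract's central mechanism, a GP posterior over latent subgoals conditioned on states from prior planning steps with learnable kernel hyperparameters, can be sketched in a few lines. The following is a minimal illustrative sketch in PyTorch, not the authors' implementation; the class name, the RBF kernel choice, the log-parameterization, and the memory interface are all assumptions made for exposition.

```python
import torch


class ProbabilisticSubgoalGP(torch.nn.Module):
    """Sketch of a GP posterior over latent subgoal representations.

    Keeps a memory of states from prior planning steps together with their
    latent subgoal codes and returns a Gaussian posterior over the latent
    subgoal at query states. Kernel hyperparameters are learnable, standing
    in for the paper's learnable kernels (names here are illustrative).
    """

    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        # Log-parameterized so the hyperparameters stay positive.
        self.log_lengthscale = torch.nn.Parameter(torch.zeros(state_dim))
        self.log_signal_var = torch.nn.Parameter(torch.zeros(1))
        self.log_noise_var = torch.nn.Parameter(torch.full((1,), -2.0))
        self.latent_dim = latent_dim

    def kernel(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # RBF kernel with a learnable per-dimension lengthscale; this is
        # where correlations across the state space are captured.
        scaled = (a.unsqueeze(1) - b.unsqueeze(0)) / self.log_lengthscale.exp()
        return self.log_signal_var.exp() * torch.exp(-0.5 * scaled.pow(2).sum(-1))

    def posterior(self, mem_states, mem_latents, query_states):
        """Posterior mean and variance of the latent subgoal at query states.

        mem_states:   (M, state_dim) states from prior planning steps
        mem_latents:  (M, latent_dim) their latent subgoal codes
        query_states: (Q, state_dim)
        """
        K = self.kernel(mem_states, mem_states)
        K = K + self.log_noise_var.exp() * torch.eye(len(mem_states))
        k_star = self.kernel(query_states, mem_states)        # (Q, M)
        L = torch.linalg.cholesky(K)
        alpha = torch.cholesky_solve(mem_latents, L)          # K^{-1} Y
        mean = k_star @ alpha                                 # (Q, latent_dim)
        v = torch.cholesky_solve(k_star.T, L)                 # K^{-1} k*^T
        var = self.log_signal_var.exp() - (k_star * v.T).sum(-1, keepdim=True)
        return mean, var.clamp_min(1e-6)


# Example: sample a subgoal for the high-level policy from the GP posterior.
gp = ProbabilisticSubgoalGP(state_dim=4, latent_dim=2)
mem_s, mem_z = torch.randn(32, 4), torch.randn(32, 2)  # adaptive memory
mean, var = gp.posterior(mem_s, mem_z, torch.randn(1, 4))
subgoal = mean + var.sqrt() * torch.randn_like(mean)
```

Under this reading, conditioning on the memory of earlier planning steps is what makes the representation adaptive and uncertainty-aware: query states far from anything in memory receive high posterior variance, which is the behavior the abstract invokes for coping with stochastic environments.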
Collections
- TUNICRIS-julkaisut [20161]