REinforcement LEarning in Non-stationary environmenTs (RELEvaNT)
Type: National Project
Duration: from 2022 Jan 01 to 2024 Jan 31
Financed by: FCT
Prime Contractor: INESC-ID Lisboa (Other) - Lisboa, Portugal
Reinforcement learning (RL), in both its classical and deep variants, relies critically on the assumption that the underlying system is stationary, i.e., that the way the environment responds to the agent's actions does not vary with time. Unfortunately, many real-world problems fail to satisfy this stationarity property. Therefore, to bring out the full potential of RL in complex application domains, existing methods must be extended to cope with non-stationarity in a principled way. RELEvaNT will investigate new models and methods for efficient deep RL in non-stationary environments and their potential applications in several "human-centered" domains. In particular, RELEvaNT will investigate:
- Model-based RL in which the learned model captures a low-dimensional factorized representation of the world. We will investigate the extent to which such low-dimensional representations enable the agent to cope robustly with changes in the dynamics of the world.
- Meta-learning approaches to model-based RL, aimed at making the process of learning the low-dimensional models above more data-efficient. In particular, we build on existing frameworks for model-agnostic meta-learning to construct pre-trained "prototypical" representations that can then be adjusted, at interaction time, from a small number of samples, effectively enabling the agent to adjust to changes in the environment (see the sketch after this list).

The outcomes of the project will be evaluated in a number of real-world non-stationary domains, exploiting the application of RL in robot control and human-robot interaction.
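To make the meta-learning direction concrete, here is a minimal sketch of model-agnostic meta-learning (MAML-style) applied to a learned dynamics model, assuming PyTorch 2.x. It is illustrative only, not the project's implementation: the names (DynamicsModel, adapt, meta_step) and the synthetic tasks are assumptions. A small network predicting the next state is meta-trained so that a few gradient steps on fresh transitions adapt it to changed dynamics.

```python
# Illustrative sketch, NOT the RELEvaNT codebase. Assumes PyTorch 2.x (torch.func).
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """Predicts the next state from a low-dimensional (state, action) input."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

def adapt(model, transitions, inner_lr=1e-2, steps=1):
    """Inner loop: adapt the model to one task from a few transitions."""
    params = {n: p for n, p in model.named_parameters()}
    s, a, s_next = transitions
    for _ in range(steps):
        pred = torch.func.functional_call(model, params, (s, a))
        loss = nn.functional.mse_loss(pred, s_next)
        # create_graph=True keeps the graph so the outer loop can differentiate
        # through the adaptation step (the core of MAML).
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
    return params

def meta_step(model, meta_opt, tasks):
    """Outer loop: update the 'prototypical' initialization across sampled tasks."""
    meta_opt.zero_grad()
    meta_loss = 0.0
    for support, query in tasks:  # (few adaptation samples, held-out samples)
        adapted = adapt(model, support)
        s, a, s_next = query
        pred = torch.func.functional_call(model, adapted, (s, a))
        meta_loss = meta_loss + nn.functional.mse_loss(pred, s_next)
    meta_loss.backward()
    meta_opt.step()

# Toy usage: tasks differ by a random drift in the (synthetic) dynamics.
model = DynamicsModel(state_dim=4, action_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def make_task():
    drift = torch.randn(4) * 0.1  # per-task change in dynamics
    def sample(n):
        s, a = torch.randn(n, 4), torch.randn(n, 2)
        return s, a, s + 0.1 * a.sum(-1, keepdim=True) + drift
    return sample(8), sample(32)  # small support set, larger query set

meta_step(model, opt, [make_task() for _ in range(4)])
```

The point the sketch highlights is that adaptation happens in the inner loop over only a few transitions, so an agent facing a change in the environment's dynamics can re-fit its model at interaction time rather than retraining from scratch.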
Partnerships
- INESC-ID Lisboa (Other) - Lisboa, Portugal