Senior Research Associate at the University of Bristol
A promising approach to improving the robustness and safety of reinforcement learning (RL) is to collect human feedback, thereby incorporating prior knowledge of the target environment. However, human feedback can be inconsistent and infrequent. In this proposed research visit, we explore approaches that cope with such inconsistency and infrequency by explicitly estimating the associated uncertainties. More precisely, we aim to incorporate the uncertainty in the estimated uncertainties (level-2 uncertainties) to build more robust and safe reinforcement learning with human feedback. We also consider how to apply this idea to related robust and safe RL settings, such as offline RL. Overall, we propose to build robust and safe reinforcement learning algorithms by employing a higher level of uncertainty estimation for human feedback.
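To make the notion of level-2 uncertainty concrete, the following minimal sketch (not from the proposal; the ensemble setup, the bootstrap construction, and the penalty weights are all illustrative assumptions) treats disagreement across an ensemble of reward models as the level-1 uncertainty, and a bootstrap over ensemble members as an estimate of the uncertainty of that uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K reward models, each trained on noisy human
# preference labels, score the same state-action pair. Their scores
# are stood in for here by random draws.
K = 8
ensemble_rewards = rng.normal(loc=1.0, scale=0.3, size=K)

# Level-1 uncertainty: disagreement (sample variance) across the ensemble.
level1 = ensemble_rewards.var(ddof=1)

# Level-2 uncertainty: because the ensemble is small, the variance
# estimate is itself uncertain; a bootstrap over ensemble members
# gives a simple estimate of its spread.
B = 2000
boot_vars = np.empty(B)
for b in range(B):
    resample = rng.choice(ensemble_rewards, size=K, replace=True)
    boot_vars[b] = resample.var(ddof=1)
level2 = boot_vars.std(ddof=1)

# A pessimistic (robust) reward could then penalise both levels;
# beta1 and beta2 are hypothetical penalty weights.
beta1, beta2 = 1.0, 1.0
robust_reward = ensemble_rewards.mean() - beta1 * np.sqrt(level1) - beta2 * level2
```

In this construction, a policy optimised against `robust_reward` is discouraged not only when the reward models disagree, but also when the disagreement itself is poorly estimated, which is one plausible reading of the proposal's level-2 idea.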