Bridging Model-based Robust Control and Model-free Reinforcement Learning

Sunday, April 26, 2020 - 9:30am - 10:00am
Keller 3-180
Bin Hu (University of Illinois at Urbana-Champaign)
The design of modern intelligent systems relies heavily on techniques developed in the control and machine learning communities. On one hand, model-based robust control techniques are crucial for safety-critical systems. On the other hand, model-free reinforcement learning (RL) techniques have achieved impressive performance on a variety of artificial intelligence tasks. The development of next-generation intelligent systems such as self-driving cars, advanced robotics, and smart buildings requires leveraging these control and learning techniques in an efficient and safe manner. This talk will focus on fundamental connections between model-based robust control and model-free reinforcement learning. In the first half of the talk, we will give a robust control perspective on data-driven RL algorithms. We will present a unified robust control framework for the analysis of RL algorithms, including temporal difference learning and Q-learning. In the second half of the talk, we will discuss new performance guarantees for policy-based reinforcement learning methods on standard robust control tasks. In particular, we will present a policy optimization perspective on the mixed H2/H-infinity state feedback control problem. We will show that the natural policy gradient method can efficiently learn the global solution of this robust control problem, despite the lack of convexity and coercivity.
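As a concrete illustration of the robust control viewpoint on data-driven RL, the sketch below runs linear TD(0) on a small Markov reward process: the iteration theta_{k+1} = theta_k + alpha * delta_k * phi(s_k) can be read as a linear feedback system driven by sampling noise, which is the kind of interconnection such a framework analyzes. The chain, features, and step size here are illustrative assumptions, not material from the talk.

```python
import numpy as np

# Linear TD(0) on a small Markov reward process (all problem data below
# is randomly generated for illustration).
rng = np.random.default_rng(0)

n_states, n_feat, gamma, alpha = 5, 3, 0.9, 0.05
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
r = rng.random(n_states)               # expected reward per state
Phi = rng.random((n_states, n_feat))   # feature matrix (rows = phi(s))

theta = np.zeros(n_feat)
s = 0
for k in range(20000):
    s_next = rng.choice(n_states, p=P[s])
    # Temporal-difference error, then the TD(0) feedback update:
    # theta <- theta + alpha * delta * phi(s).
    delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += alpha * delta * Phi[s]
    s = s_next

print("learned value estimates:", Phi @ theta)
```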
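To make the policy optimization perspective concrete, here is a minimal sketch of the natural policy gradient iteration for the simpler LQR (pure H2) state feedback problem; the mixed H2/H-infinity problem treated in the talk layers an H-infinity constraint on top of an iteration of this kind. The system matrices, step size, and iteration count are illustrative assumptions, and the gradients are computed from the model via Lyapunov equations rather than estimated from data.

```python
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

# Hypothetical stable 2-state, 1-input system; all matrices are illustrative.
A = np.array([[0.9, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R, Sigma0 = np.eye(2), np.eye(1), np.eye(2)

K = np.zeros((1, 2))   # initial stabilizing gain (A itself is stable here)
lr = 0.05              # step size, small enough to preserve stability

for _ in range(500):
    Acl = A - B @ K
    # P: closed-loop value matrix solving P = Acl' P Acl + Q + K' R K.
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    # E is the natural gradient direction: the plain policy gradient is
    # 2 * E @ Sigma_K, and the Sigma_K^{-1} preconditioner cancels Sigma_K.
    E = (R + B.T @ P @ B) @ K - B.T @ P @ A
    K = K - lr * E

Acl = A - B @ K
P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
print("learned gain K:   ", K)
print("LQR cost:         ", np.trace(P @ Sigma0))

# Sanity check against the Riccati solution of the same LQR problem.
Pare = solve_discrete_are(A, B, Q, R)
Kopt = np.linalg.solve(R + B.T @ Pare @ B, B.T @ Pare @ A)
print("Riccati gain Kopt:", Kopt)
```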